
Why I’m Feeling the AGI

Here are some things that I believe about artificial intelligence:

I believe that over the past several years, AI systems have begun surpassing humans in a number of domains (math, coding and medical diagnosis, to name just a few) and that they are getting better every day.

I believe that very soon, probably in 2026 or 2027 but possibly as soon as this year, one or more AI companies will claim to have created an artificial general intelligence, or AGI, which is usually defined as something like “a general-purpose AI system that can do almost all cognitive tasks a human can do.”

I believe that when AGI is announced, there will be debates over definitions and arguments about whether or not it counts as “real” AGI, but that these mostly won’t matter, because the broader point, that we are losing our monopoly on human-level intelligence and transitioning to a world with very powerful AI systems in it, will be true.

I believe that over the next decade, powerful AI will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it, and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they are spending to get there first.

I believe that most people and institutions are totally unprepared for the AI systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.

I believe that hardened AI skeptics, who insist that the progress is all smoke and mirrors and who dismiss AGI as a delusional fantasy, are not only wrong on the merits but are giving people a false sense of security.

I believe that whether you think AGI will be great or terrible for humanity (and honestly, it may be too early to say), its arrival raises important economic, political and technological questions to which we currently have no answers.

I believe that the right time to start preparing for AGI is now.

This may all sound crazy. But I did not arrive at these views as a starry-eyed futurist, an investor hyping his AI portfolio or a person who took too many magic mushrooms and watched “Terminator 2.”

I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful AI systems, the investors funding them and the researchers studying their effects. And I have come to believe that what is happening in AI right now is bigger than most people understand.

In San Francisco, where I am based, the idea of AGI is not fringe or exotic. People here talk about “feeling the AGI,” and building AI systems smarter than humans has become the explicit goal of some of Silicon Valley’s biggest companies. Every week, I meet engineers and entrepreneurs working on AI who tell me that change (big change, world-shaking change, the kind of transformation we have never seen before) is just around the corner.

“Over the past year or two, what used to be called ‘short timelines’ (thinking that AGI would probably be built this decade) has become a near-consensus,” Miles Brundage, an independent AI policy researcher who left OpenAI last year, told me recently.

Outside the Bay Area, few people have even heard of AGI, let alone started planning for it. And in my industry, journalists who take AI progress seriously still risk being mocked as gullible or as industry shills.

Honestly, I get the skepticism. Even though we now have AI systems contributing to Nobel Prize-winning breakthroughs, and even though 400 million people a week are using ChatGPT, a lot of the AI that people encounter in their daily lives is a nuisance. I sympathize with people who see AI slop plastered all over their Facebook feeds, or who have a clumsy interaction with a customer service chatbot and think: This is what’s going to take over the world?

I used to scoff at these ideas, too. But I have come to believe that I was wrong. A few things have persuaded me to take AI progress more seriously.

The most disorienting thing about today’s AI industry is that the people closest to the technology, the employees and executives of the leading AI labs, tend to be the most worried about how fast it is improving.

This is quite unusual. Back in 2010, when I was covering the rise of social media, nobody inside Twitter, Foursquare or Pinterest was warning that their apps could cause societal chaos. Mark Zuckerberg was not testing Facebook to find evidence that it could be used to create novel bioweapons or carry out autonomous cyberattacks.

But today, the people with the best information about AI progress, the people building powerful AI who have access to more advanced systems than the general public sees, are telling us that big change is near. The leading AI companies are actively preparing for AGI’s arrival, and are studying potentially scary properties of their models, such as whether they are capable of scheming and deception, in anticipation of their becoming more capable and autonomous.

Sam Altman, the chief executive of OpenAI, has written that “systems that start to point to AGI are coming into view.”

Demis Hassabis, the chief executive of Google DeepMind, has said AGI is probably “three to five years away.”

Dario Amodei, the chief executive of Anthropic (who does not like the term AGI but agrees with the general principle), told me last month that he believed we were close to having “a very large number of AI systems that are much smarter than humans at almost everything.”

Perhaps we should discount these predictions. After all, AI executives stand to profit from inflated AGI hype, and might have incentives to exaggerate.

But a lot of independent experts, including Geoffrey Hinton and Yoshua Bengio, two of the world’s most influential AI researchers, and Ben Buchanan, who was the Biden administration’s top AI expert, are saying similar things. So are a host of other prominent economists, mathematicians and national security officials.

To be fair, some experts doubt that AGI is imminent. But even if you ignore everyone who works at an AI company or has a vested stake in the outcome, there are still enough credible independent voices with short AGI timelines that we should take them seriously.

To me, even more persuasive than expert opinion is the evidence that today’s AI systems are improving quickly, in ways that are fairly obvious to anyone who uses them.

In 2022, when OpenAI released ChatGPT, the leading AI models struggled with basic arithmetic, frequently failed at complex reasoning problems and often “hallucinated,” or made up nonexistent facts. Chatbots of that era could do impressive things with the right prompting, but you would never use one for anything critically important.

Today’s AI models are much better. Now, specialized models are putting up medalist-level scores on the International Math Olympiad, and general-purpose models have gotten so good at complex problem-solving that we have had to create new, harder tests to measure their capabilities. Hallucinations and factual mistakes still happen, but they are rarer with newer models. And many businesses now trust AI models enough to build them into core, customer-facing functions.

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

Some of that improvement is a function of scale. In AI, bigger models, trained using more data and processing power, tend to produce better results, and today’s leading models are significantly bigger than their predecessors.

But it also stems from breakthroughs that AI researchers have made in recent years, most notably the advent of “reasoning” models, which are built to take an additional computational step before giving a response.

Reasoning models, which include OpenAI’s o1 and DeepSeek’s R1, are trained to work through complex problems, and are built using reinforcement learning, a technique that was also used to teach AI to play board games at a superhuman level. They appear to succeed at things that tripped up previous models. (Just one example: GPT-4o, a standard model released by OpenAI, scored 9 percent on AIME 2024, a set of extremely hard competition math problems; o1, a reasoning model that OpenAI released several months later, scored 74 percent on the same test.)

As these tools improve, they are becoming useful for many kinds of white-collar knowledge work. My colleague Ezra Klein recently wrote that the outputs of ChatGPT’s Deep Research, a premium feature that produces complex analytical briefs, were “at least the median” of the human researchers he had worked with.

I have also found many uses for AI tools in my own work. I do not use AI to write my columns, but I use it for plenty of other things: preparing for interviews, summarizing research papers, building personalized apps to help me with administrative tasks. None of this was possible a few years ago. And I find it implausible that anyone who uses these systems regularly for serious work could conclude that they have hit a plateau.

If you really want to grasp how much better AI has gotten recently, talk to a programmer. A year or two ago, AI coding tools existed, but they were aimed more at speeding up human coders than at replacing them. Today, software engineers tell me that AI does most of the actual coding for them, and that they increasingly feel their job is to supervise the AI systems.

Jared Friedman, a partner at Y Combinator, a start-up accelerator, recently said a quarter of the accelerator’s current batch of start-ups were using AI to write nearly all of their code.

“A year ago, they would have built their product from scratch, but now 95 percent of it is built by AI,” he said.

In the spirit of epistemic humility, I should say that I, and many others, could be wrong about our timelines.

Maybe AI progress will hit a bottleneck we were not expecting, such as an energy shortage that prevents AI companies from building bigger data centers, or limited access to the powerful chips used to train AI models. Maybe today’s model architectures and training techniques cannot take us all the way to AGI, and more breakthroughs are needed.

But even if AGI arrives a decade later than I expect, in 2036 rather than 2026, I believe we should start preparing for it now.

Most of the advice I have heard about how institutions should prepare for AGI boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for AI-designed drugs, writing regulations to prevent the most serious AI harms, teaching AI literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without AGI.

Some tech leaders worry that premature fears about AGI will cause us to regulate AI too aggressively. But the Trump administration has signaled that it wants to speed up AI development, not slow it down. And with hundreds of billions of dollars being spent to build the next generation of AI models, and more on the way, it seems unlikely that the leading AI companies will voluntarily pump the brakes.

I am not worried about individuals overpreparing for AGI, either. A bigger risk, I think, is that most people will not realize that powerful AI is here until it is staring them in the face: eliminating their jobs, ensnaring them in a scam, harming them or someone they love. This is, broadly, what happened during the social media era, when we failed to recognize the risks of tools like Facebook and Twitter until they were too big and entrenched to change.

That is why I believe in taking the possibility of AGI seriously now, even though we do not know exactly when it will arrive or precisely what form it will take.

If we are in denial, or if we are simply not paying attention, we could lose the chance to shape this technology when it matters most.
