Inching towards AGI: How reasoning and deep research are expanding AI from statistical prediction to structured problem-solving




AI has evolved at an astonishing pace. What seemed like science fiction just a few years ago is now an undeniable reality. Back in 2017, my firm launched an AI Center of Excellence. AI was certainly getting better at predictive analytics, and many machine learning (ML) algorithms were being used for voice recognition, spam detection, spell checking and other applications — but it was early days. We believed then that we were only in the first inning of the AI game.

The arrival of GPT-3, and especially GPT-3.5 — which was tuned for conversational use and served as the basis for the first ChatGPT in November 2022 — was a dramatic turning point, now forever remembered as the “ChatGPT moment.”

Since then, there has been an explosion of AI capabilities from hundreds of companies. In March 2023, OpenAI released GPT-4, which Microsoft researchers said showed “sparks of AGI” (artificial general intelligence). By that time, it was clear that we were well beyond the first inning. Now, it feels like we are in the final stretch of an entirely different sport.

The flame of AGI

Two years on, the first flames of AGI are beginning to appear.

On a recent episode of the Hard Fork podcast, Dario Amodei — who has been in the AI industry for a decade, formerly as VP of research at OpenAI and now as CEO of Anthropic — said there is a 70 to 80% chance that we will have a “very large number of AI systems that are much smarter than humans at almost everything before the end of the decade, and my guess is 2026 or 2027.”

Anthropic CEO Dario Amodei appearing on the Hard Fork podcast. Source: https://www.youtube.com/watch?v=YhGUSIvsn_Y 

The evidence for this prediction is becoming clearer. Late last summer, OpenAI launched o1, the first “reasoning model.” It has since released o3, and other companies, including Google and, famously, DeepSeek, have rolled out reasoning models of their own. Reasoners use chain-of-thought (CoT) techniques, breaking complex tasks down at run time into multiple logical steps, just as a human might approach a complicated problem. Sophisticated AI agents, including OpenAI’s deep research and Google’s AI co-scientist, have recently appeared, portending huge changes in how research will be performed.

Unlike earlier large language models (LLMs), which primarily pattern-matched against their training data, reasoning models represent a fundamental shift from statistical prediction to structured problem-solving. By composing intermediate steps, they can tackle novel problems that lie beyond their training data, enabling genuine reasoning rather than advanced pattern recognition.
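For readers who want to see what chain-of-thought prompting looks like in practice, here is a minimal sketch using OpenAI’s Python SDK. The model name, prompt wording and sample question are illustrative assumptions, not details from this article; dedicated reasoning models such as o1 perform this kind of decomposition internally at run time rather than relying on the user to ask for it.

```python
# Minimal sketch: direct prompting vs. explicit chain-of-thought prompting.
# Assumptions: the OpenAI Python SDK (openai>=1.0) is installed, OPENAI_API_KEY
# is set in the environment, and "gpt-4o" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

question = "A store sells pens at 3 for $2. How much do 18 pens cost?"

# Direct prompt: a one-shot answer, closer to pure statistical prediction.
direct = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: the model is asked to break the task into
# explicit intermediate steps before committing to a final answer.
cot = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": question
        + " Think step by step and show each intermediate calculation "
        "before giving the final answer.",
    }],
)

print("Direct answer:\n", direct.choices[0].message.content)
print("Chain-of-thought answer:\n", cot.choices[0].message.content)
```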

I recently used deep research for a project and was reminded of Arthur C. Clarke’s observation: “Any sufficiently advanced technology is indistinguishable from magic.” In five minutes, this AI produced what would have taken me 3 to 4 days. Was it perfect? No. Was it close? Yes, very. These agents are quickly becoming truly magical and transformative, and they are among the first of many similarly powerful agents that will soon come to market.

The most common definition of AGI is a system capable of doing almost any cognitive task a human can do. These early agents of change suggest that Amodei and others who believe we are close to that level of AI sophistication could be correct, and that AGI will be here soon. This reality will lead to a great deal of change, requiring people and processes to adapt in short order. 

But is it really AGI?

There are various scenarios that could emerge from the near-term arrival of powerful AI, and it is both challenging and frightening that we do not really know how this will go. New York Times columnist Ezra Klein addressed this on a recent podcast: “We are rushing toward AGI without really understanding what that is or what that means.” He claims there is little critical thinking or contingency planning around the implications, such as what this would truly mean for employment.

Of course, there is another perspective on this uncertain future and lack of planning, as exemplified by Gary Marcus, who believes deep learning generally (and LLMs specifically) will not lead to AGI. Marcus issued what amounts to a takedown of Klein’s position, citing notable shortcomings in current AI technology and suggesting it is just as likely that we are a long way from AGI.

Marcus may be correct, but this might also simply be an academic dispute over semantics. As an alternative to the term AGI, Amodei refers to “powerful AI” in his Machines of Loving Grace blog, as it conveys a similar idea without the imprecise definition, “sci-fi baggage and hype.” Call it what you will, but AI is only going to grow more powerful.

Playing with fire: The possible AI futures

In a 60 Minutes interview, Alphabet CEO Sundar Pichai said he thought of AI as “the most profound technology humanity is working on. More profound than fire, electricity or anything that we have done in the past.” That certainly fits with the growing intensity of AI discussions. Fire, like AI, was a world-changing discovery that fueled progress but demanded control to prevent catastrophe. The same delicate balance applies to AI today.

A discovery of immense power, fire transformed civilization by enabling warmth, cooking, metallurgy and industry. But it also brought destruction when uncontrolled. Whether AI becomes our greatest ally or our undoing will depend on how well we manage its flames. To extend the metaphor, three scenarios could emerge from even more powerful AI:

  1. The controlled flame (utopia): In this scenario, AI is harnessed as a force for human prosperity. Productivity skyrockets, new materials are discovered, personalized medicine becomes available for all, goods and services become abundant and inexpensive, and individuals are freed from drudgery to pursue more meaningful work and activities. This is the scenario championed by many accelerationists, in which AI brings progress without engulfing us in too much chaos.
  2. The unstable fire (challenging): Here, AI brings undeniable benefits — revolutionizing research, automation, new capabilities, products and problem-solving. Yet these benefits are unevenly distributed — while some thrive, others face displacement, widening economic divides and stressing social systems. Misinformation spreads and security risks mount. In this scenario, society struggles to balance promise and peril. It could be argued that this description is close to present-day reality.
  3. The wildfire (dystopia): The third path is one of disaster, the possibility most strongly associated with so-called “doomers” and “probability of doom” assessments. Whether through unintended consequences, reckless deployment or AI systems running beyond human control, AI actions go unchecked and accidents happen. Trust in truth erodes. In the worst-case scenario, AI spirals out of control, threatening lives, industries and entire institutions.

While each of these scenarios appears plausible, it is disconcerting that we really do not know which is most likely, especially since the timeline could be short. We can see early signs of each: AI-driven automation increasing productivity, misinformation spreading at scale and eroding trust, and concerns mounting over deceptive models that resist their guardrails. Each scenario would demand its own adaptations from individuals, businesses, governments and society.

Our lack of clarity on AI’s trajectory suggests that some mix of all three futures is inevitable. The rise of AI will lead to a paradox, fueling prosperity while bringing unintended consequences. Amazing breakthroughs will occur, as will accidents. New fields will appear with tantalizing possibilities and job prospects, while some stalwarts of the economy will fade into bankruptcy.

We may not have all the answers, but the future of powerful AI and its impact on humanity is being written now. What we saw at the recent Paris AI Action Summit was a mindset of hoping for the best, which is not a smart strategy. Governments, businesses and individuals must shape AI’s trajectory before it shapes us. The future of AI won’t be determined by technology alone, but by the collective choices we make about how to deploy it.

Gary Grossman is EVP of technology practice at Edelman.


