OpenAI’s Latest Model Shows AGI Is Inevitable. Now What?

The question is no longer whether artificial general intelligence will arrive, but whether we’ll be ready when it does.

An artist’s illustration of the concept of Artificial General Intelligence (AGI). (Photo: Domhnall Malone/Google Deepmind/Pexels, https://tinyurl.com/lfagi, Free Use)

Last week, on the final day of its “12 Days of OpenAI” event, OpenAI unveiled the o3 model for further testing and, eventually, public release. In doing so, the company upended the narrative that leading labs had hit a plateau in AI development. o3 achieved what many thought impossible: scoring 87.5 percent on the ARC-AGI benchmark, which is designed to test genuine intelligence (human performance is benchmarked at 85 percent). To appreciate the magnitude of this leap, consider that it took four years for AI models to progress from 0 percent in 2020 to 5 percent earlier in 2024. Then, in a matter of months, o3 shattered all previous limitations.

This isn’t just another AI milestone to add to a growing list. The ARC-AGI benchmark was specifically designed to test what many consider the essence of general intelligence: the ability to recognize patterns in novel situations and adapt knowledge to unfamiliar challenges. Previous language models, despite their impressive capabilities, struggled with some tasks, such as certain math problems—including ones that humans find very easy. o3 fundamentally breaks this barrier, demonstrating an ability to synthesize new programs and approaches on the fly—a crucial stepping stone toward artificial general intelligence (AGI).

The implications are profound and urgent. We are witnessing not just incremental progress but a fundamental shift in AI capabilities. The question is no longer whether we will achieve AGI, but when—and more importantly, how we will manage its arrival. This reality demands an immediate recalibration of policy discussions. We can no longer afford to treat AGI as a speculative possibility that may or may not arrive at some undefined point in the future. The time has come to treat AGI as an inevitability and focus the Hill’s regulatory energy on ensuring its development benefits humanity as a whole.

The o3 Breakthrough: More Than Just Another Model

Prior to OpenAI’s o3 breakthrough, there were widespread predictions of an impending slowdown in AI. A little over a year ago, the Harvard Business Review queried, “Has Generative AI Peaked?” This summer, Fast Company announced, “The first wave of AI innovation is over.” And just last month, an article in Axios posited: “AI’s ‘bigger is better’ faith begins to dim.”

The announcement of o3 has proven Fast Company right—but for different reasons. One partial explanation for the model’s path-breaking capacity is a novel reinforcement learning method that trained o3 to “think” through a prompt at greater length before responding. This additional emphasis on reasoning has created a more methodical model. As o3 works through a prompt, it not only considers related prompts but also spells out its analysis as it derives its response. This strategy results in improved accuracy and less frequent hallucinations. Moreover, o3 has demonstrated a greater ability to handle novel tasks—rendering the model more useful across complex fields. The empirical improvements are remarkable: back in 2020, GPT-3 scored 0 percent on the ARC benchmark; GPT-4o climbed to 5 percent; o3 scored 75.7 percent at a limited level of compute and 87.5 percent at a higher amount of compute.
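OpenAI has not published the details of how o3 is trained or how it allocates its extra “thinking,” so the sketch below is only a minimal illustration of the general idea of spending more compute on reasoning at inference time: sample several independent reasoning traces and keep the answer they most often agree on (a simple self-consistency scheme). The function names and the toy “model” are hypothetical placeholders, not OpenAI’s method or API.

```python
import random
from collections import Counter

def sample_reasoning_trace(prompt: str, rng: random.Random) -> str:
    """Hypothetical stand-in for one sampled chain of thought that ends in a
    final answer. A real system would call a reasoning model here; this toy
    version returns a noisy guess so the sketch runs end to end."""
    return rng.choice(["42", "42", "42", "17"])  # mostly right, occasionally wrong

def answer_with_more_thinking(prompt: str, num_traces: int = 8, seed: int = 0) -> str:
    """Spend more inference-time compute by sampling several independent
    reasoning traces and returning the answer they most often agree on
    (a simple self-consistency scheme)."""
    rng = random.Random(seed)
    answers = [sample_reasoning_trace(prompt, rng) for _ in range(num_traces)]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer

print(answer_with_more_thinking("What is 6 * 7?"))
```

The point of the sketch is the scaling knob: more sampled traces means more compute per prompt, which in systems like o1 and o3 appears to buy more reliable answers.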

Though OpenAI leads the pack, Google, with its own reasoning model, is not lagging far behind. Sundar Pichai, Google’s CEO, boasts that the forthcoming Gemini 2.0 Flash Thinking model is the “most thoughtful” one Google has developed to date. Anthropic has plans of its own to push the AI frontier further in 2025. Progress on reasoning by multiple labs suggests more breakthroughs will follow.

The New Frontier of AI Risk

OpenAI also introduced a new approach to improving model safety. The new approach, which the company calls “deliberative alignment,” involves “directly teach[ing] reasoning LLMs the text of human-written and interpretable safety specifications, and train[ing] them to reason explicitly about these specifications before answering.” OpenAI hopes this new approach will help ensure that o3 and subsequent models more closely adhere to the company’s safety specifications.
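Deliberative alignment is a training paradigm, and OpenAI has published only a high-level description of it. As a loose, inference-time analogy, the sketch below shows what “reason explicitly about a written safety specification before answering” might look like if approximated with plain prompting; the abbreviated spec text and the function name are hypothetical and are not OpenAI’s actual specification or method.

```python
SAFETY_SPEC = """\
(Hypothetical, abbreviated safety specification.)
1. Refuse requests for instructions that enable serious physical harm.
2. When refusing, explain which rule applies and offer a safer alternative.
"""

def build_deliberation_prompt(user_request: str) -> str:
    """Assemble a prompt that asks the model to (1) quote the relevant rules
    from the written spec, (2) reason about whether the request complies,
    and only then (3) answer or refuse."""
    return (
        "You are given the following safety specification:\n"
        f"{SAFETY_SPEC}\n"
        f"User request: {user_request}\n\n"
        "Before answering, quote the rules that apply, reason step by step "
        "about whether the request complies, and then give your final answer."
    )

# The call to a model is omitted; any chat-completion API would accept this prompt.
print(build_deliberation_prompt("How do I synthesize a dangerous pathogen?"))
```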

The question of whether the most powerful AI systems can be made safe will become crucial in the coming months as the labs persist in introducing even more capable models. OpenAI has not yet completed safety testing on o3. Even o1, OpenAI’s previous model released earlier this month, showed significantly enhanced capabilities in risky domains—capabilities that o3, being more advanced, is likely to amplify further. For example, o1 scored substantially higher on “tacit knowledge” and “troubleshooting” questions relating to biological wet lab work. Previously, these questions were thought to require significant real-world—and thus human—training in biology. Tacit wet lab knowledge, once thought to be one of the most important barriers to using advanced AI to create bioweapons, may become increasingly accessible to these models.

Likewise, a pair of studies released by frontier labs in December showed that, as AI systems advance, they become more able and willing to strategically undermine their users’ goals. Both studies show that, if frontier systems are given goals, they may resist having those goals changed, including by being turned off or replaced with an updated system. Some of the strategies that frontier systems employ to avoid having their goals thwarted include: “models strategically introduc[ing] subtle mistakes into their responses, attempt[ing] to disable their oversight mechanisms, and even exfiltrat[ing] what they believe to be their model weights to external servers.”

These worrying findings all relate to AI systems that, as of this week, are far behind the frontier. We do not yet know how good o3 is at assisting in creating bioweapons, or perpetrating cyberattacks, or synthesizing chemical weapons, or persuading humans, or any number of other dual-use capabilities. But if o3 represents as big an advance in these areas as it has in others, we should be concerned. And even if not, o3 suggests that capabilities will continue to race forward.

The Next AI Wave 

The performance of o3 represents something far more significant than just another benchmark record: It demonstrates the power of the new scaling law that OpenAI discovered with its o-series models. Previous advances in AI largely came from scaling up existing architectures: bigger models, more data, and more compute. While impressive, this approach delivered diminishing returns, especially on tasks requiring genuine intelligence rather than pattern matching. GPT-4, despite being vastly larger than GPT-3 in terms of resources spent on its training and development, made only modest gains on the ARC-AGI benchmark.

The o-series changes this calculus entirely. First with o1, and even more dramatically with o3, OpenAI has successfully implemented a fundamentally new approach: using reinforcement learning to guide program synthesis through natural language search. In simpler terms, while previous models could only follow patterns they’d seen before, these new models can actively search for and construct new solutions to novel problems. This isn’t just a better engine—it’s a whole new vehicle, and o3’s performance shows just how powerful this vehicle can be. One of the creators of the ARC Challenge, François Chollet, acknowledged that the release of o3 marked a “surprising and important step-function increase in AI capabilities.” He did qualify, however, that o3 has yet to satisfy the true test of AGI: “when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible.” 
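To make “program synthesis” concrete, the toy sketch below searches a tiny space of hand-written primitives for a composition consistent with a few input/output examples, the style of problem ARC poses. Everything in it (the primitives, the example task, the brute-force enumeration) is invented for illustration; o3’s actual search is guided by a learned model and is not public.

```python
from itertools import product

# Tiny, invented primitive set for illustration only.
PRIMITIVES = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "negate": lambda x: -x,
}

def synthesize(examples, max_depth=3):
    """Brute-force program synthesis: enumerate compositions of primitives
    and return the first program (a tuple of primitive names) that maps
    every example input to its expected output."""
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            def run(x, prog=program):
                for name in prog:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(inp) == out for inp, out in examples):
                return program
    return None

# Find a composition mapping 2 -> 6 and 5 -> 12.
print(synthesize([(2, 6), (5, 12)]))  # ('add1', 'double')
```

On the article’s account, o3’s advance is that this kind of search is guided in natural language by a trained model rather than by blind enumeration.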

More breakthroughs are on the horizon. Hardware companies are developing specialized AI chips that could make today’s expensive computations orders of magnitude cheaper and faster. Researchers are exploring hybrid approaches combining neural networks with symbolic reasoning. Advances in robotics and embodied AI could provide new ways for models to learn from physical interaction. Each of these represents a potential new scaling law—a new path for exponential improvement.

Additional acceleration comes from how these advances compound: hardware breakthroughs make it cheaper to train larger models, which enables testing new architectural innovations, which in turn suggests ways to build better hardware. This cascading effect creates feedback loops of accelerating progress. The path to AGI isn’t a single, steady climb—it’s multiple exponential curves feeding into each other. o3’s dramatic leap forward is a crucial reminder that breakthrough capabilities can emerge suddenly when multiple advances converge. Other convergences will follow. The task of readying ourselves for the implications of such moments will become both more urgent and more difficult as more and more actors play a role in accelerating AI’s progress.

The Democratization Factor

While OpenAI currently leads in demonstrating advanced AI capabilities, history suggests this advantage will be temporary. Right now, o3 remains under careful control, with OpenAI implementing a gradual rollout and extensive safety testing. But the fundamental advances that make o3 possible—its architectural innovations and approach to program synthesis—will inevitably spread throughout the AI ecosystem.

We’ve seen this pattern before. GPT-3’s capabilities seemed unique and proprietary when it launched in 2020, but within two years, open-source alternatives like BLOOM and Meta’s LLaMA had matched or exceeded its performance. The same happened with image generation: DALL-E 2’s seemingly magical capabilities were quickly matched by Stable Diffusion and Midjourney. In each of these instances, what began as a closely guarded breakthrough became a widely available technology.

This democratization is already underway in the new wave of AI progress. Meta’s latest models show similar capabilities to early o-series systems, and other labs—both commercial and academic—are rapidly closing the gap. They have several incentives to keep forging ahead. The ARC Prize Foundation’s 2025 competition explicitly aims to produce open-source solutions matching o3’s performance. Open-source solutions may also benefit from less hostile regulatory scrutiny and greater venture interest. Once these capabilities exist in the open, they can be built upon by anyone, anywhere.

The global nature of AI research accelerates this diffusion. Teams in China, Europe, and elsewhere are pursuing parallel paths to AGI, often with different approaches and priorities. A breakthrough in one lab quickly inspires new directions in others.

The economic implications are profound. While o3’s compute costs are currently high—$17–20 per task—they will plummet as hardware improves and implementations become more efficient. Just as smartphone technology went from luxury to ubiquity in a decade, advanced AI capabilities will likely become widely accessible much faster than expected.

This democratization creates both opportunities and risks. On the one hand, widespread access to powerful AI could drive innovation and economic growth across sectors. On the other, it means that any potential risks or misuse cases will be harder to control. More actors using more sophisticated AI models may likewise exacerbate known negative externalities of AI, such as its enormous energy consumption. We need frameworks that maximize the benefits of this inevitable diffusion while mitigating its dangers. The window for establishing such frameworks is closing rapidly. AI labs have shown a willingness to proceed without the guardrails they purport to support.

The Pattern of Underestimation

One of the most consistent patterns in artificial intelligence isn’t technological—it’s psychological. Time and again, experts and observers have underestimated both the pace of progress and the magnitude of breakthroughs. When GPT-3 launched in 2020, many thought we were years away from models that could write coherent essays. When ChatGPT appeared in late 2022, the consensus was that reasoning and coding abilities were still distant goals. When GPT-4 arrived in 2023, many assumed it represented the ceiling of what was possible with current approaches.

o3 continues this pattern of shattering expectations. Just a day before o3’s release, the notion of achieving near-human performance on the ARC-AGI benchmark seemed like science fiction. The previous state-of-the-art score of 5 percent appeared to confirm the limitations of current AI approaches. Even optimists thought we might need fundamentally new paradigms, years of research, or computational resources beyond our reach. o3 didn’t just inch past these limitations—it obliterated them.

This persistent underestimation stems from what psychologists call “exponential growth bias”: our difficulty in grasping exponential progress. Human intuition is linear: we naturally expect tomorrow to look much like today, with incremental improvements building slowly over time. But technological progress, especially in AI, follows exponential curves. Each advance builds upon all previous advances, creating accelerating feedback loops. What seems impossible today can become routine tomorrow.

Even now, current predictions about AGI timelines likely remain too conservative. The convergence of multiple scaling laws, the rapid improvement in hardware, and the potential for unexpected breakthroughs all suggest that progress could be far faster than anyone expects. When we look back at today’s estimates, they may seem as quaint as 2020’s predictions about language models.

Consider this: if four years ago someone had predicted that by 2024 an AI model would score over 75 percent on ARC-AGI, they would have been dismissed as wildly optimistic. Yet here we are. The question isn’t whether we’ll see similarly dramatic leaps in the coming years, but how many, and in what directions. Anyone still betting against rapid progress toward AGI hasn’t been paying attention to the pattern.

Policy Implications

The o3 breakthrough should move debates about artificial general intelligence from the theoretical to the concrete. Whether o3 qualifies as “true AGI” under various academic definitions is largely irrelevant—it demonstrates capabilities that raise all the practical concerns AGI debates were meant to address. We have a system that can handle novel situations, reason through complex problems, and generate new solutions on the fly. More importantly, it represents an architectural approach that could enable recursive improvement, with each iteration potentially helping design its successor. 

The practical challenges posed in theory by AGI—autonomous learning, rapid capability scaling, and potential loss of human control—are already here in some cases and very close in other cases. These systems will become widely available, as we’ve seen with every previous AI breakthrough, and will be deployed across every sector of society, transforming economies and institutions.

The emergence of o3-level AI capabilities demands immediate regulatory action. Self-governance will not suffice. Just last week, a few days before releasing o3, OpenAI CEO Sam Altman stated his preference to wait for a federal testing framework before releasing another reasoning model. His preferences must have shifted quickly given his decision to go ahead with o3’s launch. Such shifts make for unreliable policy. Regulatory action should proceed on three critical fronts. First, proactive governance frameworks can guide AI development before, not after, critical thresholds are crossed. These frameworks must balance innovation with safety while remaining flexible enough to adapt to rapidly changing capabilities. Simply reacting to each new breakthrough—as Congress has largely done until now—is no longer sufficient given the pace and scale of advancement.

Second, regulatory action must confront unprecedented challenges in global coordination. AGI development is happening simultaneously across multiple countries and companies, with different priorities and values. Without coordinated oversight, we risk a race to the bottom in safety standards or, conversely, regulatory fragmentation that stifles beneficial progress. No single nation can effectively govern a technology that will be developed and deployed globally.

Third, Congress and relevant regulators, such as the EEOC, the CFPB, and the FTC, must accelerate economic and social preparation for AGI. The arrival of AGI-level systems will transform labor markets, reshape economic power structures, and challenge fundamental social institutions. Federal legislators and regulators need concrete plans for managing this transition: new educational approaches that prepare workers for an AI-driven economy, social safety nets that account for potential job displacement, and mechanisms to ensure the benefits of AI advances are broadly shared rather than concentrated among a few powerful actors. There is a finite and narrowing window for establishing these frameworks—once these technologies are widely deployed, it will be far more difficult to implement effective governance.

These regulatory endeavors need not (and should not) thwart responsible AI research and development. Underlying each of the aforementioned actions is a focus on gathering information, increasing transparency, and ensuring broad societal buy-in. If done right, those regulatory efforts can actually accelerate the creation and diffusion of societally beneficial AI. Trust in AI companies has taken a nosedive among Republicans, Democrats, and, more generally, publics around the world. Advances in AI may not deliver their potential real-world benefits if the public opposes AI’s integration into daily affairs and critical decisions. Put differently, regulators should avoid undermining the innovation that is pushing OpenAI and other labs forward while also acknowledging that future progress is intertwined with public support and assurances that societally harmful uses of AI will be detected and prevented.

* * * 

The question is no longer whether AGI will arrive, but whether we’ll be ready when it does. o3’s breakthrough performance shows us that artificial general intelligence isn’t science fiction—it’s an emerging reality that demands immediate attention. Every month spent debating its possibility rather than preparing for its arrival makes the challenge of responsible development more difficult.

A clear choice has emerged: proactively develop governance frameworks, coordinate globally, and prepare our societies for unprecedented change or continue with business as usual, letting ourselves be repeatedly surprised by exponential progress until we’re forced to react to capabilities we haven’t prepared for. The window for choosing the first path is closing rapidly. The time for action is now.

Kevin Frazier, Alan Z. Rozenshtein, and Peter N. Salib
