Grok Showed the World What Ungoverned AI Looks Like

A digital world map (via Getty Images)

The 2026 International AI Safety Report, published in early February by over 100 experts from more than 30 countries, reached a sobering conclusion: the gap between the pace of AI advancement and our ability to implement effective safeguards remains a critical challenge. The report’s chair, Turing Award winner Yoshua Bengio, put it plainly: international agreement on AI governance is now in the rational interest of every country, mirroring “exactly what has happened with the management of nuclear risks.”

This is not abstract. Indeed, we already have a case study in what happens when that coordination does not exist.

Last December, xAI’s chatbot Grok began generating thousands of nonconsensual sexualized images per hour, including images of minors. Users discovered they could upload photographs of real people and instruct the AI to “undress” them. Governments issued statements and regulators announced investigations. But nobody stopped it, nor could they have without effective multilateral coordination.

What followed was a textbook case of fragmented response. Malaysia and Indonesia banned Grok outright. Britain accelerated enforcement of the Online Safety Act, with Ofcom, the United Kingdom’s communications regulator, launching an investigation. France widened an existing inquiry and raided X’s offices in Paris. India demanded compliance reports. Brazil’s chief prosecutor gave X five days to stop Grok from producing sexualized content or face legal action. The European Commission ordered X to preserve all internal documents related to Grok over doubts about compliance, while 57 members of the European Parliament called for bans on “nudification” tools under the AI Act. California’s attorney general sent a cease-and-desist letter to xAI. And U.S. senators wrote to Apple and Google requesting the removal of X from app stores.

xAI’s response was to comply by preventing Grok from creating sexualized deepfakes in jurisdictions where doing so is illegal (as discussed in this previous Just Security article). The company was saying the quiet part out loud: xAI would do the minimum required, country by country, because no coordinated international standard exists to require otherwise.

AI Harms Risk Growing Without International Coordination

The problem here is not a lack of concern; after all, major jurisdictions responded within days. The issue is that AI systems like Grok operate globally while governance remains national and largely uncoordinated. Each country acted according to its own timeline, legal framework, and enforcement capacity. None could act decisively on behalf of others, and the result was a regulatory patchwork in which Grok could continue generating sexualized deepfakes for users in some jurisdictions.

This is a clear failure of infrastructure rather than political will. The diplomatic channels for rapid, coordinated international response to AI harms simply do not exist, unless a particular harm happens to also fall into specific, pre-existing categories (for instance, matters of cybercrime that benefit from the Budapest Convention’s 24/7 Network coordination mechanism—but even this is only available to States parties to that convention, and for specific law enforcement cooperation purposes). When Indonesia banned Grok, it could not compel action elsewhere. When Britain’s Ofcom demanded answers, its jurisdiction ended at the English Channel. And when California’s attorney general invoked state law, he was one voice among many.

Some will object that the United States presents a special case. Free speech protections there make content regulation genuinely difficult, and reasonable people disagree about where to draw lines. While that is true, the Grok incident was not about speech at the margins. Child sexual abuse material is not protected expression anywhere. This was, in every sense, an easy case. And the global response still failed.

That should concern us. Indeed, if this is how we handle situations where everyone agrees that something is wrong, what happens when the questions get harder?

The coordination failures visible in Grok exist across the AI landscape—and the new International AI Safety Report documents them in detail. Competitive pressure pushes labs to ship faster and cut corners on safety, even when individual leaders would prefer to be cautious. No credible mechanisms exist to verify claims about model capabilities, training runs, or safety measures, which makes mutual trust and treaties difficult to sustain. Frontier labs lack standard protocols for reporting serious incidents, so problems often stay siloed until they become public scandals.

The stakes will only increase. The report notes that current AI systems can already assist non-experts in designing dangerous biological agents—with 23 percent of the highest-performing biological AI tools having high misuse potential—and are being adapted into semi-autonomous cyber attackers. These are documented capabilities, which means the question is less whether harder cases will arrive than whether the infrastructure to address them will exist when they do.

International coordination forums are sometimes dismissed as talk shops where diplomats exchange pleasantries while real power lies elsewhere. This criticism misses the point. Of course, dialogue alone cannot, and does not, solve problems. But without trusted channels between governments, industry, and civil society, we will keep watching harms unfold without effective mechanisms to respond collectively.

The India Summit and Tracks for International AI Coordination

At the India AI Impact Summit, held in New Delhi from Feb. 16-20, OpenAI CEO Sam Altman said the world “urgently” needs something like the International Atomic Energy Agency (IAEA) for AI—a body capable of rapidly responding to changing circumstances. The aspiration is right: the IAEA is about trust and verification, which is exactly what global labs—especially in the United States and China—need right now in order to start collaborating on AI risk management. But we cannot afford to wait for the decade-long ratification arc that brought the IAEA itself into being; any such body needs a fast mechanism for onboarding nations.

The Grok incident illustrates why the most urgent need is not a new institution but a new operational layer: a standing Rapid Response Network among the national AI Safety Institutes that already exist in the United Kingdom, United States, European Union, Japan, Singapore, India, and Canada, alongside China, with binding commitments to exchange incident data within 24 to 48 hours whenever an AI harm crosses borders. In parallel, major economies should conclude bilateral and multilateral AI incident notification agreements, the equivalent of nuclear hotlines, which would require no new treaty body and could be activated within months. When a regulator in one jurisdiction identifies a systemic harm, counterparts elsewhere would then be legally obligated to respond within their domestic capabilities rather than left to act voluntarily and asynchronously, as happened with Grok.

The “domestic capabilities” qualifier matters. Jurisdictions will inevitably bring different authorities, regulatory frameworks, and enforcement options to the table. As the Grok episode makes plain, some will criminalize or regulate AI harms far more aggressively than others, reflecting not only legal culture but political will. The long-term answer to that divergence is, of course, sweeping multilateral conventions that create shared criminalization and enforcement capabilities globally; but such instruments take years, sometimes decades, to negotiate, ratify, and implement. The notification agreements proposed here are designed to work in the interim and below that threshold. At a minimum, they could stipulate a floor-level response obligation, one feasible for all signatories regardless of their current domestic frameworks, so that no party is simply left to improvise its own response on its own timeline.

The natural convening structure for both tracks already exists, provided that the China AI Development and Safety Network is properly integrated. The International Network of AI Safety Institutes, built through Bletchley, Seoul, Paris, and now New Delhi, can establish the rapid-notification architecture. The OECD AI Principles framework, adopted by over 50 countries, provides the interoperability layer that translates shared standards into mutual recognition of safety assessments, closing the regulatory arbitrage window that permissive jurisdictions currently exploit.

These are the foundations on which a more ambitious IAEA-style International AI Agency, now being informally discussed in academic and policy circles, can eventually be built—but the foundations must be laid now, before the harder cases arrive.

Conclusion

Several efforts to build this foundation are underway, and there is precedent for success. In 2023, 16 major AI companies—including Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI, though notably not xAI—signed on to voluntary safety commitments coordinated by the White House, agreeing to shared standards on security testing, information sharing, and content watermarking. The Frontier Model Forum emerged from that process to develop industry-wide safety practices. In parallel, the Coalition for Content Provenance and Authenticity continues to develop technical standards for authenticating AI-generated media.

The findings of the International AI Safety Report informed discussions at the India Summit. As the report’s authors note, “The value of this Report is not only in the findings it presents, but in the example it sets of working together to navigate shared challenges.” That example needs to become the norm.

The alternative is to continue as we are: responding to each incident ad hoc, jurisdiction by jurisdiction, while AI systems advance faster than our collective capacity to govern them. That path leads to a world where the most permissive rules set the floor, where companies arbitrage between regulatory regimes, and where the next Grok-style incident is merely a preview of worse to come.

Many will see the Grok incident as an aberration. But we see it as a demonstration. The technology worked as designed, the harms were predictable, and the global response was exactly what the current system allows—which is to say, not much. Whether we build the coordination capacity to respond differently next time is a choice. And it is one we must make now.

Published courtesy of Just Security
