While some experts have called for an “IAEA for AI,” it is important to consider the limitations of this model for AI governance.
In November 2023, nations at the first global AI Safety Summit recognized the possibility of “serious, even catastrophic harm” from advanced artificial intelligence (AI). Some of the risks identified stem from deliberate misuse. For example, a nation could decide to instruct an advanced AI system to develop novel biological weapons or cyberweapons; Anthropic CEO Dario Amodei testified in 2023 that AI systems would be able to greatly expand threats from “large-scale biological attacks” within two to three years. Other risks mentioned arise from unintentional factors—experts have warned, for instance, that AI systems could become powerful enough to subvert human control. A race toward superintelligent AI could lead to the creation of highly powerful and dangerous systems before scientists have developed the safeguards and technical understanding required to control them.
Many proposals to mitigate these risks have focused on the importance of international coordination. The recent White House national security memorandum on AI, for example, directs the Department of State to form an international AI governance strategy that outlines multilateral engagement with allies, partners, and competitors. As international AI governance discussions advance, nations may consider how certain kinds of dangerous AI development could be restricted and how such agreements could be verified.
Accordingly, some scholars—and public figures such as OpenAI CEO Sam Altman—have turned to the International Atomic Energy Agency (IAEA) as a potential model for international AI institutions. But is an “IAEA for AI” desirable?
International institutions like the IAEA serve an important function, but they also have limitations that should be considered when thinking about international AI governance. To demonstrate the strengths and weaknesses of this model, I examine several case studies of the IAEA and the similarly structured Organization for the Prohibition of Chemical Weapons, or OPCW (which is responsible for the verification and monitoring of chemical weapons). I focus on how these organizations responded to challenges in Iran, Syria, and Russia. These examples illustrate that IAEA for AI proposals must account for the well-documented challenges faced by the IAEA and OPCW.
The Importance of Verifiable International Agreements
To mitigate the “race to God-like AI,” Ian Hogarth—chair of the U.K. AI Safety Institute—proposed an “Island model,” in which a joint international lab performs research on superintelligence in a highly secure facility. Demis Hassabis, CEO of Google DeepMind, recently expressed support for a similar model, claiming that a “CERN [European Organization for Nuclear Research] for AI” would be the “best path for the final few years of the artificial general intelligence project, to improve safety.”
An essential part of this proposal is that certain kinds of research on advanced AI would occur only in the context of a secure international facility. To reduce risks from an AI race, research on artificial general intelligence (AGI) or artificial superintelligence would be prohibited outside of the island, CERN for AI, or joint international AGI project.
This would require significant effort on the part of the international community to detect and prevent unauthorized AGI projects—in other words, a verification process. In a previous piece, I discussed how a “trust but verify” approach—similar to the approach used by President Reagan when negotiating nuclear deals with the Soviet Union—could help nations form agreements about advanced AI. International agreements will require verification methods that allow nations to ensure that others are complying (and provide confidence that they would be able to detect noncompliance).
The IAEA Model
The IAEA presents a clear verification methodology that could be used as a model for AI governance. The organization is responsible for preventing nuclear proliferation, promoting the peaceful use of nuclear energy, and verifying compliance with the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Relative to other international organizations, the IAEA is unusual in the central role it plays in monitoring and verification. Whereas bodies like the Intergovernmental Panel on Climate Change (IPCC) and CERN primarily serve a research function, the IAEA is one of the main international bodies responsible for ensuring that nations are not violating international agreements, setting standards and conducting inspections to verify that states are complying with their obligations under the NPT.
Scholars have previously examined how international institutions for AI could serve a variety of functions; examples include facilitating expert consensus on opportunities and risks, bringing together leading researchers to advance AI safety research, and monitoring or enforcing compliance with regulations. Previous work has identified several functions of the IAEA that could be applied to AI: the establishment of safety standards, monitoring compliance and conducting inspections, maintaining an emergency response system, and facilitating information-sharing.
While IAEA for AI proposals vary, they generally refer to an institution that would be responsible for ensuring safe AI development, setting and monitoring safety standards, and ensuring that the benefits of advanced AI are distributed globally. Some have proposed that the IAEA for AI would monitor advanced AI hardware and verify that advanced chips are being used for safe purposes, just as the IAEA monitors the distribution and usage of uranium.
The IAEA is a particularly useful case study as AI governance experts examine how compliance with AI agreements can be robustly verified. Previous proposals, like Chips for Peace, have considered how the U.S. and its allies could form agreements regulating domestic frontier AI development, sharing the benefits of AI, and setting and enforcing export controls. Other proposals have sought to avoid “racing against [competing countries] as quickly as possible,” emphasizing that cooperation with U.S. adversaries could be feasible, especially if AI hardware mechanisms can be applied to verify agreements.
The IAEA also plays an important role in incident reporting and emergency preparedness. It helps coordinate information-sharing during emergency scenarios, enables countries to request assistance during nuclear emergencies, and maintains an Incident and Emergency Centre to spread information quickly in the event of a nuclear crisis. A similar entity could help nations prepare for potential AI-related emergencies by ensuring that crisis communication channels function adequately and that AI developers have emergency response protocols.
Limitations of an IAEA for AI
While the IAEA serves several functions that could be desirable in AI governance, there are limitations to applying the IAEA approach wholesale to AI. For example, scholars have pointed out that the international community required “consensus around the credibility and definition of a global challenge” in order to form the IAEA. The use of atomic weapons during World War II made their risks clear and salient. In the context of advanced AI, credible experts are highly concerned about global security threats, but there is still much debate about whether, when, and how such threats might unfold.
Suppose, however, that nations eventually reach a stronger consensus about global security risks from advanced AI. At that point, world leaders would be forced to grapple with serious questions about international coordination. In that scenario, should the IAEA be used as a template for successful international coordination on science and security, a cautionary tale about the inefficacy of international institutions, or something in the middle?
To answer these questions, it’s important to understand how the IAEA works in practice. For example: How does the IAEA decide that a member has been noncompliant with its obligations? What powers are granted to the IAEA? What happens if the IAEA suspects a nation of illegally developing nuclear weapons? What happens if a nation refuses to provide access to IAEA inspectors? Are there other international organizations that serve a similar function but with a different institutional design?
My colleagues and I recently attempted to answer these questions in our paper “Governing dual-use technologies: Case studies of international security agreements and lessons learned for AI governance.” Through case studies, we examined how the IAEA could be valuable when considering international AGI governance efforts. We also examined other international security regimes, such as the Chemical Weapons Convention, whose implementation is overseen by the OPCW.
Below, I highlight how the IAEA and OPCW reacted to instances of noncompliance: instances in which a country was suspected of violating its nuclear (IAEA) or chemical weapons (OPCW) obligations. Examining the responses to these challenges offers a better understanding of whether and how to apply principles from these institutions to international AI governance.
IAEA Case Study: The Iran Nuclear Deal
The IAEA itself does not punish or discipline countries that are found to be illegally developing nuclear weapons; it only performs technical research, conducts inspections, and evaluates evidence. It does not have the authority to intervene directly, but it can report noncompliance to the UN Security Council and issue recommendations to member nations.
In practice, this means that the enforcement of nuclear security often relies on the preferences of individual nations, especially global superpowers. For example, in the late 1990s and early 2000s, Iran secretly pursued an illegal nuclear program. When these efforts were revealed by an Iranian dissident group, the IAEA conducted inspections and found Iran to be in violation of the NPT.
The IAEA reported Iran to the UN Security Council, and the Security Council imposed a series of economic sanctions. The sanctions spurred international negotiations with Iran, culminating in the Joint Comprehensive Plan of Action (JCPOA, colloquially referred to as the “Iran nuclear deal”). Under the JCPOA, Iran agreed to scale back its nuclear program and consented to extensive IAEA inspections in exchange for the lifting of economic sanctions.
The JCPOA went into effect in 2016, but the United States withdrew in 2018. The first Trump administration argued that the agreement did not go far enough, gave too much to Iran in exchange for too little, and lacked sufficient mechanisms for inspection and verification. Advocates of the Iran nuclear deal argued that these claims were false, and the issue became politicized within the United States. When the United States reimposed sanctions on Iran, the JCPOA was substantially weakened.
Notably, however, the JCPOA did not fully dissolve—other nations have continued to support it, and negotiations continued to attempt to bring the United States back into the deal. The Iran case highlights that international agreements are sensitive to changes in international attitudes and political leadership. It also demonstrates that deals are not “black or white”—they sometimes exist in a gray area of ongoing negotiations and trades.
OPCW Case Study: Chemical Weapons in Syria and Russia
The OPCW serves a similar function to the IAEA, but in the realm of chemical weapons: it verifies compliance with the Chemical Weapons Convention. The OPCW has the authority to send inspectors to any member state; it performs regular inspections as well as special inspections when a nation is suspected of illegally possessing or using chemical weapons. One such mechanism, the “challenge inspection,” can be launched on short notice, with inspectors arriving as little as 12 hours after a state is notified.
Like the IAEA, the OPCW relies largely on the UN Security Council and member nations for enforcement. The OPCW can suspend a member state’s rights and privileges; for serious violations warranting stronger penalties, it issues recommendations to the UN Security Council.
In 2021, Syria was found to be in violation of the Chemical Weapons Convention, and the OPCW suspended its rights and privileges within the organization. However, enforcement of stricter provisions was blocked in the UN Security Council, where Russia, one of the five permanent members with the power to veto decisions, pledged its support to Syria.
Moreover, in 2024, the United States determined that Russian forces had used chemical weapons against Ukrainian troops in violation of the Chemical Weapons Convention. Russia lost its seat on the OPCW’s Executive Council and faced sanctions from the United States, but there was no official OPCW investigation into the alleged Russian violations.
This case study highlights the important role that geopolitical superpowers play in enforcing (or blocking the enforcement of) international agreements; the conflict in Ukraine further illustrates how nations may violate international agreements in times of military conflict.
Lessons for International AI Agreements
In considering international agreements to govern the development of advanced AI, what lessons can be drawn from organizations like the IAEA and OPCW?
First, international institutions provide an important verification function. The IAEA and OPCW have employed scientists from around the world to conduct rigorous inspections, providing the international community with valuable information. In high-stakes situations, like the investigations of Iran in the 2000s, a trusted body’s ability to provide “innumerable oral reports, technical briefings, and bilateral and multilateral consultations” can be essential.
Second, the enforcement of international agreements can be challenging—especially during times of rising geopolitical tensions. International agreements are important diplomatic tools, but they do not completely replace the need for national buy-in. If a major world power changes its position on core issues, it can withdraw from agreements, as demonstrated by the first Trump administration’s withdrawal from the JCPOA. If a major power disagrees with other nations about a specific issue (e.g., Syria’s use of chemical weapons) or a broader conflict (e.g., the legitimacy of Russia’s invasion of Ukraine), these disagreements can threaten enforcement processes that require consensus.
Third, it is unrealistic to expect compliance to remain constant. International agreements ebb and flow based on the attitudes and incentives of nations. There are often decades of relatively strong compliance but also periods of friction and tension. Fortunately, in the context of AGI governance, this does not rule out the value of international agreements. The goal of international agreements should not be to permanently delay the creation of advanced AI—rather, the focus should be on safely and securely performing research to understand and control advanced AI systems.
No one knows exactly how long it would take a joint lab or CERN for AI to conduct enough research to safely navigate a transition to advanced AI. But we do know that the joint lab will not have an indefinite amount of time. Hogarth’s island may have 10, 30, or 50 years to investigate how to safely and securely build superintelligent AI systems. The exact time frame will be determined by geopolitical factors, the degree of tension between great powers, the amount of resources nations are willing to invest in verification and enforcement, and technical breakthroughs related to advanced AI.
Overall, international AI agreements—like all international agreements—will require national buy-in. Leaders of member nations need to be convinced that agreements concerning advanced AI are worth prioritizing as a global security challenge. There have already been several efforts designed to raise awareness and build consensus around these threats. In a recent international dialogue between Chinese and Western scientists, the group penned the following statement: “In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology.”
As AI progress advances and world leaders pay more attention to AI capabilities and risks, there may be a window in which major world powers seriously consider international AGI agreements. For now, efforts to build consensus about risks and explore the potential advantages and drawbacks of international coordination are essential.
– Akash Wasil is a senior research associate at the Center for International Governance Innovation (CIGI). His work focuses on the intersection of AI and national security. Before working in AI policy, Akash was a National Science Foundation Graduate Research Fellow at the University of Pennsylvania, where his research focused on innovative applications of technology in mental health care.
– Published courtesy of Lawfare.