The key to global AI safety is continued American leadership in AI innovation, not more international treaties.
Rapid advances in the capabilities of artificial intelligence (AI) “foundation models” have brought conversations about AI to the forefront of global discourse. With a few exceptions, innovation in AI has been driven by, or is dependent on, American industry, science, infrastructure, and capital. Palantir’s CEO recently characterized this dominance in remarks at the Reagan National Defense Forum: “America is in the very beginning of a revolution that we own. The AI revolution. We own it. It should basically be called the U.S. AI revolution.”
However, many foreign governments—including those neither building nor deploying AI—are asking for a seat at the table to decide how this powerful emerging technology should be governed. While these conversations are likely to continue, policymakers must remain cognizant of the advantage that the U.S. currently maintains in the development of AI. Technological dominance is a key component of America’s national security: Limiting the country’s ability to innovate and lead in AI would have serious implications for its long-term interests.
American Innovation Must Continue to Lead
AI is a transformative technology that will reshape economies and how citizens around the globe interact with the world around them. The widespread implementation of AI will bring vast benefits, from modernizing national defense to improving economic productivity and transforming the public sector, health care, and science. Countries and firms that maintain control over AI and its supply chains will be well positioned to obtain significant strategic advantages, thereby growing in power and influence.
Today, the U.S. is the clear leader in the development of AI and is likely to be the largest recipient of its benefits as a result. This position has led many observers to raise concerns about American companies’ dominance of the global AI ecosystem. The European Union, in particular, has highlighted the critical need for “digital sovereignty,” and leaders such as France’s President Emmanuel Macron have called for greater effort in developing AI across the EU. Others have argued that the concentration of power in a handful of American companies is dangerous, stifling future competition in the digital economy. In a world where a single country or a handful of companies controls AI, a rise in global inequality could follow. Rachel Adams, CEO of the Global Center on AI Governance, advanced this position in a recent article in Foreign Policy, arguing that due to AI, “global inequality is now set to rise” and that “the rest of the world, which faces critical barriers to adopting AI, will be left further and further behind.”
In addition to inequality concerns, an increasingly influential group argues that AI represents an “existential” threat to humanity. Beyond hypothesized end-of-the-world scenarios, proponents of this idea claim that AI might disrupt democratic processes or empower bad actors to develop biological weapons or launch increasingly capable cyberattacks. From this perspective, AI is viewed as akin to chemical, biological, radiological, nuclear, and explosive (CBRNE) technologies. The solution to all of these problems, proponents insist, is new international agreements and controls to regulate the future of AI.
Though this may sound frightening, recent research paints a different picture, demonstrating instead that concerns over AI’s impact on democracy are overblown and that AI is unlikely to create new CBRNE threats. Moreover, AI is already being used to expedite scientific processes, improve health-care outcomes, and positively transform how governments operate. In war, AI is providing the United States with new advantages over authoritarian-aligned adversaries by bolstering cyber defenses, driving advances in autonomous capabilities, and improving the efficiency of the defense workforce. Continued leadership in AI will have clear benefits not only for the U.S. but also for the international community at large.
But this American dominance is not a given; other countries, such as China, are actively working to erode this lead and hobble the technological superiority of the United States. As countries transform into tech-enabled states, their reliance on, and interest in developing, domestic technology and AI capabilities will continue to grow. Failing to develop such capabilities would set them on a path toward international obsolescence.
As a result, the same dynamic that drove the pursuit of technological superiority during the Cold War is driving much of today’s effort to develop advanced AI capabilities, leaving “every country on its own on AI.” Any new international agreement that could limit the development or diffusion of American AI—however good the intentions may be—must therefore be approached from this perspective, with specific attention to potential unintended consequences. In the worst case, a flawed international agreement intended to advance global peace, security, or prosperity could undermine those very goals.
The Growth of AI Safety Fora and Accords
There are competing ideas about what these global agreements on the “safe” development and implementation of advanced AI systems might look like. Sam Altman of OpenAI famously called for the creation of a new AI organization modeled after the International Atomic Energy Agency (IAEA). The United Nations, in turn, created a High-Level Advisory Body on Artificial Intelligence to advance “globally coordinated AI governance,” stating in a recent major report that “AI governance regimes must also span the globe to be effective.” The report listed dozens of amorphous risks that AI would supposedly amplify, which the UN hoped to address through a new AI office and an international scientific panel that would create and coordinate new global standards. Meanwhile, key nations have also sketched out potential AI governance frameworks through the G7 Hiroshima Process and the Global Partnership on AI.
On AI safety specifically, the United Kingdom organized the first AI Safety Summit at a time when much of the global AI safety dialogue was dark and foreboding—one media outlet referred to the summit as a “doom-obsessed mess.” Subsequent reporting has revealed that these early concerns were pushed by specific interest groups that sought to pause or slow down the development of AI. South Korea and the United States have since hosted their own AI summits, with France set to host the next in early 2025. In Seoul, leading AI-developing firms agreed to a set of voluntary commitments. The upcoming gathering in Paris has been billed as an “action summit,” intended to begin advancing more binding requirements.
Acknowledging the importance of the American technology industry in the emerging age of AI, both the European Union and the UK AI Safety Institute have opened offices in California to interface with and lobby American companies directly, which has raised concerns among some U.S. lawmakers. These efforts are indicative of the issues American technology companies face today. As global calls for regulation of, and agreements on, AI development and deployment continue to grow, the American technology industry must navigate an increasingly complex web of regulations, voluntary commitments, and best practices originating both within the United States and abroad.
The European Union, especially, is positioning itself as a global regulatory superpower, having passed the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), the Digital Markets Act (DMA), and, more recently, the Artificial Intelligence Act. These efforts have created a “Brussels Effect,” in which the EU exports its digital regulations across the globe. But while the EU is undeniably a leader in the regulation of technology, that regulatory reach has also led numerous AI companies to withhold their models from European consumers.
This response to expanding U.S. leadership in global AI markets is understandable, but the alternative is almost certainly worse. Beijing continues to directly counter U.S. tech dominance, particularly in the Global South, by growing its “digital footprint” through global investments in telecommunication networks and hardware, data centers, cloud services, and various other forms of digital infrastructure. This push has come through China’s expanding Digital Silk Road effort, which is part of its Belt and Road Initiative and has included other financial and health-related components. The ultimate goal, as the Atlantic Council summarizes, is to “shape the global AI ecosystem according to its [China’s] own terms, which risks undermining international norms and values on privacy, transparency, and accountability.” Because other countries lack the technology industry and capabilities to match China’s investments, responsibility falls on the U.S. to foster innovation and liberal values in AI and related emerging tech sectors. While some countries may fear falling behind both the U.S. and China, a world where China calls the shots on AI’s trajectory would have serious implications for the current liberal, democratically aligned world order.
The Impossibility of Global AI Safety Alignment
Recently, the Biden administration signed the U.S. on to a new treaty, the “Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law,” which provides that signatories “shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention.” This convention, like many previous efforts, espouses broad principles to ensure “AI alignment” with human rights, democratic principles, the rule of law, and various other priorities. Few would disagree with the importance of “aligning” AI with values such as democracy and transparency, but thus far, many of these proposals have focused on aspirational principles and ambiguous plans, with little clarity on how they could be implemented.
Even more problematic is translating these values into concrete, tangible policy goals. What does it mean to “align” AI to ensure that it is “safe” or “fair”? Who gets to decide what “safe” or “fair” means? Any global accord that limits the capabilities of AI systems could also directly or indirectly limit the speech and knowledge-enhancing capabilities of AI. In the United States, the First Amendment provides a bulwark against such government control and abuse. But what would it mean if other countries or organizations attempted to coerce the U.S. into preemptively surrendering some of those protections in the name of achieving “AI alignment”?
Emerging research offers insight into these difficulties, revealing that within most countries there is no agreement on what an “aligned” model looks like and that individuals have strong preferences about which models they use. If agreement on what alignment means is elusive even within small communities, imagine the difficulty of reaching it across nations with diverse beliefs, traditions, and cultures. For countries that do not share a worldview based on democratic and liberal values, the likelihood of participating in any such international agreement is low. In fact, China recently refused to sign on to a nonbinding blueprint that emerged from an international conference on “Responsible Artificial Intelligence in the Military Domain.”
But even if one could solve the alignment problem, further difficulties would emerge: How could countries ensure that their adversaries were training AI-based systems aligned with whichever international agreement was in force? The earliest AI safety institutes ran into precisely this problem when companies refused to allow early access to technology representing enormous investments and valuable intellectual property.
With CBRNE technologies, such as nuclear weapons, there was a clear and well-defined goal: stop their proliferation and unjustified use. These technologies also often required components that were limited in number, hard to obtain, prohibitively expensive, or difficult to transport due to their size. Under those conditions, effectively monitoring compliance with international treaties is possible. Training state-of-the-art AI systems today requires vast financial, talent, and computational resources—making it prohibitively expensive for many countries and companies—but this is changing rapidly. Even if it were possible to limit the development of cutting-edge AI systems to a handful of countries, states would likely need in-depth, systems-level access to many models, datasets, and other proprietary code to effectively monitor those systems for compliance with international agreements. Given the competitive nature of AI development and the national security and economic interests involved, it is highly unlikely that such access would be granted.
These difficulties were also acknowledged in a footnote to a recent 104-page report articulating a panoply of possible regulatory controls for AI hardware and software; the footnote states that “there is reason to doubt the feasibility of such regulation, especially if it needs to span multiple, rival geopolitical blocs.”
***
Developing and deploying AI across industries and sectors will be key for any nation that wants to remain relevant in the emerging geopolitical order. Today, the United States has a clear advantage and, as a result, will continue to reap numerous benefits from its early investments in AI innovation. To counteract this dominance, numerous countries are working together to hamstring the United States’ AI capabilities and ambitions. While dialogue between countries on “AI safety” and related issues is wise, it would be foolish for the United States to tie its own hands and agree to formally binding constraints on its AI ecosystem.
In many ways, today’s AI safety efforts embody the same idealistic Cold War-era thinking about controlling the dangers of earlier dual-use technologies. The moral underpinnings may be sound, but in practice, when nations’ security or prosperity is threatened, they cheat and fail to live up to their legal commitments.
The fact is that global AI safety agreements will never bind illiberal nations, which remain the most prominent threat to human rights, democracy, free expression, the rule of law, and global security. These agreements instead provide authoritarian-minded countries an opportunity to collaborate and actively weaken international norms. This is precisely what the world has seen in recent years as Russia and China have collaborated to develop more authoritarian norms for the internet. If, as Bloomberg columnist Tyler Cowen writes, “China, Russia, and many other rival nations have no such plans” to stop the development of their own advanced AI systems, then the “U.S. has no real choice other than to try to stay ahead of them.”
If the United States agrees to new limits or controls on its emerging AI industry, other nations will take advantage of this to either catch up or potentially even race ahead. This would have long-term disastrous consequences for the national security interests of the United States and must be avoided.
– Keegan McBride and Adam Thierer. Published courtesy of Lawfare.