The U.S. and China Need an AI Incidents Hotline

Ironically, the two countries can look to the past, not the future, for inspiration on how to mitigate AI-related risk.

AI Safety Summit, 2023 (Picture by Simon Walker / No 10 Downing Street, https://www.flickr.com/photos/number10gov/53305042124/in/photostream/, CC BY-NC-ND 2.0 DEED, https://creativecommons.org/licenses/by-nc-nd/2.0/)

It’s been a busy month for artificial intelligence (AI) governance. The second AI safety summit convened world leaders in Seoul the week of May 20 to discuss the risks of AI. The week before, the United States and China met in Geneva to discuss potential avenues for cooperation on AI safety. The occasion marked the first of the bilateral meetings that Presidents Joe Biden and Xi Jinping agreed to at the 2023 Asia-Pacific Economic Cooperation (APEC) Summit, where Biden promised to convene experts to discuss AI risk and safety, and emphasized to Xi the importance of avoiding miscommunications, saying that “it’s paramount that you and I understand each other clearly, leader-to-leader, with no misconceptions or miscommunications.” Xi, too, highlighted the need for deconfliction: “How to steer the giant ship of China-U.S. relations clear of hidden rocks and shoals, navigate it through storms and waves without getting disoriented, losing speed or even having a collision?” 

Opening dialogue between the two rivals is significant, but given the rapid pace of progress in frontier AI research (including algorithmic progress and other quantifiable trends), diplomats will need to come up with risk-mitigation measures quickly if these meetings are to be more than a talk shop. Moreover, these measures will need to be “future proof,” meaning they can’t focus solely on current capabilities that will remain relevant for no more than a few months; risk mitigation needs to be flexible enough to respond to ever-more capable models.

Ironically, the two countries can look to the past, not the future, for inspiration: the history of U.S.-Soviet confidence-building measures during the Cold War. The 1963 Hotline Agreement and the 1971 Accidents Measures Agreement could serve as models to bolster AI safety and security without immediately touching on delicate topics such as export controls on advanced chips.

An “AI hotline” for both civilian and military AI systems (or an explicit agreement to discuss AI incidents on existing hotlines) could help prevent unintended escalation as well as protect against accidents and emergent effects from civilian frontier models. And more broadly, it could help to bolster a shared culture of hotline use to improve crisis communications between the two countries—another goal of last year’s summit.

***

The U.S. and USSR signed the 1963 hotline agreement in the aftermath of the Cuban missile crisis, when both countries came dangerously close to nuclear war and realized that traditional means of diplomatic communication were too slow for the atomic age. Soviet diplomats in Washington, for example, delivered messages via telegraph runner; Soviet Ambassador Anatoly Dobrynin later recalled, “We at the embassy could only pray that he would take it to the Western Union office without delay and not stop to chat on the way with some girl.” The agreement fixed this problem by establishing the Direct Communications Link (DCL), which leaders used in attempts to deescalate during the Six Day War, the Soviet invasion of Afghanistan, and other Cold War crises.

We can’t afford to wait for the AI equivalent of the Cuban missile crisis. Much like nuclear technology, AI is promising but accident-prone, and could have unintended effects on strategic stability. For example, the integration of AI in military systems could quicken the pace of war, enabling what Chinese experts call “battlefield singularity” and Western experts call “hyperwar”—as war accelerates to machine speed, events on the battlefield may slip out of human control. In addition to these emergent effects, accidents happen; even with rigorous testing and evaluation—which may fall by the wayside as competition heats up—autonomous systems may encounter situations that cause them to behave in unexpected ways, increasing the risks of miscalculation and unintended escalation. These issues are not limited to the battlefield, however. At the cutting edge of civilian AI development—so-called frontier models—the risks may also be catastrophic. As leading AI scientists participating in the International Dialogues on AI Safety recently put it in their 2024 Beijing consensus statement, “Unsafe development, deployment, or use of AI systems may pose catastrophic or even existential risks to humanity within our lifetimes.” 

Many risk mitigation measures will rightfully focus on prevention. But as part of an effective layered defense, in which multiple safeguards back one another up in case the first fails, prevention must be coupled with mechanisms to respond to and contain incidents if they do occur. (It may be a question of when, not if, as the thousands of entries in the AI Incident Database illustrate.)

Crisis communications tools like hotlines can help leaders manage incidents quickly and at the highest levels. Consider, for example, a near-future scenario in which one of the thousands of autonomous systems that the U.S. intends to deploy malfunctions or loses communications with the rest of its swarm while in the South China Sea; the U.S. military may wish to let the People’s Liberation Army (PLA) know that whatever this system is doing, it is unintended. Or consider a scenario in which China learns that a third state’s leading civilian AI lab has trained a highly advanced AI system, but that this system has recently started behaving strangely, accumulating resources on the world’s stock markets and attempting to gain access to systems linked to critical infrastructure. In this situation, China may wish to call on a hotline, and the two countries may have only minutes to work together to avert a catastrophe.

An AI hotline could be a dedicated channel, but it could also build on existing crisis communications between the U.S. and China. The countries have two main crisis communications channels that are publicly known: the Beijing-Washington hotline, established in 1998, and the Defense Telephone Link (DTL), added in 2008 for military-to-military communications. Assessing hotline effectiveness is challenging—most crisis communications are classified, and we cannot rerun history to compare our world with a counterfactual world without hotlines—but there are some publicly known cases where hotlines appear to have helped deescalate tensions and avoid misunderstandings. The DTL was reportedly used, for example, to reassure China during the 2020 war scare, when U.S. defense officials learned that the PLA was genuinely worried about a possible “October surprise” attack and Chairman of the Joint Chiefs of Staff Gen. Mark Milley allegedly told his counterpart that “[i]f we’re going to attack, I’m going to call you ahead of time. It’s not going to be a surprise.”

Like the 1971 Accidents Measures Agreement, which highlighted the importance of communicating about nuclear weapons accidents via the hotline, the U.S. and China could sign an AI Incidents Measures Agreement today. The 1971 agreement also encompassed warnings of planned missile launches; similarly, an AI incidents agreement could include clauses to notify the other country of major AI training runs or the deployment of new AI-enabled military systems, just as private companies must now notify the U.S. government under last year’s executive order on AI.

As a subset of AI-focused confidence-building measures, an AI hotline may be politically tractable. Experts on both sides of the U.S.-China rivalry have expressed support for similar confidence-building measures. In the U.S., the National Security Commission on AI suggested an International Autonomous Incidents Agreement modeled on the 1972 Incidents at Sea Agreement to mitigate risks from autonomous systems. A recent Center for a New American Security study developed this idea in greater detail. (Full disclosure: The Global Catastrophic Risks Fund, which I manage, provided the funding for this work.) Similarly, Zhou Bo, a retired senior colonel in the PLA, has pointed to the success of confidence-building measures like the 1972 agreement as a model for managing the risks of U.S.-China competition.

Though politically tractable, establishing an AI hotline won’t be easy. Expressing a willingness to act is only the first step. And even once the obstacles to implementation are overcome, establishing the hotline won’t be enough: both parties will actually have to use it.

The Chinese party-state has a history of simply ignoring crisis communications tools, even during crises like last year’s spy balloon incident. This is partly why the Biden administration has focused on crisis communications as another area of cooperation with Xi’s China. Yet China’s reluctance is all the more reason to push for greater attention to hotlines and to foster the shared culture of crisis communications that the Cuban missile crisis created for the U.S. and the Soviet Union.

– Christian Ruhl is a senior researcher at Founders Pledge and manages the Global Catastrophic Risks Fund. The views expressed are the author’s personal opinions. Published courtesy of Lawfare.
