Could AI Lead to the Escalation of Conflict? PRC Scholars Think So

Chinese defense experts worry that AI will make it more difficult for Beijing to control and benefit from military crises.

Members of China’s People’s Liberation Army (PLA) walk past the Tiananmen Gate in Beijing, China. (Photo: Tomohiro Ohsumi/Bloomberg/Times Asi/Flickr, https://tinyurl.com/yw43w3bx, CC BY 2.0)

In the years since Xi Jinping assumed power, China has undertaken increasingly risky military activities in the name of national security at sea, in the air, and even in space. At the same time, Beijing has exhibited a shrinking appetite for responsibly managing military crises with Washington; Chinese officials often decline to use bilateral communication mechanisms amid crises, severing those channels in the wake of Nancy Pelosi’s 2022 Taiwan visit and neglecting to answer U.S. Secretary of Defense Lloyd Austin’s calls during the 2023 spy balloon incident.

By undertaking risky maneuvers in proximity to U.S. forces and declining to use escalation management mechanisms, China hopes “to coerce a change in lawful U.S. operational activity,” according to senior U.S. officials. While the People’s Liberation Army (PLA) and U.S. military reopened lines of communication late last year, experts worry that bilateral crisis communication channels will be insufficient to limit escalation risks arising from future incidents.

According to Chinese military theory, Beijing believes that it can benefit from crises by “seiz[ing] opportunities” that they create. As some U.S. experts have argued, Beijing thinks that military crises can open strategic opportunities to “gain the upper hand over its competitors.” Moreover, in escalatory situations, realizing political goals appears to take precedence over minimizing the risk of clashes. Indeed, official PLA guidance from 2013 posits that measured escalation can even be useful in obtaining political objectives.

But as artificial intelligence (AI) increasingly plays a part in defense, a detailed review of over 50 Chinese academic articles on AI’s impact on the future of warfare reveals that many Chinese defense experts are concerned that the technology could lead to rapid, uncontrollable, and potentially catastrophic conventional—and even nuclear—escalations.

***

Beijing’s belief in its ability to gain from crises is supported by a confidence in its capacity to skillfully manage them. The PLA has spilled much ink on the importance of controlling escalation, as evidenced by the topic’s prominence in publications like the 2020 edition of the Science of Military Strategy, an official PLA document aimed at guiding military thinking and doctrine.

While Chinese views of what it terms “war situation control” have become more cautious over the past few years, PRC strategists appear “optimistic” that Beijing can use strategic planning and technology to control the trajectory of “informationized local wars.” Other Chinese theorists have claimed that Beijing is adept at managing the escalation of conventional military crises. Interestingly, however, Chinese military theorists are less sanguine about Beijing’s ability to control nuclear escalations.

To be sure, other U.S. studies point out that China’s thinking on escalation management remains somewhat underdeveloped. Even so, the integration of AI into defense systems may be changing Chinese strategists’ belief in Beijing’s ability to control escalation and benefit from crises.

Understanding how Chinese decision-makers view the relationship between AI and escalation will be crucial for predicting and responding to China’s behavior during future military crises. Moreover, Chinese experts’ concerns about the nexus of AI and escalation could provide an opening for bilateral discussions on how to responsibly manage the risks of using AI in certain military contexts.

The vast majority of the scholars whose articles I reviewed, most of whom are affiliated with the PLA or organizations in China’s defense industrial base, worry that the quickening pace of AI-enabled military operations, as well as the potential delegation of decision-making to machines, could cause conflicts to spiral out of control. Moreover, the growing complexity of AI systems—and the difficulty of ensuring their explainability and reliability—could make them unpredictable, thus increasing the risk of miscalculations. The experts also express concerns over losing control of autonomous systems whose actions could feed escalatory dynamics, and they argue that the use of AI for cyber offense and defense could cause conflicts in cyberspace to lead to dangerous escalations.

While the scholars note that AI’s prevalence on contemporary and future battlefields could lead to the escalation of conventional conflicts, they are also concerned that AI might make it more likely that conventional clashes cross the nuclear threshold. Almost every paper I reviewed that touched on AI’s role in escalation dynamics argued that the technology will make it more difficult to control dangerous spirals.

Experts at the PLA-affiliated National University of Defense Technology, for instance, note that “the large-scale military application of artificial intelligence [will] further increase the uncertainty and uncontrollability of crisis outbreaks and escalations,” potentially leading to the eruption of wars. Scholars affiliated with the PLA Air Force concur, arguing that AI systems will “aggravate the uncontrollable degree” of crises.

Because AI systems continue to be plagued by explainability gaps, they may exhibit “unexpected behavior” that gives rise to accidents, perhaps “weaken[ing] the reliability of nuclear command, control and communication systems … increas[ing] the risk of accidental nuclear conflict.” Some experts note that the race to develop AI-enabled military systems could lead some countries to deploy them before they are properly tested and evaluated. Their malfunction, the authors write, could bring about “uncontrolled conflict escalation.”

Other PLA-affiliated experts note that the use and delegation of decision-making to lethal autonomous weapons systems on the battlefield “may lead to escalation of conflicts [that] threaten strategic stability.” In the same vein, others note that autonomous systems could take actions that push combatants toward “infinite escalation” amid crises.

While none of this academic literature should be considered authoritative PLA guidance on the escalation risks associated with AI-enabled military systems, the articles do reveal that a subset of Chinese defense experts believes that emerging technologies will significantly impact escalation dynamics in the not-too-distant future.

Most important, these defense scholars appear to disagree with extant Chinese military crisis management theory, the vast majority of which predates the advent of AI and holds that China can not only control military escalation but also benefit strategically from such dynamics. Indeed, they note that conventional conflicts will be more likely to spin out of control and that nuclear clashes will become more probable as militaries integrate AI into their systems, thus making it more difficult for China to responsibly manage and gain strategic advantages during crises.

Despite Chinese scholars’ growing concern about AI’s potential to accelerate escalations, there is no indication that Beijing will soon change its approach to or official guidance on managing military crises. But as China continues to make increasingly risky maneuvers in proximity to U.S. and third-country forces in multiple domains, it is ever more important that U.S. and Chinese officials discuss the growing danger that AI and autonomy will pose before the next military crisis arises.

The apparent thaw in mil-to-mil engagement presents an opportunity for Washington to engage Beijing in discussions aimed at mitigating the escalatory risks that AI-enabled military systems present for crisis management and escalation control. The fact that some Chinese scholars recognize the growing risks that AI and autonomy pose should bolster U.S. officials’ case that the two sides must figure out how best to avoid incidents that could lead to unwanted and uncontrollable escalations.

In future editions of the official U.S.-China AI dialogue, it will be crucial to focus on the growing risks that emerging military technologies pose; unofficial gatherings of U.S. and Chinese military AI and nuclear experts are also useful forums for raising concerns. Discussing nuclear and military AI-related subjects with Beijing is seldom easy and often frustrating, but U.S. policymakers can engage Chinese officials in an effort to understand their views of AI’s impact on conflict escalation and crisis management, as well as whether those officials’ views align with those of the scholars discussed above. These dialogues would also be useful venues through which to reinforce the fact that uncontrollable escalations are not in Beijing’s interest.

Though it may not be possible to strike binding agreements limiting AI’s use in certain military contexts in the short term, exchanges of information through AI and mil-to-mil dialogues on these topics could lead to cooperation down the road on mutually beneficial actions. At the very least, they could allow each side to begin to understand the other’s views of AI-related escalation risks. Furthermore, discussing the establishment of norms of AI usage amid crises could help arrest future escalatory spirals.

Given China’s ongoing nuclear buildup, intensifying tensions over Taiwan, and the competition to develop AI and related emerging technologies, such discussions are more important than ever.

– Sam Bresnick is a research fellow at Georgetown’s Center for Security and Emerging Technology (CSET), focused on AI applications and Chinese technology policy. Published courtesy of Lawfare
