
A new paper from the Germany-based think tank Interface has attempted to define the threshold at which peacetime state cyber operations become irresponsible.
The author thinks that more concrete definitions of responsible behavior would help guide states and prevent dangerous conduct.
It’s a commendable effort, but we don’t think the architects of cyber operations really care about norms, and a German think tank writing down its preferred rules on a piece of paper won’t make any difference to state behavior.
Governments do, however, care about potential political costs and the risk of retaliation. One of the paper’s goals is to provide a framework that makes it easier for victim states to flag irresponsible operations and respond appropriately.
The paper defines seven principles-based “red flags” and gives examples of some real-world cyber operations that might have raised these flags.
The first red flag, “causing physical harm, injury or death,” is pretty straightforward. It’s a threshold that states have observed, and the paper does not list any cyber operations that it thinks have crossed the line.
The most interesting red flag is “lacking or losing operational control.” The author argues that maintaining effective operational control “is essential,” because risks increase when operations spiral out of control.
This can take two forms. One form is “technical loss of control,” as in the cases of NotPetya, WannaCry, or even Stuxnet. At first glance, states seem to have learned their lesson, and there hasn’t been another NotPetya-style disaster since the original was unleashed in 2017.
The paper points out that AI “vibe coding” could make loss of control a problem again. Loosey-goosey software development risks introducing unpredictable behaviors. If operators don’t even understand how their malware works, things could go wrong.
The second form is what the paper calls “organizational loss of control.” Here the paper takes aim at China’s loosely controlled contractor ecosystem, citing examples including i-Soon and other contractors, and the mass exploitation of Microsoft Exchange.
This is a part of the report that could get some traction with policymakers. Governments want to make hay with cyber operations, but they don’t want to accidentally cause some sort of drastic escalation because a contractor got excited.
The other five red flags are less likely to move the needle. The internal logic of why they are red flags makes sense, but some are already fairly common or there are practical reasons they are difficult to deter.
For example, “intervening in domestic political processes” being listed as a red flag makes sense. Internal political processes are fundamental to how a state functions. But interference is actually relatively commonplace, and we’ve yet to see a strong response. The paper cites direct interference in Ukrainian election architecture and hack-and-leak operations to influence the U.S. and French presidential elections as examples of this type of interference.
The French response to election interference in 2017 was tactically very effective, in that Russian interference was neutered, but in general, responses have not been painful enough to deter adversaries.
At least in part, that is because it can be practically difficult to respond robustly. During the 2016 U.S. presidential election, for example, a domestic constituency benefited from interference and did not want to acknowledge that it had even occurred.
So although the underlying logic of labeling interference in domestic political processes as a red flag makes sense, there are practical reasons why it has historically been difficult to enforce. And we don’t see these reasons disappearing anytime soon.
“Triggering physical disruption or destruction” is listed as another red flag, with the paper citing the interruption of Ukraine’s electricity network, Stuxnet, and the disruption of a German steel mill as examples. If we were writing the report, we’d add the Predatory Sparrow incidents in Iran to the pile.
Most of the destructive incidents we mentioned above are examples of stronger, more capable states punching down on relative minnows. It’s the kind of thing bigger states do when they think they can get away with it. An aggressor state might even argue that these destructive cyber operations are a good thing because they replace more destructive and escalatory kinetic attacks.
Two of the other red flags fall into the category of mostly-observed-but-we’ll-do-it-when-we-can-get-away-with-it operations. These are “prepositioning for civilian disruption” and “preparing the military battleground.”
The best example the paper cites here is Volt Typhoon, the Chinese government’s effort to compromise U.S. critical infrastructure. That example highlights the problem, though. The U.S. absolutely does not want China’s hackers rummaging around through its critical infrastructure getting up to no good. But what can it do? The U.S. is already engaged in an on-again, off-again trade war involving tariffs, critical minerals, and artificial intelligence technology transfer. Concerns about Volt Typhoon are lost in the noise.
The paper also briefly describes the “toolbox” of options that policymakers can use to respond. This isn’t the paper’s focus, but it suggests “military posturing or operations” as an option.
The paper presents a framework to decide when cyber operations cross important thresholds that are worth responding to. As U.S. policymakers are thinking about legislation aimed at deterring foreign cyber adversaries, this work could be useful.
