As adversary surveillance capabilities expand, the U.S. national security community faces grave threats. Broader data protections can help.

On June 26, the Department of Justice’s Office of the Inspector General (OIG) published a partially redacted report detailing the FBI’s efforts to mitigate the effects of a seemingly esoteric, yet pressing, threat facing U.S. government personnel: ubiquitous technical surveillance (UTS). The takeaways of the report were not optimistic.
The media quickly picked up the juiciest elements of the document. The Guardian highlighted a story from the report in which a hacker working for the Sinaloa drug cartel obtained the mobile phone number of an FBI assistant legal attaché at the U.S. Embassy in Mexico City, gained access to incoming and outgoing calls as well as location data, and used Mexico City’s camera system to surveil the official and monitor the people with whom they met. According to the OIG report, the cartel “used that information to intimidate and, in some instances, kill potential sources or cooperating witnesses.”
Susan Landau has already argued in an article for Lawfare that—given the U.S.’s history of using advanced surveillance techniques—it should come as “no surprise that U.S. adversaries had adopted these methods to flush out their foes.” It is worth reading Landau’s article to understand the FBI’s UTS-mitigation deficiencies as described in the OIG report and her perspective on how to address them. This article focuses on a different aspect of the problem: the role that broader U.S. data privacy and security protections could play in reducing the UTS risk-mitigation overhead and better protecting the country.
National security operators on the ground will always have to continually implement, tailor, and upgrade extra mitigations to combat sophisticated, well-resourced, and persistent adversaries who put them under the metaphorical UTS microscope. But in plenty of areas, the risks are catalyzed or exacerbated by widespread industry data and digital practices, including the collection and sale of extensive U.S. persons data; the amount of “digital exhaust” produced during the course of people’s personal lives; and consumers’ lack of comprehensive, effective rights to control how data about them is gathered, used, transferred, and retained. Legislators can pursue broader data privacy and security protections that would directly and indirectly reduce some of the surveillance risks these practices and technologies create for U.S. national security—and lessen the mitigation workload for those on the front lines.
UTS and Risks to U.S. National Security
The FBI defines UTS as “the widespread collection of data and analytic methodologies for the purpose of connecting people to things, events, or locations.” The OIG report stated that “recent advances in commercially available technologies have made it easier than ever for less-sophisticated nations and criminal enterprises to identify and exploit vulnerabilities created by UTS” and that “some within the FBI and partner agencies, such as the Central Intelligence Agency (CIA), have described this threat as ‘existential.’” In other words, smartphones, wearable devices, internet-connected sensors, cloud platforms, data-hungry data brokers and advertising technology companies, nation-state investments in data analysis, and other forces and trends are converging into a surveillance nightmare for many U.S. national security operators. Think of the risks of operating in Beijing—as articulated by former U.S. officials—with widespread CCTV networks that could be run through facial recognition or even gait recognition to potentially identify people and meetups; or with Russia’s expanding biometric surveillance infrastructure, with a direct plugin to the Federal Security Service—participation in which is increasingly mandatory at border crossings and airports.
Other agencies have made similar points in recent years about the continued proliferation of digital devices and technologies, the explosion of data creation and collection, and the degree to which modern platforms and technologies can accelerate surveillance’s speed, scale, and in some cases, automation. Then-CIA Director William Burns said in an April 2022 speech at Georgia Tech that UTS “means that intelligence officers are being watched, tracked, and observed all the time. … [T]his has prompted us to fundamentally rethink how we do our operations.” A September 2022 Government Accountability Office report on the information environment and the Defense Department wrote that because “modern devices, systems, and locations generate, retain, and share enormous volumes of data for broader use,” data such as service members’ online purchases and information collected from Defense Department weapons platforms “can be collected and shared publicly or can be acquired from data brokers”—posing risks to force protection, operations security, and the safety and security of people’s family members, and enabling remote surveillance and intelligence collection.
According to a recent book chapter on the intelligence threat environment, “[I]n the current digital age, there is little separation between the digital narrative of our professional and personal lives,” and as a result, the act of “managing this narrative well provides the security, protection, and freedom of movement to execute successful operations.” Discussions of how to thwart adversaries’ analysis of UTS-collected data—the idea being, perhaps an adversary can surveil but not use the insights effectively—likewise speak to the ever-evolving cat-and-mouse game afoot.
As the OIG report detailed with its Mexico example, however, this is not, in fact, a “game”—failures to protect U.S. personnel from pervasive surveillance and mitigate its effects can have serious, life-or-death consequences.
The methodology used by the cartel in 2018, as the Justice Department OIG describes it, speaks to the “ubiquitous” nature of UTS and the degree to which agency-specific countermeasures may be insufficient to address the scale of the data privacy and security problem. Notably, the report does not state, as Reuters pointed out, that the hacker in the 2018 Sinaloa drug cartel incident obtained the FBI official’s mobile geolocation data through “hacking” per se. Nor did it say that the hacker obtained the data “from” the phone itself. Instead, the hacker “was able to use the ALAT’s mobile phone number to obtain calls made and received, as well as geolocation data, associated with the ALAT’s phone” (emphasis added). Perhaps it was indeed a hack, and the Justice Department OIG used this phrasing to obscure the specific sources and methods used to compromise the device; the hacker could have broken into the mobile device and pulled the call and geolocation data from it in a manner that fits the above verbiage. But the fact that the OIG did not simply say that the device had been hacked opens up the possibility that the hacker “obtained” it from another source, such as a criminal data reseller or a commercial data provider. A location data vendor, for instance, might offer the sort of information the cartel was seeking.
Either way, the fact that this ambiguity could even be argued speaks to the breadth of the UTS problem: There are numerous, pervasive ways to monitor U.S. personnel. The use of a Mexico City camera network to track and kill FBI informants is yet another UTS element.
Looking Beyond One Phone, One Camera Network, and Mexico
In addition to apparent internal failures of training, tradecraft, and mitigation planning, the report underscores the “pervasive” scope of the problem. Foreign-constructed infrastructure is not the only reason that U.S. personnel and their contacts in foreign countries face digital technology-driven surveillance threats. U.S. personnel are forced to navigate gargantuan surveillance webs—in many cases developed by private industry—that create points of connection with, data trails into and out of, and potential hacking entry points across their phones, apps, smart wearables, vehicles, home appliances, laptops, credit cards, connected medical devices, and more. These surveillance webs are often opaque, hard to escape, and indeed pervasive—and because they involve most American consumers, they also entangle many government employees, contractors, and their families and professional contacts.
Government agencies and private contractors can tackle some of these vulnerabilities through piecemeal mitigations, which are certainly better than nothing. On the operational front, for example, researchers discovered in 2018 that the fitness-tracking app Strava was publishing the geolocations of users—including those of U.S. military personnel deployed overseas—on the internet. The Defense Department then issued a memo prohibiting department personnel from “using geolocation features and functionality on government and nongovernment-issued devices, applications, and services while in locations designated as operational areas.”
But such efforts do not address the underlying location data tracking mechanisms present in mobile phones, mobile apps, wearable devices, an increasing number of vehicles, and so on. These tracking mechanisms are still typically activated by default, commonly pull in data continuously (as systems are used, even in the background), collect far more data than most people (even perhaps many Defense Department personnel) fully understand, and can do so with minimal legal and regulatory constraints. Efforts like the Defense Department memo—that is, internal agency policy decisions—do not change this last point: They shape how government employees use private-sector-built technology but do not govern the technology companies and corporate technology practices themselves.
These piecemeal efforts still place a tremendous burden on an FBI legal attaché working overseas, a diplomat on a site visit in Europe, or even a U.S.-based defense contractor to try to clean up their digital lives. As the former CIA director and others cited above have noted, it is also laborious for the U.S. government to have to mitigate issues that are catalyzed or exacerbated by underlying, widespread private-sector data practices. Meanwhile, the opportunities that U.S. foreign adversaries have—especially sophisticated and well-resourced ones like Beijing—to continually update their UTS infrastructure, including by potentially applying artificial intelligence (AI) to acquired data, are numerous.
A Wider View on the Problem
Recognizing the degree to which this problem may partly be self-created—the U.S. enabling or encouraging industry-driven data creation, analysis, collection, and transmission across virtually all elements of daily life and the modern technology stack—prompts several immediate questions.
If elements of the U.S. executive branch are rightfully spending time and resources to figure out the most appropriate ways to leverage open-source intelligence (OSINT) for foreign intelligence and other purposes—with the proper democratic privacy and civil liberties guardrails—why are several parts of government (including Congress) evidently so behind in thinking about how adversaries can exploit the data-laden environment against us? If U.S. policymakers talk about U.S. tech sector growth as an important component of American power (which has some truth to it), how is a vision of data as a strategic, security-strengthening resource squared against the many counterintelligence problems accelerated by the proliferation of private-sector digital technologies, data ecosystems, and surveillance systems? When Congress continues to cast aside strong, comprehensive data privacy and security protections—such that too many companies are data-hoarding and too many systems are data-disseminating—what opportunities is that creating for adversaries, and what burden is it placing on national security operators?
The answers are numerous and complex. Perhaps, for instance, some policymakers and operators are accustomed to thinking more about how the United States might exploit various technologies than about how foreign adversaries may do the same.
But one persistent problem is the mismatch between the policymakers with the authority to act on national security and those with the authority to act on privacy. In Congress, committees such as the House Committee on Energy and Commerce have jurisdiction to write bills focused on broader consumer privacy issues that would encompass UTS-related commercial surveillance technologies, but the committee is neither principally focused on, nor the most deeply steeped in, national security issues. The House and Senate armed services and intelligence committees, by contrast, ostensibly have a much deeper understanding of the national security threat space and possibly how UTS-related issues fit into the picture—but they are not the ones that would be writing broader data security and privacy laws to curtail more problematic, pervasive practices. Experts at the Defense Department or CIA, to give another example, might understand UTS threats better than most others, but they do not have the authority to write laws. Likewise, any changes they might push to their acquisition regulations could meaningfully shape how Defense Department or CIA suppliers design technologies yet cannot regulate technology in the American private sector writ large.
To be very clear, new U.S. laws and regulations on data and technology are not going to mitigate every UTS-associated issue in adversary countries. For instance, a comprehensive U.S. privacy law is not going to prevent the Chinese government from setting up increasingly interconnected networks of surveillance cameras in major Chinese cities and around the country to monitor people’s movements. Nor are U.S. data security measures for advertisers, AI companies, and other private-sector players going to prevent hypothetical hackers in St. Petersburg or Tehran from attempting to tunnel into U.S. diplomats’ mobile devices, or from digitally tailing people as they drive connected vehicles around foreign countries.
But more structured legal and regulatory approaches in the United States could certainly better mitigate the challenges of the UTS environment that U.S. personnel face. Comprehensive privacy laws could curtail the degree to which U.S. smartphones—and, increasingly, devices like wearables and software applications such as AI chatbots—leak a steady stream of data. Federal, cross-sector data security regulations (even if varied based on sector and use case) could lead to better industry-implemented controls over ubiquitous digital processes like real-time bidding (RTB)—the algorithmic auction process by which advertisers reach the users of online services and digital devices, including people carrying smartphones.
Right now, RTB processes can propagate tremendous amounts of data that foreign adversaries could potentially exploit—and which would be most effectively governed by standard, minimum privacy and security requirements. And at a base level, even cybersecurity requirements could help shore up the threat environment and limit opportunities to hack into U.S. devices. These security requirements could address anything from the widely known problems with the Signaling System 7 (SS7) telecommunications protocol to insecure-by-design connected devices that create UTS vulnerabilities in digital supply chains.
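To make the RTB data-propagation problem concrete, the following is a minimal, hypothetical Python sketch of how a bid-request broadcast works. All names, field labels, and prices are illustrative (loosely echoing OpenRTB-style fields, not the actual specification); the point it demonstrates is that every participating bidder receives the device and location payload, whether or not it wins the auction or serves an ad.

```python
from dataclasses import dataclass

@dataclass
class Bidder:
    """A hypothetical demand-side platform participating in the auction."""
    name: str
    price: float          # the fixed amount this bidder offers (illustrative)
    seen_requests: list   # every bid request this bidder has received

    def handle(self, bid_request: dict) -> float:
        # Each bidder receives -- and can retain -- the full request payload,
        # including precise geolocation, before any winner is chosen.
        self.seen_requests.append(bid_request)
        return self.price

def run_auction(bid_request: dict, bidders: list) -> str:
    """Broadcast the request to all bidders; the highest bid wins."""
    bids = [(b.handle(bid_request), b.name) for b in bidders]
    return max(bids)[1]

# Illustrative bid request; field names are simplified, not the real spec.
request = {
    "device": {"os": "iOS", "ifa": "hypothetical-ad-id"},
    "geo": {"lat": 19.4326, "lon": -99.1332},   # precise location
    "app": {"bundle": "com.example.fitness"},
}

bidders = [Bidder("dsp_a", 0.40, []), Bidder("dsp_b", 0.75, []), Bidder("dsp_c", 0.55, [])]
winner = run_auction(request, bidders)

# The privacy-relevant outcome: the losing bidders saw the location data too.
losers_with_data = [b.name for b in bidders if b.name != winner and b.seen_requests]
```

Only one bidder wins and serves an ad, but all three end up holding the location payload; this is the structural reason that minimum privacy and security requirements on the auction process itself, rather than on any single winning advertiser, would be needed to limit the data's spread.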
These more structured legal and regulatory approaches could bolster the defenses on the U.S. side, such as by lowering the likelihood that certain devices and systems are problematically leaking or transmitting sensitive data by design. They could also raise costs on the adversary’s side, such as by compelling a threat actor to expend more resources, spend more time, or burn more advanced capabilities to carry out UTS against a target—rather than simply buying data or setting up an RTB account to grab it.
The U.S. national security operators on the ground in foreign countries will always have to take on some degree of additional, specialized work to mitigate UTS-related risks—be they the risks of sophisticated, well-resourced, and persistent nation-states like Beijing or Moscow penetrating mobile devices; or, as in the case of the Justice Department OIG report, a drug cartel getting data from a phone through opaque means and using a camera surveillance network to identify and kill U.S. informants. Yet wherever the United States can reduce that burden through better federal privacy and security legislation for digital technologies and data—improving national security protections across the board and protecting Americans writ large in the process—it should take immediate steps to do so.
– Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; the scholar in residence at the Electronic Privacy Information Center; and a nonresident senior fellow at the Atlantic Council. Published courtesy of Lawfare.