Generative AI Will Increase Misinformation About Disinformation

Speculation by users and the media about disinformation can generate harmful political effects and amplify existing biases.

Illustration of a robot with a speech bubble (Photo: Mohamed Hassan/Pixabay, https://tinyurl.com/3d4phe5z, Free Use)

Much ink has been spilled on the use of generative artificial intelligence (AI) in influence operations and disinformation campaigns. Often the scenarios invoked fall along pretty clean lines: a known state actor, a clear target, a specific goal. The archetypal examples are campaigns like Russia’s Doppelganger or China’s Spamouflage, both of which the U.S. Department of Justice has traced back to specific government-linked entities with clear political aims.

These cases are the exceptions, however, not the rule. A recent case in Australia—in which unsubstantiated headlines across the country led people to believe a foreign state might be behind a bot campaign—demonstrates that in practice disinformation is often a far messier issue. 

Over several hours on Aug. 19, at least 37 accounts on X posted eerily similar messages in support of Australia’s former Defense Minister Linda Reynolds. Reynolds has been pursuing a long-running and high-profile defamation case against her former staffer Brittany Higgins, who was allegedly raped by a colleague in Reynolds’s office in 2019. The case has attracted an intense online following. 

The accounts had generic handles and nonsense biographies, used women’s photographs as profile images, and often listed their location as somewhere in the U.S. They tweeted out comments decrying the “heartless and aggressive attacks” on Reynolds, emphasizing the need for safe spaces for women in politics. The posts’ uncanny valley quality and clearly coordinated timing quickly attracted attention—and derision. While impression metrics suggest the original posts were seen by only a handful of people, screenshots reposted by real X users reached hundreds of thousands of viewers. These screenshots were accompanied by user comments speculating that Reynolds had paid for an AI-fueled bot campaign to defend herself.
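For readers unfamiliar with how this kind of coordination is typically spotted, one simple heuristic is to flag near-duplicate posts published within a short window of each other. The sketch below is a minimal illustration only: the accounts, post texts, similarity threshold, and time window are all invented for the example and bear no relation to the actual network’s data.

```python
# Toy illustration of a common coordination heuristic: flag accounts whose posts
# are near-duplicates published close together in time. All data below is invented.
from datetime import datetime
from difflib import SequenceMatcher

posts = [
    ("acct_001", datetime(2024, 8, 19, 10, 2), "The heartless and aggressive attacks on her must stop."),
    ("acct_002", datetime(2024, 8, 19, 10, 5), "These heartless and aggressive attacks on her must stop now."),
    ("acct_003", datetime(2024, 8, 19, 10, 9), "Women in politics deserve safe spaces, not heartless attacks."),
    ("acct_004", datetime(2024, 8, 20, 14, 0), "Completely unrelated post about the weather."),
]

SIMILARITY_THRESHOLD = 0.7   # how alike two posts must be (0 to 1); arbitrary for this example
WINDOW_MINUTES = 60          # how close in time they must land; also arbitrary


def similar(a: str, b: str) -> float:
    """Crude text similarity; real investigations use far more robust methods."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


# Compare every pair of posts and report suspiciously coordinated ones.
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        acct_a, time_a, text_a = posts[i]
        acct_b, time_b, text_b = posts[j]
        minutes_apart = abs((time_a - time_b).total_seconds()) / 60
        score = similar(text_a, text_b)
        if score >= SIMILARITY_THRESHOLD and minutes_apart <= WINDOW_MINUTES:
            print(f"Possible coordination: {acct_a} and {acct_b} "
                  f"({score:.2f} similar, {minutes_apart:.0f} min apart)")
```

In practice, of course, the users who spotted the Reynolds posts needed no script at all; the pattern was obvious on sight.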

This framing of the tweets as a targeted and paid effort by Reynolds (who belongs to the right-wing Liberal Party) was reinforced when a small left-wing media outlet offered a reward of $5,000 to anyone who could prove “who is coordinating and paying for” the tweets. On Aug. 20, the mainstream media picked up the story in print, on radio, and online. In less than 48 hours, the tweets went from being seen by almost no one to being national headline news. Reynolds was forced to publicly deny any knowledge of the accounts; her spokesperson said, “Senator Reynolds was unaware of these bots. The idea that Senator Reynolds would engage with something like this is preposterous.”

Many of the media headlines, lead paragraphs, and expert comments contained the suggestive phrases “foreign actor” or “overseas operation,” with some outright stating that a foreign state actor was the “likely” responsible party. In turn, Reynolds’s own lawyer informed the court that his client was potentially being targeted by foreign state actors. Justice Paul Tottle said that the X posts “have the hallmarks of a coordinated pattern of activity that merit the description of an attack.”

To be clear: There is no evidence to support the claim that a foreign state actor was involved with these tweets. There is also no evidence that this was a targeted attack. This was pure speculation, elevated through the media cycle and presented to the Australian public and to the court in a high-stakes defamation case. 

So what were the accounts really doing? 

When I looked into these profiles, what I found was a sprawling, technically unsophisticated operation, little different from any other commercial bot network except for its use of generative AI. There was no effort to build convincing or consistent personas for the accounts and no obvious emphasis on particular issues or locations beyond a general geographic focus on the U.S., Australia, and Canada. The same accounts would tweet about multiple countries and adopt multiple personas—for example, posing as a woman from Los Angeles interested in municipal issues in one tweet and as a Canadian speaking out against Pierre Poilievre’s far-right policies in the next. They tweeted about an eclectic array of political and nonpolitical topics seemingly tied to the news cycle, suggesting they ingested some kind of news feed and generated comments from it. There was no clear partisan leaning; they posted both for and against each side of politics across the U.S., Canada, and Australia, and there was no apparent attempt to sow political division. Instead, the accounts tended to argue for moderate, civil, and policy-based positions, including rejecting misinformation and xenophobia and supporting action on climate change.
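That working hypothesis, accounts that ingest a news feed and auto-generate bland, on-topic comments under interchangeable personas, can be sketched in a few lines. Everything below is an invented illustration: the persona pool, the sample headlines, and the generate_comment() stub are stand-ins for whatever feed and text generator the operators actually use, most likely a commercial LLM API.

```python
# Minimal sketch of the kind of pipeline hypothesized above: ingest headlines,
# generate generic civil-sounding replies, post them under assorted personas.
# Purely illustrative; no real platform calls are made.
import random

# Hypothetical persona pool; the real accounts mixed locations and identities freely.
PERSONAS = [
    {"handle": "user83921", "claimed_location": "Los Angeles, US"},
    {"handle": "user10284", "claimed_location": "Toronto, Canada"},
    {"handle": "user55710", "claimed_location": "Sydney, Australia"},
]

# Stand-in for a scraped news feed; in practice this would be an RSS or API pull.
SAMPLE_HEADLINES = [
    "Former defence minister continues high-profile defamation case",
    "City council debates animal services funding",
    "Opposition leader criticised over housing policy",
]


def generate_comment(headline: str) -> str:
    """Placeholder for an LLM call: returns a generic, civil, on-topic comment."""
    templates = [
        "It's sad to see such heartless and aggressive attacks. {topic} deserves a civil debate.",
        "We need less toxicity in public life. {topic} is a reminder of that.",
    ]
    return random.choice(templates).format(topic=headline)


def run_once() -> None:
    """One pass: each persona comments on a random headline, regardless of country."""
    for persona in PERSONAS:
        headline = random.choice(SAMPLE_HEADLINES)
        post = generate_comment(headline)
        # A real operation would call the platform API here; we just print.
        print(f"[{persona['handle']} / {persona['claimed_location']}] {post}")


if __name__ == "__main__":
    run_once()
```

The point of the sketch is simply that nothing about this behavior requires state-level resources; it is the sort of thing a single operator could wire together in an afternoon.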

Screenshots showing the timeline of a single account in the network (L) and tweets from multiple accounts in the network on Aug. 19-20 (R).

This moderate tone holds even for the Reynolds tweets. When viewed in isolation and stripped of the broader political context (which is how the AI would have approached it), they simply express empathy and call for making politics a less toxic space for women. It’s hard to argue with that. It was the human readers who applied a partisan framing, not the AI accounts themselves.

Many of the accounts followed large numbers of cryptocurrency, marketing, and influencer accounts. It seems very possible that the main purpose of these accounts is simply to be sold to people looking to buy fake followers. The function of the AI-generated tweets may be to make the accounts look real enough to avoid being banned by X’s bot detection systems—although it’s unclear how well that is working, as dozens of the accounts appear to have been taken down in the past couple of weeks.

In short: It is no more plausible that Reynolds hired these accounts to tweet about her than it is that Justin Trudeau, Kamala Harris, Anthony Albanese, Peter Dutton, or the leadership of Los Angeles’s Animal Services did. Her name and her case just happened to be in the headlines.

State actor operations on social media are a vanishingly minuscule proportion of the overall activity on any platform. Attributing any set of activity to a covert state actor requires a high bar of proof; when you hear hoofbeats, you’d better have good reasons for telling everyone around you that it’s a zebra rather than a horse.

The types of data available to independent researchers and journalists from social media platforms are limited (and becoming even more so). Where the activity is taking place only on social media, most credible attributions to state actors are made either by the platforms themselves, which have access to a much broader range of internal data, or by law enforcement, which can access that and other data with warrants, as in the case of the recent Doppelganger indictments. Other independent researchers can then refer back to this in their own work, but that key attribution tends to hang on either platforms or government agencies.

There are some factors that can lend weight to the possibility that activity is linked to a covert state actor. Examples include a network of accounts sharing a state’s propaganda in a coordinated manner (for example, Spamouflage accounts, which share the official posts of Chinese diplomats or Chinese state media), or a network engaged in a sustained, large-scale, and resource-intensive campaign with no discernible commercial motive but a clear political agenda, as in the case of Doppelganger.

None of that is present in this case. Operating this network is well within the capacity of a small group or even a single person. They’re not retweeting or sharing content or amplifying any particular narratives. There is a plausible commercial motive but no consistent or clear political agenda, audience, or focus issue. 

I’ve been arguing for years that we spend far too much time fixating on state-linked disinformation actors and not nearly enough time grappling with the huge impact of the thousands of groups and individuals around the world who generate political disinformation for profit. The same principle applies in the era of generative AI. For all the energy that is (rightly) devoted to the potential ways in which state actors might misuse generative AI for disinformation, the reality is that the vast majority of deceptive AI content will be created by ordinary people looking to make a quick buck. 

The Reynolds case illustrates that the impacts of AI-generated content can be political even when the motive is likely to be entirely commercial. In this case, a handful of tweets from a rudimentary bot network—apparently built to boost cryptocurrency influencers—made national headlines, forced a politician to publicly defend herself, led her lawyer to give a court incorrect information in a high-stakes defamation trial, created an online faction of people who now appear to firmly believe Reynolds hired bot accounts to defend her, and likely left many in the Australian public with a vague impression along the lines of “Linda Reynolds, something something, a bit of dodginess involving bots?”

Meanwhile, it seems likely that the people behind this network barely know who Linda Reynolds is, don’t care about her legal case, and may not even be aware that their network kicked up such a stink in Australia.

This case also demonstrates that AI content doesn’t actually need to be convincing in order to have an effect. The tweets from these accounts created an impact precisely because they weren’t fooling anyone: their obvious fakery affirmed the preexisting political beliefs of the X users who came across them, users who were already prepared to believe that Reynolds might have hired bots. This was the impetus that drove the initial online engagement and kick-started the traditional media cycle. It wasn’t the beliefs or intentions of the people behind the tweets that made the difference in this case—it was the beliefs of the people who saw them.

AI is about to significantly increase uncertainty in disinformation research, in an already highly uncertain field. Disinformation operations will be harder to identify, harder to map out, and harder to attribute. People have a tendency to interpret uncertainty in a way that affirms their biases, irrespective of evidence or the lack thereof. Similarly, the strong incentives for journalists and experts to brush over uncertainty and race to beat their competitors will remain the same. No one gets paid to say, “I don’t know” or “this isn’t a story.” There is a real risk that the consequence of these cumulative factors will be increasingly widespread misinformation about disinformation. 

There is no easy way to cut through this Gordian knot of confusion, but there are things that can help. The major amplifier is still the traditional media—even in the age of generative AI, strictly adhering to basic journalistic principles can make a huge difference. This may mean slowing down the reporting process, taking time to fact-check and to push for evidence and—where evidence is scant—being transparent with readers about the lack of proof rather than allowing unfounded assertions to go unchallenged. It is more important to get the story right than to get it first. 

– Elise Thomas is an OSINT Analyst at the Institute for Strategic Dialogue, with a background in researching state-linked information operations, disinformation, conspiracy theories, and the online dynamics of political movements. Published courtesy of Lawfare.
