The Right to Reality

AI-generated content might cause the marketplace of ideas to fail. Recognition of the right to reality might safeguard space for democratic deliberation.
New technologies pose new risks that require new rights. The right to privacy emerged when the camera made private affairs public. The right to be forgotten took root when data shared online for a specific purpose and a finite time became a permanent part of social history. Now, with the spread and evolution of artificial intelligence (AI), there is a need for a right to reality—broadly, a right to unaltered or “organic” content.

Overview of the Right to Reality

My current conception of the right envisions an obligation on social media platforms and other marketplaces of ideas to categorize content according to the following classes:

  • Class 1: Content would have no or negligible risk of having been created, altered, or informed (that is, based on AI-led research) by AI tools. This class would constitute “organic” content, written by humans based on research conducted by humans.

  • Class 2: Content would have no or negligible risk of having been created or altered by AI tools but may be informed by research conducted with AI tools. Content in this class could be known as “tainted” content.

  • Class 3: Content would have moderate risk of having been created or altered by AI tools.

  • Class 4: Content would have a high risk of having been created or altered by AI tools. Both Classes 3 and 4 would qualify as “artificial” content.

If adopted, these classes could appear, like nutrition labels, in a standardized location and in an easily understandable format. Users, readers, and observers could then quickly and clearly determine the extent to which AI contributed to a post or a news story. In my ideal version of this classification system, platforms, newspapers, and other content distributors would eventually allow the public to restrict their content consumption to specific classes of their choosing. With such options in place, the public could vote with their feeds to encourage the creation of more content of a specific class. For example, if a platform distinguished itself by prioritizing Class 1 and 2 content and users flocked to that site, they would send a strong signal to other platforms and content creators to move away from artificial content.
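To make the labeling scheme concrete, here is a minimal sketch, in Python, of how a platform might represent the four classes and let users filter their feeds by class. The names, data structures, and post format are illustrative assumptions on my part, not an existing platform API or a formal part of the proposal.

```python
from enum import IntEnum

class ContentClass(IntEnum):
    """The four disclosure classes proposed above (names are illustrative)."""
    ORGANIC = 1     # Class 1: no/negligible risk of AI creation, alteration, or AI-led research
    TAINTED = 2     # Class 2: no/negligible risk of AI creation/alteration, but AI-informed research
    MODERATE = 3    # Class 3: moderate risk of AI creation or alteration
    ARTIFICIAL = 4  # Class 4: high risk of AI creation or alteration

def filter_feed(posts: list[dict], allowed: set[ContentClass]) -> list[dict]:
    """Keep only posts whose label falls within the user's chosen classes,
    letting readers 'vote with their feeds' as described above."""
    return [post for post in posts if post["content_class"] in allowed]

# Example: a user restricts consumption to Class 1 and Class 2 content.
feed = [
    {"id": "a1", "content_class": ContentClass.ORGANIC},
    {"id": "b2", "content_class": ContentClass.ARTIFICIAL},
]
print(filter_feed(feed, {ContentClass.ORGANIC, ContentClass.TAINTED}))
# -> [{'id': 'a1', 'content_class': <ContentClass.ORGANIC: 1>}]
```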

At this early stage in AI development, though, the right to reality need not be codified or ratified—for now, it’s most important that this right take hold in the public’s imagination. In due time, once platforms, their users, and the public have experienced the informal application of the right and gotten a sense of its proper use as well as some potential limitations and exceptions, this important protection can and should receive legal recognition. Simply naming a right can go a long way toward realizing it.

This approach has worked in similar contexts. Consider, for example, the evolution of the right to repair. The Repair Association, a leading advocate of the ability of consumers to use products, modify them, and repair them “whenever, wherever, and however [they] want,” traces the origin of the right back to the Consumer Bill of Rights proposed by President John F. Kennedy in 1962.

About a decade later, the U.S. Consumer Product Safety Commission was formed. In turn, the once-theoretical Consumer Bill of Rights became the basis for important regulatory activities, such as the establishment of performance standards for products and guidance as to how and when to affix warning labels to products.

Another 20 years passed until the U.S. Supreme Court issued a ruling related to the right to repair. In Eastman Kodak Co. v. Image Technical Services, Inc., the Court concluded that a company with “appreciable economic power” in a specific market could not sell its product on the condition that the buyer also obtain a different or “tied” product.

And, shortly after the turn of the century, the idea of the right to repair worked its way into formal legislation in the form of the Motor Vehicle Owners’ Right to Repair Act. Since then, groups such as the Repair Association have acted on public awareness of and support for the right to repair, introducing legislation in dozens of states and securing its passage in several.

The decades-long history of the right to repair, though, must be compressed for the right to reality, given the rapid and unpredictable developments in the capabilities of AI models and the technology’s unchecked integration into fundamental aspects of day-to-day life. To aid in that acceleration, observers should understand both a potential legal justification for the right and the policy rationale behind it, as detailed below.

A Legal Basis for a Right to Reality

Enforcement of a right to reality on the main information exchanges in the modern era—namely, social media platforms—raises First Amendment issues.

The proper interpretation and enforcement of the First Amendment has fluctuated over time based on the common interests of the public. U.S. Supreme Court Justice Anthony Kennedy endorsed this fluid and evolving conception of the right in his separate opinion in Denver Area Educational Telecommunications Consortium v. FCC. He asserted that “[w]hen confronted with a threat to free speech in the context of an emerging technology, we ought to have the discipline to analyze the case by reference to existing elaborations of constant First Amendment principles.” The Court has specified that one such principle is the preservation of “an uninhibited marketplace of ideas in which truth will ultimately prevail, rather than to countenance monopolization of that market, whether it be by the Government itself or a private licensee.”

A new free speech paradigm must emerge to confront the dis- and misinformation that may soon dominate newspapers, social media platforms, and the information ecosystem generally, and the right to reality should be a core part of this modern conception of the freedom of speech. Absent this right, marketplaces of ideas (social media platforms and other public forums of speech) will most likely be littered with artificial content that directly and significantly diminishes the likelihood that “truth will ultimately prevail.” This market failure may come sooner rather than later: According to the Europol Innovation Lab, upwards of 90 percent of online content may be generated by AI by 2026. The right to reality aids in the increasingly difficult task of distinguishing organic from tainted from artificial content. Yet this right will likely not emerge (at least initially) under federal law.

Though the U.S. Supreme Court has labeled cyberspace as one of the “most important places … for the exchange of views[,]” it has yet to formally declare social media platforms to be public forums as defined by First Amendment jurisprudence. (It is worth noting, though, that that question is partially before the justices this term.) Additionally, the Court has not defined platforms as state actors, and thus the right to reality may not find a legal home in the U.S. Constitution. Here, again, the right to repair model serves as a useful guide, as state governments, rather than the federal government, may be the best venues in which to introduce and enact rights to reality.

Many state constitutions set forth broader freedom of speech and expression protections than the U.S. Constitution. This is especially true with respect to what constitutes state action and which entities may qualify as state actors. For example, the Pennsylvania Constitution states that “[t]he free communication of thought and opinions is one of the invaluable rights of man, and every citizen may freely speak, write and print on any subject, being responsible for the abuse of that liberty.” Missing from this broad protection of speech is any requirement that only government action can qualify as a violation of that right. Consequently, the Pennsylvania Supreme Court has interpreted that affirmative right to permit the state to reasonably restrict the right to possess and use property in the interests of freedom of speech, assembly, and petition.

Commonwealth v. Tate exemplifies this broader protection. In that case, the Pennsylvania Supreme Court found that the expressive rights of demonstrators protesting on the campus of a private college deserved greater constitutional protection than the right of the college to use its property. Similarly, the California Supreme Court, interpreting comparable language, recognized a limited right to freedom of expression under the state constitution where a group of students collected signatures for a petition at a private mall.

The New Jersey Supreme Court likewise interpreted the state constitution’s applicable speech provision to afford a limited private right of action upon the demonstration of some public use. More specifically, the court held that the constitution protected the right of citizens to distribute leaflets at a private university. This conclusion turned on the justices’ broad conception of the role of the state with respect to fundamental rights; unlike the federal government, the justices concluded, the state had an affirmative duty to protect such rights.

This analysis, though, should not be read to suggest that even the state courts interpreting the broadest freedom of speech and expression provisions will necessarily conclude that citizens have a private right of action against social media platforms that fail to maintain a functioning marketplace of ideas. State action jurisprudence is notoriously unsettled and unpredictable. Scholars have detected as many as seven different tests—including the public function test, the joint action test, the nexus test, and the symbiotic relationship test—that courts use to resolve state action inquiries. These tests serve a common purpose: bolstering individual liberty by preventing private individuals from being forced to adhere to constitutional requirements. In other words, the doctrine aims to prevent the Constitution from overriding individual liberty and the freedom of individuals to make certain choices. How best to balance individual liberty and equality is clearly an open and difficult question.

As more public discourse moves online, however, it’s likely that a state court will develop a new, clearer test that I call the “induction” test. Under this test, if the government uses a private space to induce public participation, then that private actor must qualify as a state actor. This test upholds the underlying goals of the doctrine because private actors remain free to decide the extent to which government officials and offices use their property. The largest social media platforms, for example, would clearly satisfy this test, as most members of government, and various government agencies, are active on social media. As of 2020, the Pew Research Center counted 1,362 congressional Twitter accounts and 1,388 congressional Facebook accounts, which were used to send 3.3 million tweets and make 1.5 million Facebook posts over a six-month period. State offices and state legislators have also turned to social media to connect with the public. One state senator from Arizona, for example, made nearly 21,000 tweets or retweets in the course of just eight months. Messages from government offices and elected officials on social media most often contain important information about bills, hearings, and opportunities for public participation. The upshot is that the government induces the public’s continued engagement with social media platforms.

A state court interpreting state law may be more likely than a federal court to introduce this new test because some state courts have already defined state action more broadly. In such a case, the state court would have no obligation to adhere to either federal law or U.S. Supreme Court precedent. The Washington Supreme Court, for one, noted that the state’s constitution omits an explicit state action requirement; as a result, the court has determined that, when analyzing state law, it is “[f]reed from those [state action] restraints,” thereby enabling the court to be “more sensitive to the speech and property interests in each case[.]” Another reason why a state court may be more likely to introduce this test than a federal court is that many state constitutions impose an affirmative obligation on the state to act to protect fundamental individual rights. This duty implies that the state must act to prevent infringements of any such rights by public and private actors alike. That is how the New Jersey Supreme Court, for example, has interpreted the state’s affirmative right to free speech. As summarized by David Howard, the court’s case law demonstrates that the New Jersey Constitution “not only protects free speech from abridgement by the government, but also protects against unreasonably restrictive or oppressive conduct in certain circumstances.” It stands to reason that the court may find a private actor’s conduct unconstitutional regardless of whether that conduct aligns with one of the preexisting state action tests.

Moreover, the legal basis for a right to reality under state law is reinforced by its limited impact on the quantity and content of speech. The right to reality would not require the removal of any content nor discrimination based on the viewpoints asserted by that content. Instead, the right is best categorized as a disclosure requirement. This distinction is legally significant. “Compelled disclosure of information to enhance market efficiency or public safety,” as summarized by law professor Tabatha Abu El-Haj, is “presumptively constitutional[.]” Abu El-Haj’s analysis of case law suggests this is because “information regulation—whether in the form of a prohibition on false and misleading information or a demand for truthful disclosure—is a critical regulatory tool.” This sort of information regulation is especially necessary in the context of AI-altered content, as demonstrated by the policy rationale detailed below.

The Policy Basis for a Right to Reality 

Assuming that modern trends continue, the role of social media platforms as marketplaces of ideas will only grow in years to come. And, yet, these marketplaces may soon be flooded with AI-altered content that users cannot distinguish from “organic” content.

As mentioned above, in as soon as three years, nearly all online content may be generated by AI. This wouldn’t pose a problem if users could easily spot artificial content, yet an early study suggests that artificial content increasingly passes for organic content. Perhaps more troublingly, that same study found that efforts to aid users in detecting artificial content often backfired, with some interventions even causing increased skepticism about the nature of specific content, thereby increasing the distrust and uncertainty that may characterize platforms filled with artificial content.

Public concern about artificial content also justifies a policy response. In a recent poll conducted by the Associated Press, approximately 60 percent of adults agreed that AI tools will accelerate and augment the spread of false information.

Some elected officials appear to have caught on to the need for a right to reality to address these issues. Their tepid support, though, likely needs to transform into a full endorsement if this right has any chance of being enforced in the relatively near future. California State Sen. Scott Wiener, for example, acknowledged that “[t]here are potential risks around misinformation and election interference” arising from AI. Despite those risks, however, Wiener held off from calling for specific reforms to mitigate such pollution of our information ecosystem because “the last thing we want to do is stifle innovation.”

In the aftermath of the 2020 election, the Jan. 6, 2021, attack on the U.S. Capitol, and the perpetual misinformation surrounding the “big lie,” and with the 2024 presidential election around the corner, it is imperative that policymakers prioritize Americans’ ability to access verifiable and accurate information about the candidates and the election itself. This access will help ensure that voters are not duped by deepfake candidates, such as “AI Yoon.” In the lead-up to the South Korean presidential election in 2022, one of the candidates, Yoon Suk Yeol, created an avatar of himself to “meet” with voters—“[w]ith neatly-combed black hair and a smart suit, the avatar look[ed] near-identical to the real … candidate but use[d] salty language and meme-ready quips in a bid to engage younger voters[.]” Though AI Yoon may have been well intentioned, it may hasten the creation of avatars, chatbots, and the like that make it difficult for voters to distinguish the “real” candidate from their artificial clone.

Once steps have been taken to achieve this goal, artificial intelligence innovation surely deserves some attention. Absent safeguards around when, where, and to what ends AI may be used, AI advances may destabilize more than just the marketplace of ideas. Policymakers’ regulatory focus should be on mitigating AI risk rather than advancing an already poorly understood technology.

Some observers may contest the feasibility of the right to reality. Admittedly, how best to technically realize the right is (way) beyond my pay grade. That said, scholars such as Lawrence Lessig anticipate that tools to determine the provenance of information will become available sooner rather than later. To boot, if enacted or even if popularized, the right to reality would likely expedite the creation of such tools by virtue of creating a market for their use—media companies would have to line up to procure them and implement them as soon as possible.
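If such provenance tools do arrive, mapping their output onto the four classes could be straightforward. The sketch below assumes a hypothetical detector that reports a creation/alteration risk score between 0 and 1 plus a flag for AI-assisted research; the detector, the score, and the cutoffs are all assumptions for illustration, since no standard tool or threshold exists today.

```python
def classify(creation_risk: float, ai_informed_research: bool) -> int:
    """Map a hypothetical provenance detector's output to Classes 1-4.
    The 0.05 and 0.5 cutoffs are illustrative, not part of the proposal."""
    if creation_risk < 0.05:                     # no or negligible risk
        return 2 if ai_informed_research else 1  # Class 2 (tainted) or Class 1 (organic)
    return 3 if creation_risk < 0.5 else 4       # Class 3 (moderate) or Class 4 (high)

print(classify(0.01, ai_informed_research=False))  # -> 1 (organic)
print(classify(0.80, ai_informed_research=True))   # -> 4 (artificial)
```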

Importantly, this sort of labeling has worked to mitigate a similar challenge. The Prosocial Design Network has observed that “[l]abels like ‘get the latest,’ ‘stay informed,’ or ‘misleading’ and such” can reduce the likelihood of social media users sharing content that may contain easily misinterpreted information.

On the whole, the sooner tools are developed to categorize content per the classes outlined above, the better. Given that AI labs have shown no signs of slowing down their development and deployment of ever more advanced AI models, it seems unlikely that the creation of watermarking tools and other means to detect AI-altered content will become any easier in the future.

***

Now is the time for a right to reality. As forecast by Melissa Heikkilä in the MIT Technology Review in 2022, AI systems have made it “stupidly easy to produce reams of misinformation, abuse, and spam, distorting the information we consume and even our sense of reality.” The “snowball of bullshit” she predicted is on the verge of becoming an avalanche of malarkey—malarkey that can derail elections, disrupt economies, and diminish trust in core institutions.

The right to reality will mark the latest in a series of novel rights set forth in response to a sizable jump in technology. Legally, the right may take root first in state constitutions that offer broader speech protections than the U.S. Constitution. From a policy perspective, public demand for more transparency as to the degree content has been altered may compel policymakers, platforms, and AI labs to take the necessary action to bring about the right to reality.

The marketplaces of ideas will fail if people cannot assess the origin of those ideas and the extent to which they have been altered by AI. If you’re not sold on this conception of the right, that’s fine; as a matter of fact, that’s great. A robust and inclusive dialogue about the global marketplaces of ideas is needed. Share your thoughts, propose your own idea, do whatever you think is best to ensure that regardless of how AI progresses, the “truth will prevail.”

Kevin Frazier is an Assistant Professor of Law at the Benjamin L. Crump College of Law at St. Thomas University and a Research Affiliate at the Legal Priorities Project. He previously served as a Judicial Clerk to Chief Justice Mike McGrath of the Montana Supreme Court. Published courtesy of Lawfare
