Kodak to Deepfakes: Publicity Rights and Abuse of Our Likenesses

As increasingly realistic deepfakes depict us against our will, a century-old law originally made for cameras may offer a solution.

Scan of a person’s face. (Tero Vesalainen/Shutterstock/OrboGraph, https://orbograph.com/deepfakes-and-check-fraud-what-is-the-connection/; CC BY-NC 4.0, https://creativecommons.org/licenses/by-nc/4.0/).

In the past few weeks, news about Grok—xAI’s generative artificial intelligence (AI) tool that is integrated into social media platform X—has taken the internet by storm. With few safeguards, Grok has allowed users to easily remove the clothes of individuals in photos. Unsurprisingly, users have abused this feature, posting AI-generated photos of scantily clad classmates, colleagues, and celebrities to X and other social media platforms.

Deepfakes are not a new phenomenon, but Grok's ability to generate sexualized deepfakes, combined with its integration into X, has flooded one of the world's most popular social media platforms with millions of them. This scenario is becoming increasingly common. As generative AI technology has improved significantly in the past few years, the models have facilitated deepfakes that are increasingly realistic, affordable, and versatile. Although many people associate deepfakes with these types of intimate images, deepfakes can be realistic depictions of individuals doing anything. Their existence increasingly blurs the line between fiction and reality.

While intimate deepfakes harm victims, so do deepfakes that manipulate a victim's image or voice to articulate opinions that are not their own or to falsely suggest connections with individuals, brands, and products. In one recent nonintimate deepfake case, an educator in Baltimore, Maryland, almost lost their job and faced death threats over an audio deepfake of them shouting racist slurs. In a more high-profile instance, Taylor Swift had to reveal her political opinions to counter a deepfake of her endorsing Donald Trump for president and to protect her reputation. And a deepfake of the governor of Maine depicting her giving hormones to children distorted her policies and undermined her ability to speak on her own behalf. The modern internet has allowed these AI-generated deepfakes to spread like wildfire. Everyone from Oprah Winfrey to middle school students has been a victim.

Last year, spurred by some of these cases, Congress enacted the TAKE IT DOWN Act to counter the spread of deepfakes. The law criminalizes the knowing publication of nonconsensual intimate images—including AI-generated ones. It also requires online platforms to remove intimate images within 48 hours of receiving notice from the victim and to make reasonable efforts to identify other identical copies of the image. While many experts have lauded the spirit of the law, they have also criticized it for ignoring free speech concerns, encouraging censorship, and not providing a private cause of action against platforms that shirk their duties. Many observers have concluded that the law is likely unconstitutional due to its overbreadth and lack of free speech exceptions. The law also covers only intimate deepfakes, despite the apparent harms of other deepfakes.

There is a better path that does not require Congress to try (with limited success) to reinvent the wheel. Over a century ago, another law emerged in response to similar concerns about a new image-reproducing technology. In a recently published article in the Arizona State Law Journal, I propose that the right of publicity could once again be an invaluable defense against technology that circumvents control over our likenesses.

The Kodak and the Right of Publicity’s Origins

At the end of the 19th century, George Eastman released the first portable camera, the Kodak. Some were excited by this technological marvel, but many reacted in horror. Before the Kodak, individuals had some expectation of privacy—even in public spaces—because their actions could only be described rather than captured on film. Photography's evolution from unwieldy cameras for professional portrait daguerreotypes to a compact, consumer-friendly product shattered this perception. Newspapers, moralists, and even a president denounced the new technology's dangers. They were concerned not just with others taking their photo but also with the ability of the photographer to share it across the community. The development of mass media during the 19th century allowed those images to be shared not just with immediate neighbors but with millions across the nation.

Amid these concerns, Samuel Warren and Louis Brandeis—the future U.S. Supreme Court justice—responded by writing their 1890 Harvard Law Review article, “The Right to Privacy.” Like many at the time, they were concerned about the combined ability of the portable camera and mass media to reveal our most private lives. In their article, Warren and Brandeis proposed a new legal right to privacy that would ultimately form the right of publicity. This right would not only help prevent uncompensated economic uses of our likenesses but also prevent other uses that harm human dignity. Warren and Brandeis remarked that photos could cause “mental pain and distress … far greater than could be inflicted by mere bodily injury.” The article has proved influential: today, it is one of the most famous and most cited law review articles ever written.

The public’s fear about the rise of deepfakes echoes Warren and Brandeis’s concerns. Both the camera and AI allow anyone to capture another’s likeness. Meanwhile, mass media and the internet share content with millions. Of course, the power and scope of AI and the internet magnify the possibilities and harms. No longer are images limited to what has occurred or to a geographically limited readership. AI tools allow us to create deepfakes of anyone doing practically anything. The creators can then share those deepfakes online, where practically anyone can access them.

The Limits of Anti-deepfake Claims

Warren and Brandeis’s approach in 1890 suggests two important lessons for addressing the harms of today’s deepfakes. First, the law must address not just the creation of one’s likeness but also its dissemination. Second, a solution must be attuned to the types of harms at stake. Here, that means recognition of and damages for both economic and dignitary harms. Deepfakes can lead to loss of employment opportunities or may use one’s commercial likeness without compensation. But deepfakes may also strip us of control over our likeness and harm our reputations, making it difficult to engage in society fully and not feel related shame.

Politicians, academics, and think tanks have proposed a variety of laws to help restrict the spread of deepfakes. However, most of these laws fall short because they fail to address one or both of these two lessons.

The greatest risk of dissemination is through online platforms. Yet Section 230 of the Communications Decency Act immunizes online platforms from liability for most of their users’ torts. This means that platforms cannot be held liable for defamation, intentional infliction of emotional distress, false light, or other promising causes of action, which removes any legal incentive for them to take deepfake content down.

Some causes of action are not blocked by Section 230, but these are not conceptual matches for the range of harms that deepfakes could inflict. Laws against child sexual abuse material (CSAM) could potentially be applicable, but they are limited to sexualized deepfakes of children. Section 230 excludes intellectual property claims, allowing victims to potentially raise copyright or trademark infringement claims against the creator and the hosting platform. However, these claims would require victims to have rights in copyrighted material or a trademark, which seems unlikely in most cases.

Except in the case of selfies, the copyright-owning photographer is typically someone else. Even if we did have the rights, copyrights in photographs tend to be very narrow and protect only the exact or a substantially similar photo, not our likenesses. Very few people have a trademark in their likeness, and even then, it usually covers only specific depictions, such as the registered trademark for a video clip of Matthew McConaughey saying “Alright, alright, alright.” Furthermore, both claims focus primarily on economic harm, not dignitary harms.

Restoring the Right of Publicity

In between claims blocked by Section 230 and generally inapplicable claims, the right of publicity offers an overlooked third path that can both curtail dissemination and target the most harmful deepfakes. The right of publicity grants individuals some control over the use of their name, image, and likeness. Generally speaking, a right of publicity claim requires (a) the defendant to have used the plaintiff’s identity, (b) for the defendant’s (commercial) advantage, (c) without consent, and (d) resulting in an injury. Unlike the TAKE IT DOWN Act, the right of publicity is not limited to intimate images, but can cover all misappropriations of one’s likeness.

First, the right of publicity could be considered an intellectual property right and thus be excluded from Section 230. Courts are split on whether Section 230’s intellectual property carve-out covers the right of publicity. Yet the prevailing definition of the right of publicity is that it is a type of intellectual property. (A federal right of publicity law, such as the NO FAKES Act, could perhaps clarify that the right of publicity is an intellectual property law for purposes of Section 230.) While scholars have warned about the dangers of a full-throated endorsement of this relationship, recognizing this reality would require platforms to remove deepfakes or risk being held liable for violating victims’ right of publicity. This would encourage platforms to adopt a notice-and-takedown regime similar to those that currently exist for copyright and trademark infringement. As with copyright and trademark, individuals could submit takedown notices directly to the platforms. Once a platform learns that user-generated content is unlawful, it must remove the content or risk liability.

While the TAKE IT DOWN Act also includes a notice-and-takedown regime, it lacks the First Amendment protections that a right of publicity regime could provide. Under United States v. Alvarez, the First Amendment protects false speech. A deepfake commenting on a politician’s policies or a parody of a pop culture figure would often present strong cases of lawful free speech. Courts regularly employ several different tests to balance free speech interests with individuals’ rights of publicity. These built-in First Amendment protections offer a more cautious approach than mandatory removal under the TAKE IT DOWN Act. The intellectual property notice-and-takedown regimes also permit counternotices, which allow platforms to consider the reported party’s side of the story. Together, these offer a more robust framework for facilitating takedowns while protecting lawful speech.

Victims could also bring individual claims against platforms for right of publicity violations. Enforcement under the TAKE IT DOWN Act, by contrast, is left solely to the government. In similar settings, the lack of a private cause of action has resulted in few cases being brought. A private cause of action also allows the victim to direct the litigation.

Second, if we recognize and (to the extent necessary) restore the original understanding of the right of publicity, it can consider both economic and dignitary harms. 

As previously mentioned, Warren, Brandeis, and others at the turn of the 20th century were concerned about dignitary harms stemming from unauthorized uses of one’s likeness. Over the past century, however, the right of publicity has become increasingly associated with economic rights. This was especially true after 1977, when the Supreme Court’s lone right of publicity case, Zacchini v. Scripps-Howard Broadcasting, referred to the right as a type of intellectual property and described it in economic terms. The right of publicity grew distant from the dignitary purpose it had in Warren and Brandeis’s time. Many scholars have called for a restoration of the right’s dignitary purpose, but largely to no avail.

Nonetheless, the right has never been fully severed from its dignitary aims. Some jurisdictions, such as California, have remained cognizant of dignitary harms and have adopted right of publicity tests that do not require a commercial use, but rather a more general “advantage” to the user.

Deepfakes offer an opportunity to restore dignity to the right of publicity. Heightened concerns about protecting one’s likeness from the harms deepfakes pose present a compelling case for restoring the right’s dignitary dimension and removing the explicit commercial use requirement.

Indeed, today’s situation amplifies the harms of 1890. Insights from sociology stress the importance of controlling our image and presenting ourselves to the world on our own terms, a process of “dramatic realization.” While the camera could surreptitiously capture someone’s actual image, deepfakes can replicate one’s likeness doing any manner of things they never even contemplated, let alone did. This further erodes individuals’ waning control over their image and can amplify the risk of reputational harm and social isolation.

Economic theory alone cannot address dignitary harms. Other scholars have explained how economic loss offers an incomplete remedy when dignitary harms are also present. If we ignore the dignitary harms deepfakes can inflict, such as teachers being associated with racist beliefs, we can offer only a hollow solution.

Limiting violations under the right of publicity to commercial uses would also effectively limit protection to the rich and famous. Others have argued that protecting only famous people with commercial value in their likenesses is inequitable. Deepfakes underscore this reality. Advances in AI tools mean practically anyone can make a deepfake of anyone doing anything. Harmful deepfakes can affect everyone. Democrats and Republicans, rich and poor, Black and white, teachers and celebrities—all have been victims of deepfakes. The dignitary harm exists regardless of the commercial value of one’s likeness.

Commercial use also overlooks the role of intent. Especially in the deepfake context, the choice of victim is intentional. The choice may be for commercial reasons, such as having a celebrity appear to endorse a product. It could also be for reasons including prurience, revenge, or schadenfreude. It is rare that the choice does not confer some psychological or material advantage on the creator.

Conclusion

While the world of AI-generated deepfakes and the internet would have been unimaginable to Warren and Brandeis, their concerns about the combined dangers of reproductive and sharing technologies were prescient. The parallels between the Kodak camera and deepfakes offer a valuable historical model for responding to the current moment. By navigating platform liability and dignitary harms, the right of publicity may offer the best hope for meaningfully curtailing deepfakes.

– Michael Goodyear, Published courtesy of Lawfare

