Redress for Victims of Generative AI: Copyright Infringement and Right of Publicity Claims

By: Simon Graf

Within the first month of 2024, a series of disturbing stories surfaced. They all had one thing in common: some flavor of generative artificial intelligence (AI) was used to produce content in which a public figure’s voice or likeness was featured without consent. The AI-generated products were circulated widely and rapidly—and in one case, recipients were specifically targeted for dissemination.


On January 9, 2024, the audio for an hour-long George Carlin comedy special was uploaded to YouTube. The problem is, Carlin died in 2008. The special, titled “George Carlin: I’m Glad I’m Dead,” began with a voiceover identifying itself as an AI engine used by the YouTube channel’s operators. It explained that “it listened to the comic’s 50 years of material and ‘did [its] best to imitate his voice, cadence and attitude as well as the subject matter I think would have interested him today.’” Carlin’s estate filed a lawsuit alleging copyright infringement and a violation of Carlin’s right of publicity (ROP).


Around the same time, several suspicious advertisements starring Taylor Swift began to circulate on social media. Addressing her fans, Swift explained that she was “thrilled” to be giving away free Le Creuset cookware sets. To claim the free cookware, she explained, viewers needed only click a button below the ad and answer some questions. Neither Taylor Swift nor Le Creuset was involved in the video’s production. In fact, the ads were a scam, superimposing an AI-generated version of Swift’s voice over video clips of the singer beside Le Creuset Dutch ovens.


On Sunday, January 21, 2024—two days before New Hampshire’s presidential primary—some voters received a phone call from President Biden. The President urged New Hampshire residents not to vote in the presidential primary, insisting that “[v]oting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.” The message even concluded by offering the phone number for a former New Hampshire Democratic Party chair. “[T]his message appears to be artificially generated based on initial indications,” the attorney general’s office said, describing the robocall as “an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters.” Only a few days later, sexually explicit images of Taylor Swift generated by AI spread like wildfire across the Internet. Social media platforms scrambled to remove the images, but they were unable to keep up with circulation—one image was viewed forty-seven million times before the original poster’s account was suspended.


Victims of generative AI may pursue one or both of the following options: (1) a federal copyright infringement claim; or (2) a state-law ROP claim.


Generative AI tools have become extremely popular, but as my colleague Molly Murray wrote, they often stand in tension with U.S. intellectual property law—in particular, copyright protections. The Copyright Act grants protection to “original works of authorship,” and ownership of that intellectual property is granted “initially in the author or authors of the work.”


What does that mean for AI-generated works? Courts have held that “human authorship is an essential part of a valid copyright claim.” This holding does not wholesale preclude extending copyright protection to works created by humans with the assistance of generative AI, but in practice it comes close. Recent Copyright Registration Guidance suggests that the U.S. Copyright Office is unlikely to find human authorship in works generated by AI in response to text prompts. In other words, while it is theoretically possible to copyright a work produced through human interaction with an AI program, it has yet to happen, and the parameters for success are unclear. Functionally, then, copyright protection does not currently extend to works produced by generative AI models.


But what about copyright infringement? There are two paths to a copyright infringement claim. First, a person can claim that the AI system infringed a copyrighted work by using it as input to train the model. AI systems are “trained” to create content by feeding the algorithm a vast amount of data of whatever type(s) the AI will generate (text, images, music, etc.). OpenAI, for example, explains that its training process relies upon “large, publicly available datasets that include copyrighted works.” AI companies lean on 17 U.S.C. § 107—the fair use doctrine—to justify training their models with copyrighted works. Second, a person can claim that the AI system infringed by generating output that resembles a copyrighted work. Copyright owners may be able to prove infringement if the AI system (1) “actually copied” (or had access to) their works and (2) created “substantially similar” outputs. Either path would be an uphill battle against wealthy AI companies, and for that reason, a state-level ROP claim may be a better option where available.


The “right of publicity” is the right to prevent unauthorized use of one’s name, image, or likeness (NIL), or other aspects of one’s identity, such as voice. It may be easier for an individual victim to succeed on an ROP claim because the criteria are specific to the plaintiff. A clear disadvantage is that “[t]he ROP is not comprehensively protected by current federal laws.” At the state level, however, the ROP is recognized by at least thirty-five states. Protections vary between states: some are statutory, while others arise from common law. States also differ in the extent to which the ROP protects a person’s identity, whether the ROP is descendible, and whether the claim is available to all persons or only to those with “commercially valuable” NIL (e.g., celebrities and other public figures). The ROP is often governed by the law of the state in which a person is domiciled, but not always. Indiana’s ROP statute, for example, stipulates that it “applies to an act or event that occurs within Indiana, regardless of a personality’s domicile, residence, or citizenship.”


Generative AI tools are becoming increasingly accessible. AI companies erect “guardrails” to prevent users from generating content that violates intellectual property rights, but devoted communities continue to discover new loopholes and exploits. Moreover, when a generative AI tool is open source, users can produce infringing content by manually removing the “guardrails” from the source code. Ultimately, Congress should enact federal legislation to protect the ROP. Such legislation is unlikely to end the unauthorized use of one’s identity in AI-generated content, but it is the first step toward uniform protection for all. Until then, individual victims should investigate whether a cause of action exists at the state level.
