
The Legal Issues Surrounding Deepfakes

In April 2023, Kyland Young, a star of the popular reality TV show Big Brother, brought a right of publicity claim against NeoCortext, Inc., the developer of a deepfake software called Reface. See Young v. NeoCortext, Inc., No. 2:23-cv-02486 (C.D. Cal. filed Apr. 3, 2023). Young claimed that NeoCortext’s Reface, “which uses an artificial intelligence algorithm to allow users to swap faces with actors, musicians, athletes, celebrities, and/or other well-known individuals in images and videos,” violates California’s right of publicity law. Young’s case, which is still pending in the U.S. District Court for the Central District of California, raises important questions about how the law protects famous figures from deepfakes.

Deepfake technology is becoming more accessible to the average user, and as the technology improves, deepfakes will become harder to detect. The law, and any recourse it would provide to those harmed by deepfakes, is lagging behind. The outcome of the Young case will shed much-needed light on how much protection, if any, celebrities will receive from deepfakes under the law.

This post provides some background on deepfakes, outlines the legal tools that may be available to celebrities to combat them, and identifies the barriers to vindicating those rights.

What’s a Deepfake?

Deepfakes are fabricated video clips and audio recordings that, with the help of artificial intelligence, appear to be authentic footage or audio of a real person. Generative Adversarial Networks (GANs), which use algorithms that “take as their inputs large datasets of images, sounds, or videos and work together to create a new image, sound, or video, which approximates those in the dataset but which is not a direct copy of them,” often play a role in the creation of deepfakes. Deepfakes come in three general categories:

  • Face Swap: Replacing one person’s face with another’s in photo or video content.
  • Lip Sync: Making a person appear to say something they never said in audio or video content.
  • Puppet Technique: Making a person appear to move in ways they never did in video content.

The better the data the GAN has access to, the more believable the deepfake.
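To make the adversarial mechanism concrete, below is a minimal, hypothetical sketch of a GAN training loop in Python using PyTorch. The network sizes, learning rates, and the random stand-in dataset are illustrative assumptions only; they do not reflect Reface or any real deepfake system, which train far larger networks on images, audio, or video.

```python
# Minimal GAN sketch (illustrative only). All shapes, learning rates,
# and the random stand-in "dataset" are hypothetical assumptions;
# real deepfake systems train far larger networks on media data.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # hypothetical noise and sample sizes

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: estimates the probability that a sample is real.
D = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_data = torch.randn(256, DATA_DIM)  # stand-in for a real dataset

for step in range(500):
    real = real_data[torch.randint(0, len(real_data), (32,))]
    fake = G(torch.randn(32, LATENT_DIM))

    # Train the discriminator: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The key design idea is the adversarial feedback loop: the generator improves only because the discriminator keeps learning to tell real from fake, which is also why richer training data yields more convincing output.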

Public figures are one group for whom deepfakes have great potential to be troublesome. Nonconsensual deepfakes that depict celebrities engaged in intimate acts are a particularly worrisome example of what the technology is capable of producing. Deepfakes of celebrities have already been used in advertisements, and it is possible that politicians could one day have to worry about deepfakes impacting their election chances.

Right of Publicity

One salient remedy that may be available to victims of deepfakes who are public figures is to claim that a deepfake violates their right of publicity, which is the “right to control the exploitation of their identity, including protection of their name, likeness and voice.” Since deepfakes can closely replicate a public figure’s voice or appearance, it would seem that an unauthorized deepfake would infringe upon their right of publicity. Furthermore, courts have recognized right of publicity claims for virtual recreations (No Doubt v. Activision Publ’g, Inc., 122 Cal. Rptr. 3d 397, 411-12 (Cal. Ct. App. 2011)) and drawings (Hilton v. Hallmark Cards, 599 F.3d 894, 912-13 (9th Cir. 2010)) of public figures, which suggests that a deepfake need not be a perfect recreation of likeness to be subject to a right of publicity claim. In addition to the individuals themselves, third parties who own a public figure’s publicity rights might be able to bring a right of publicity action. Comedy III Productions, Inc. v. Gary Saderup, Inc., 21 P.3d 797, 811 (Cal. 2001).

While states differ in how they approach the right of publicity, typically a plaintiff must prove that their identity has some recognizable commercial value and that the defendant, without the plaintiff’s permission, used the plaintiff’s identity in a commercial manner. This could be a significant roadblock to successful right of publicity claims when the deepfake in question is used without commercial intent, such as for harassment or reputational harm, which can be just as, if not more, devastating than an obvious commercial use like an advertisement. Furthermore, “deepfake creators often have a First Amendment defense in civil claims against them.” Shannon Reid, The Deepfake Dilemma: Reconciling Privacy and First Amendment Protections, 23 U. Pa. J. Const. L. 209, 211 (2021). Since a deepfake might be deemed a form of “protected speech” (id. at 15), a defendant might argue that a deepfake is sufficiently “transformative,” meaning that the person’s voice or appearance is used for purposes such as “parody, satire or commentary,” and should be protected by the First Amendment.

Other remedies that could protect public figures from deepfakes are tort claims such as intentional infliction of emotional distress, defamation, false light, and harassment. More cases testing these theories in the deepfake context are needed to better assess their utility.

Importantly, a difficulty that might confront a public figure is determining whom to hold liable for a particular deepfake. As “many deepfakes are uploaded anonymously,” an individual’s “only remedy may be against a website owner” if the creator cannot be located. However, Section 230 of the Communications Decency Act generally immunizes websites from claims arising out of material posted on their platforms by their users. A website could be held liable if it was “responsible, in whole or in part, for the creation or development” of the deepfake (47 U.S.C. § 230), but that is unlikely to be true in the vast majority of instances. While Section 230 does not protect websites from intellectual property claims, courts are divided on whether the right of publicity is properly recognized as an intellectual property right.

Trademark

Celebrities may bring a trademark claim for false endorsement if a deepfake could lead viewers to think that an individual endorses a certain product or service. Courts have interpreted Section 43(a)(1)(A) of the Lanham Act to limit the nonconsensual use of one’s “persona” and “voice” that leads consumers to mistakenly believe that an individual supports a certain service or good. For example, actor Woody Allen successfully brought a Lanham Act claim against a defendant who used a look-alike of Allen in an advertisement. Allen v. National Video, Inc., 610 F. Supp. 612 (S.D.N.Y. 1985). Deepfakes, which can go beyond “look-alike” and resemble the real thing, would seem to be captured by the Lanham Act as well.

The Lanham Act’s focus on “likelihood of confusion,” however, may hinder claims against deepfakes “that are harmful to one’s dignity but unlikely to confuse their viewers.” Something as simple as a disclaimer might be enough for a defendant to overcome a Lanham Act claim. Furthermore, much like the right of publicity, the deepfake would need to have some commercial intent or aspect to be captured by the Act. Importantly, “no court has held that Section 230 bars false association claims,” which means that individuals could hold websites and other internet service providers liable for deepfakes that involve false endorsements, even when the creator is anonymous.

Copyright

Since copyright owners “have the exclusive right to produce and reproduce their work in any material form,” deepfakes that use copyrighted material could be subject to copyright infringement claims. Additionally, social media platforms can be required to take down deepfakes that infringe copyright pursuant to the Digital Millennium Copyright Act. However, there are some notable limitations. First, celebrities “are unlikely to be the copyright owners of the images or videos that form the basis of the disputed deepfake,” and only the copyright owner may bring a copyright infringement claim. For example, Vogue successfully removed from a website a deepfake of Kim Kardashian that used material from a Vogue video. Second, deepfake creators may allege “fair use,” a doctrine that protects copying when done in a “transformative” way, such as for “comment, criticism and news reporting.” Nevertheless, the Copyright Act may offer a means of recourse against improper uses of deepfakes.

State Law

A handful of states have passed statutes that attempt to limit the harms of deepfakes, to varying degrees. Hawaii, Virginia, Texas, and Wyoming have criminalized certain obscene deepfakes, while California and Texas also allow for civil actions. Texas and California likewise have laws restricting deepfakes that could impact political campaigns.

Conclusion

Deepfakes have the potential to impact many areas of society. From spoofed cell phone calls to false declarations of war by world leaders, there is a serious cause for concern and a pressing need for action. This is not to say that the technology is without its merits, as it has been used for educational purposes and to reach across language barriers, but whether the good can outweigh the bad remains to be seen. Honigman’s Data, Privacy and Cybersecurity group will continue to monitor the development of deepfakes and the legal issues surrounding them.
