Client Alert

The Collision of AI’s Machine Learning and Manipulation: Deepfake Litigation Risks to Companies from a Product Liability, Privacy, and Cyber Standpoint

10 Feb 2021

AI and machine-learning advances have made it possible to produce fake videos and photos that seem real, commonly known as “deepfakes.” Deepfake content is exploding in popularity.[i] After Star Wars: The Rise of Skywalker used visual effects with historical footage to recreate Carrie Fisher’s likeness on screen, fans generated competing deepfake videos through artificial intelligence models. Using thousands of hours of interviews with Salvador Dalí, the Dalí Museum in Florida created an interactive exhibit featuring the artist.[ii] For Game of Thrones fans miffed over plot holes in the final season, Jon Snow can be seen profusely apologizing in a deepfake video that looks all too real.[iii]

Deepfake technology—how does it work? From a technical perspective, deepfakes (also referred to as synthetic media) are made from artificial intelligence and machine-learning models trained on data sets of real photos or videos. These trained algorithms then produce altered media that looks and sounds just like the real deal. Behind the scenes, generative adversarial networks (GANs) power deepfake creation.[iv] With GANs, two AI algorithms are pitted against one another: one creates the forgery while the other tries to detect it, teaching itself along the way. The more data is fed into GANs, the more believable the deepfake will be. Researchers at academic institutions such as MIT, Carnegie Mellon, and Stanford University, as well as large Fortune 500 corporations, are experimenting with deepfake technology.[v] Yet deepfakes are not solely the province of technical universities or AI product development groups. Anybody with an internet connection can download publicly available deepfake software and crank out content.[vi]
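The adversarial dynamic described above can be illustrated with a minimal numeric sketch. The toy below pits a one-dimensional “generator” (a linear function of random noise) against a logistic “discriminator,” each nudging its parameters against the other until the generated samples drift toward the real data distribution. This is purely illustrative: real deepfake GANs train deep neural networks over images and video, and every distribution, parameter, and learning rate here is invented for the example.

```python
import math
import random

# Toy 1-D GAN sketch (illustrative only, not a real deepfake pipeline):
# - "real data" is drawn from a Gaussian with mean 4
# - generator G(z) = a*z + b tries to produce samples that look real
# - discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake

random.seed(0)
REAL_MEAN, REAL_STD = 4.0, 1.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

def train_gan(steps=3000, batch=64, lr=0.02):
    a, b = 1.0, 0.0   # generator parameters (starts far from real data)
    w, c = 0.0, 0.0   # discriminator parameters (starts with no opinion)
    for _ in range(steps):
        z = [random.gauss(0.0, 1.0) for _ in range(batch)]
        real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
        fake = [a * zi + b for zi in z]
        d_real = [sigmoid(w * x + c) for x in real]
        d_fake = [sigmoid(w * x + c) for x in fake]
        # Discriminator step: improve log D(real) + log(1 - D(fake))
        grad_w = mean([-(1 - dr) * xr + df * xf
                       for dr, xr, df, xf in zip(d_real, real, d_fake, fake)])
        grad_c = mean([-(1 - dr) + df for dr, df in zip(d_real, d_fake)])
        w -= lr * grad_w
        c -= lr * grad_c
        # Generator step: minimize -log D(fake) against the updated critic,
        # i.e. push generated samples toward scores the discriminator calls real
        d_fake2 = [sigmoid(w * x + c) for x in fake]
        dy = [-(1 - df) * w for df in d_fake2]
        a -= lr * mean([g * zi for g, zi in zip(dy, z)])
        b -= lr * mean(dy)
    return a, b, w, c

a, b, w, c = train_gan()
# For standard-normal z, the generated mean is E[a*z + b] = b, so after
# training b should have drifted from 0 toward the real mean of 4.
```

The tug-of-war is the key point: neither network is ever shown “the answer.” The generator improves only because the discriminator keeps improving, which is why feeding more data into a GAN tends to yield more believable output.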

Deepfake risks and abuse. Deepfakes are not always fun and games. Deepfake videos can phish employees to reveal credentials or confidential information, e-commerce platforms may face deepfake circumvention of authentication technologies for purposes of fraud, and intellectual property owners may find their properties featured in videos without authorization. For consumer-facing online platforms, certain actors may attempt to leverage deepfakes to spread misinformation. Another well-documented and unfortunate abuse of deepfake technology is for purposes of revenge pornography.[vii]

In response, online platforms and consumer-facing companies have begun enforcing limitations on the use of deepfake media. Twitter, for example, announced a new policy within the last year to prohibit users from sharing “synthetic or manipulated media that are likely to cause harm.” Per its policy, Twitter reserves the right to apply a label or warning to Tweets containing such media.[viii] Reddit also updated its policies to ban content that “impersonates individuals or entities in a misleading or deceptive manner” (while still permitting satire and parody).[ix] Others have followed. Yet social media and online platforms are not the only industries concerned with deepfakes. Companies across industry sectors, including financial services and healthcare, face growing rates of identity theft and imposter scams involving government services, online shopping, and credit bureaus as deepfake media proliferates.[x]

Deepfake legal claims and litigation risks. We are seeing legal claims and litigation relating to deepfakes across multiple vectors:

1. Claims brought by those who object to their appearance in deepfakes. Victims of deepfake media sometimes pursue tort law claims for false light, invasion of privacy, defamation, and intentional infliction of emotional distress. At a high level, these overlapping tort claims typically require the person harmed by the deepfake to prove that the deepfake creator published something that gives a false or misleading impression of the subject person in a manner that (a) damages the subject’s reputation, (b) would be highly offensive to a reasonable person, or (c) causes mental anguish or suffering. As more companies begin to implement countermeasures, the lack of sufficient safeguards against misleading deepfakes may give rise to a negligence claim. Companies could face negligence claims for failure to detect deepfakes, either alongside the deepfake creator or alone if the creator is unknown or unreachable.

2. Product liability issues related to deepfakes on platforms. Section 230 of the Communications Decency Act shields online companies from claims arising from user content published on the company’s platform or website. The law typically bars defamation and similar tort claims. But e-commerce companies can also use Section 230 to dismiss product liability and breach of warranty claims where the underlying allegations focus on a third-party seller’s representation (such as a product description or express warranty). Businesses sued for product liability or other tort claims should look to assert Section 230 immunity as a defense where the alleged harm stems from a deepfake video posted by a user. Note, however, the immunity may be lost where the host platform performs editorial functions with respect to the published content at issue. As a result, it is important for businesses to implement clear policies addressing harmful deepfake videos that broadly apply to all users and avoid wading into influencing a specific user’s content.

3. Claims from consumers who suffer account compromise due to deepfakes. Multiple claims may arise where cyber criminals leverage deepfakes to compromise consumer credentials for various financial, online service, or other accounts. The California Consumer Privacy Act (CCPA), for instance, provides consumers with a private right of action to bring claims against businesses that violate the “duty to implement and maintain reasonable security procedures and practices.”[xi] Plaintiffs may also bring claims for negligence, invasion of privacy under common law or certain state constitutions, and violations of state unfair competition or false advertising statutes (e.g., California’s Unfair Competition Law and Consumers Legal Remedies Act).

4. Claims available to platforms enforcing Terms of Use prohibitions of certain kinds of deepfakes. Online content platforms may be able to enforce prohibitions on abusive or malicious deepfakes through claims involving breach of contract and potential violations of the Computer Fraud and Abuse Act (CFAA), among others. These claims may turn on nuanced issues around what conduct constitutes exceeding authorized access under the CFAA, or Terms of Use assent and enforceability of particular provisions.

5. Claims related to state statutes limiting deepfakes. As malicious deepfakes proliferate, several states such as California, Texas, and Virginia have enacted statutes prohibiting their use to interfere with elections or criminalizing the distribution of pornographic deepfake revenge videos.[xii] More such statutes are pending.

Practical tips for companies managing deepfake risks. While every company and situation is unique, companies dealing with deepfakes on their platforms, or as a potential threat vector for information security attacks, can consider several practical avenues to manage risks:

  • Terms of Use development: Companies can consider a variety of approaches to incorporating acceptable usage boundaries on their platforms, including the following:
    • Craft Terms of Use with specific guidelines that define the scope of deepfakes, such as prohibiting synthetic or manipulated media that violates any applicable law or could cause harm to an individual.
    • Update Terms of Use to capture deepfakes as a violation and provide enforcement mechanisms against users (e.g., removal procedures and account suspension or bans).
    • Monitor how regulatory bodies and peer companies define and enforce violations to determine what constitutes a harmful deepfake and ensure Terms of Use are consistently applied.
    • Consider whether to rely on Terms of Use to remove only reported violations or whether to implement a policy to proactively monitor and remove violations. Carefully weigh the potential risks for each option; failing to follow through on a policy creates exposure as well.
  • Technical approaches: Using a fight-fire-with-fire approach, companies can leverage AI-powered detection algorithms, such as facial recognition technology (subject to applicable legal requirements), to detect manipulated media. Implementing multifactor authentication and deploying behavior analytics technologies (which may also be AI-driven) can guard against account takeover. Keeping up with technical advancements as well as security alerts from law enforcement and government agencies can reduce exposure to class action litigation or regulatory enforcement associated with deepfakes.
  • Law enforcement cooperation: Companies can serve their users and their own interests through proactive outreach and cooperation with law enforcement on deepfake issues. Law enforcement officials at various agencies have increasingly engaged the private sector to combat malicious threat actors that leverage deepfakes for criminal ends.
  • Civil enforcement: Companies can develop an enforcement program involving a spectrum of actions, including account restrictions, pre-litigation outreach, and appropriate escalation to civil litigation as needed. Such enforcement programs can help reduce and deter online platform abuse.
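To make the interplay between the technical and enforcement tips above concrete, the sketch below shows one way a platform might map automated detection output and user reports onto the graduated enforcement actions a Terms of Use could authorize (allow, label, or remove). Everything here is hypothetical: the per-frame “manipulation probability” scores are assumed to come from some unspecified detection model, and the function names and thresholds are invented for illustration, not drawn from any real platform’s policy.

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # "allow", "label", or "remove"
    reason: str   # audit-trail note supporting consistent enforcement

def moderate(detector_scores, reported_by_user=False,
             label_threshold=0.5, remove_threshold=0.9):
    """Map hypothetical per-frame manipulation-probability scores
    (and any user report) to a graduated enforcement action."""
    if not detector_scores:
        return ModerationDecision("allow", "no detector signal available")
    score = sum(detector_scores) / len(detector_scores)
    if score >= remove_threshold:
        return ModerationDecision(
            "remove", f"mean score {score:.2f} exceeds removal threshold")
    if score >= label_threshold or reported_by_user:
        return ModerationDecision(
            "label", f"mean score {score:.2f}; applying warning label")
    return ModerationDecision(
        "allow", f"mean score {score:.2f} below enforcement thresholds")
```

For example, high-confidence detections would be removed (`moderate([0.95, 0.97])`), borderline or user-reported content would receive a warning label, and everything else would be allowed. Recording the reason alongside each action supports the consistency point above: a platform that documents why each enforcement decision was made is better positioned to show its Terms of Use are applied uniformly across users.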

While the future of deepfakes is uncertain, it is apparent that the underlying AI and machine-learning technology is very real and here to stay—presenting both risks and opportunity for organizations across industries.

About MoFo AI

MoFo provides sophisticated, full-service representation to a wide range of companies that both develop and use cutting-edge, rapidly changing technology in the AI and robotics space. Our clients span the spectrum from startups to Silicon Valley industry leaders to global Fortune 100 power players. They rely on our cross-practice, integrated approach to all aspects of their business operations, whether it’s protecting valuable intellectual property or navigating tricky regulatory hurdles.

Visit our AI webpage to learn more and to read more of our thought leadership in this space.

[xi] Cal. Civ. Code § 1798.150.
