Deepfakes: Is Your Company Helpless?

A competitor posts a damaging deepfake video about your company on Twitter. Coupled with the video is a defamatory statement about your CEO. What, if anything, can you do about it? That’s what this article addresses. Think this is hypothetical? MIT Technology Review recently covered the use of deepfake photos concerning Amazon’s workplace practices.

Section 230 of the Communications Decency Act of 1996 (“Section 230”) has had wide-ranging effects. Before Section 230, online forums and other digital venues could be held liable under the common law for knowingly, or intentionally, allowing third parties to post defamatory material. Section 230 fundamentally changed that landscape. This article provides an overview of Section 230, its scope, potential reforms, and practical considerations for executives at companies negatively affected by such content. After presenting (1) an introductory hypothetical, I explain: (2) deepfakes and fake news; (3) the liability of companies like Twitter after Section 230; and (4) some practical considerations on how to defend your company against deepfakes and defamation posted on social media.

(1) Ripoff Report Hypo

Assume you own a franchise of Acme Coffee Company (“Acme”). Acme is publicly traded. Ms. Jane Austin posts on Ripoff Report about your store: “It’s a fraud! All of its coffee is actually old Substandard Coffee Company decaf. Don’t go there.” Ms. Austin even puts up a video of your employees drinking Substandard decaf and giggling as they use the company’s vacuum-packed moldy beans, which look and taste awful, to make the coffee for your customers. The Ripoff Report post goes viral, and hashtags begin on Twitter: “#boycottacme!” Assume the Ripoff Report post is untrue and that the video is a technologically manufactured fake. Nonetheless, your business suffers. Acme stock goes into a tailspin. Can you get the post removed from social media? Can you sue Ms. Austin for defamation? What about federal criminal securities fraud?

(2) Fake News / Deepfakes

A producer of 60 Minutes defines fake news as “stories that are provably false, have enormous traction [popular appeal], and are consumed by millions of people.” One example: “Planet Mars is headed straight for Earth.” Satire, by contrast, isn’t considered fake news. An MIT Sloan article, Deepfakes, Explained, defines deepfakes as a “specific kind of synthetic media where a person in an image or video is swapped with another person’s likeness.” A deepfake can either swap an existing image or voice or be manufactured anew. Recently, for example, a deepfake video of Mark Zuckerberg appeared on Instagram in which he says “whoever controls the data, controls the future.”

(3) Social Media Liability After Section 230

Two parts of Section 230 are of utmost importance. The first, 47 U.S.C. § 230(c), provides that no provider of an “interactive computer service” shall be “treated as the publisher or speaker of any information provided by another information content provider,” and protects actions taken by such a provider to “restrict access to or availability of material that the provider considers to be obscene . . . or otherwise objectionable, whether or not such material is constitutionally protected[.]” The second, 47 U.S.C. § 230(f)(2), defines “interactive computer service” to include any “information service . . . or access software provider that . . . enables computer access by multiple users to a computer server[.]”

“Interactive computer service” has been interpreted to include not only expected forums like Twitter or Facebook but also online matchmaking services. In Carafano v. Metrosplash.com, Inc., for example, a 2003 decision from a federal appeals court in California, a user posted a fake profile with false content on defendant Matchmaker’s site in order to harass a well-known actress. Matchmaker was nonetheless immune under Section 230 from claims for invasion of privacy, negligence, and defamation. The phrase has also been extended to employer-provided e-mail service. In Delfino v. Agilent Technologies, Inc., a California appeals court held in 2006 that an employer is protected by Section 230 even where an employee uses a work e-mail account to send harassing e-mails.

Once a company is deemed to run such a service, Section 230 has been interpreted to preempt various state causes of action, including defamation, negligence, and contract claims. Companies running an interactive computer service can also find succor under Section 230 from federal claims, including federal civil rights claims. In Noah v. AOL Time Warner, Inc., a 2003 federal district court decision from Virginia, a Muslim plaintiff sued AOL under the federal prohibition against discrimination in places of public accommodation over threatening and blasphemous postings about Muslims on AOL; the claim was barred by Section 230.

That being said, intellectual property claims, such as copyright and trademark infringement, are exempted per Section 230(e)(2). This means that even an interactive computer service can be vicariously liable for infringing copyrights or trademarks posted on sites like Twitter. In that case, the regular analysis under the Copyright Act, including fair use, will apply. If Ms. Austin infringed one of your company’s trademarks in her deepfake video, or otherwise infringed a company copyright by, say, sampling a song licensed to you, then ordinary infringement analysis would apply.

It is of utmost importance, under the statute, that a company like Twitter not cross the line between merely keeping a post up or taking it down and actually modifying the message or posting. In Zeran v. America Online, a 1997 federal appeals court decision, the court held that Section 230 protected AOL from liability for refusing to take down a user’s defamatory posts that included an individual’s private phone number. Likewise, in Batzel v. Smith, a federal appeals court in California held that the moderator of a listserv was protected under § 230 so long as his modifications to a third party’s e-mail retained its “basic form and message.” Thus, if Twitter takes your Tweet and inserts “Apple” instead of “IBM,” that edit could remove the company from the protective umbrella of Section 230 if the Tweet would then become defamatory or otherwise violate federal law.

Thus, Ms. Austin, as the creator of the content, may be liable at common law for her false accusations about Acme; Section 230 does not shield her. And while Twitter might have been liable under the common law for knowingly hosting the post, the company is clearly immune under Section 230.

(4) Practical Considerations

In light of the foregoing, there are various strategies your company can adopt to protect itself from malicious deepfakes or fake news posted by competitors. One is preemptive use of artificial intelligence (“AI”) based software. Some software in the marketplace, like that of Tel Aviv-based Cheq, weighs various signals to determine the authenticity of content, including a site’s reputation and whether the source of the content is a bot. When a red flag appears, this type of software prevents its clients from buying advertisements on the page in question. If the content is more malicious or sinister, the software contacts the publisher or platform. The MIT-IBM Watson AI Lab recently launched a study on how AI-created fake content can itself be countered by AI. Oftentimes, AI-generated fake news uses predictable combinations of words in its syntax. As a result, some software searches for these patterns to ferret out content suspected to be fake, as the sketch below illustrates.
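To make the word-predictability idea concrete, here is a minimal sketch in Python. It is illustrative only: the reference corpus, the bigram scoring, the function names, and the threshold are hypothetical stand-ins, not the proprietary methods of Cheq or the MIT-IBM Watson AI Lab, and a production detector would score text against a large language model rather than raw bigram counts.

    # Toy detector: machine-generated text tends toward highly predictable
    # word sequences, so an unusually high average bigram probability can
    # serve as a red flag. All names and numbers here are hypothetical.
    from collections import Counter

    def bigrams(words):
        return list(zip(words, words[1:]))

    def train(reference_texts):
        # Count unigrams and bigrams in a reference corpus of human-written text.
        unigrams, pairs = Counter(), Counter()
        for text in reference_texts:
            words = text.lower().split()
            unigrams.update(words)
            pairs.update(bigrams(words))
        return unigrams, pairs

    def predictability(text, unigrams, pairs):
        # Average smoothed conditional probability P(next word | current word).
        # Unusually high values suggest formulaic, machine-generated text.
        words = text.lower().split()
        grams = bigrams(words)
        if not grams:
            return 0.0
        vocab = max(len(unigrams), 1)
        total = sum((pairs[(a, b)] + 1) / (unigrams[a] + vocab) for a, b in grams)
        return total / len(grams)

    # Hypothetical usage: score an incoming post against the reference corpus
    # and queue suspiciously predictable text for human review.
    reference = [
        "acme coffee roasts its beans fresh every morning",
        "customers praise the quality and freshness of acme coffee",
    ]
    unigrams, pairs = train(reference)
    post = "acme coffee acme coffee is a fraud acme coffee"
    if predictability(post, unigrams, pairs) > 0.05:  # threshold is illustrative
        print("flag post for human review")

Note that a tool like this can only surface candidates; whether a flagged post is actually a deepfake or defamatory remains a judgment for humans, and ultimately for lawyers.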

The other strategy is corrective. After identifying the content, the next step could be getting it taken down, or at least flagged, by the host. If the host does not qualify as an “interactive computer service,” then it may be liable for the deepfake or defamation just as much as the competitor who posted it. This gives your company leverage in any negotiation. Even if the host is protected by Section 230, as in the case of Twitter, you may still contact the person or entity who posted the content. A request for a retraction on the platform, such as a curing Tweet, could also prove effective.

A version of this article has been published in the PLI Chronicle. See https://plus.pli.edu/Details/Details?fq=id:(321454-ATL1)&referrer=.

Digital counterfeits — sometimes protected.

Fake news. You’ve heard about it. “Deepfakes” are cousins of fake news: modified videos, photos, or recordings made to look original, but which aren’t. Both are counterfeits of the originals.

What if robots or bots programmed with artificial intelligence (“AI”) could eliminate all or most fake news and deepfake content online? Even if AI could do this, federal law may still protect the existence of such counterfeit content on, say, Twitter. Meanwhile, federal law doesn’t protect counterfeit currencies or fashion knockoffs of, say, Ralph Lauren designs.

That’s the subject of Long & Associates principal Ryan E. Long’s talk for the Stanford Center for Legal Informatics (a/k/a “CodeX”), a multidisciplinary lab run by Stanford Law School and the Stanford Computer Science Department.