Deepfakes: Is Your Company Helpless?

A competitor posts a damaging deepfake video about your company on Twitter. Coupled with the video is a defamatory statement about your CEO. What, if anything, can you do about it? That’s what this article will address. Think this is hypothetical? Recently, MIT Technology Review covered the use of deepfake photos concerning Amazon’s workplace practices.

Section 230 of the Communications Decency Act of 1996 (“Section 230”) has had wide-ranging effects. Before Section 230, online forums and other digital venues could be held liable under the common law for knowingly or intentionally allowing third parties to post defamatory material. Section 230 fundamentally changed that landscape. This article provides an overview of Section 230, its scope, potential reforms, and practical considerations for executives at companies negatively affected by such content. After presenting (1) an introductory hypothetical, I go on to explain: (2) deepfakes and fake news; (3) the liability of companies like Twitter after Section 230; and (4) some practical considerations on how to defend your company against deepfakes and defamation posted on social media.

(1) Rip-Off Report Hypo

Assume you own a franchise of Acme Coffee Company (“Acme”). Acme is publicly traded. Ms. Jane Austin posts on Rip-Off Report about your store: “It’s a fraud! All of its coffee is actually old Substandard Coffee Company decaf. Don’t go there.” Ms. Austin even puts up a video of your employees drinking Substandard decaf and giggling as they use the company’s vacuum-packed moldy beans (which look and taste awful!) to make the coffee for your customers. The Rip-Off Report post goes viral, and hashtags begin on Twitter: “#boycottacme!” Assume the Rip-Off Report post is untrue and that the video is a technologically manufactured fake. Nonetheless, your business suffers. Acme stock goes into a tailspin. Can you get the post removed from social media? Can you sue Ms. Austin for defamation? What about federal criminal securities fraud?

(2) Fake News / Deepfakes

A producer of 60 Minutes defines fake news as “stories that are provably false, have enormous traction [popular appeal], and are consumed by millions of people.” Satire isn’t considered fake news. One example: “Planet Mars is headed straight for Earth.” An MIT Sloan article, Deepfakes, Explained, defines deepfakes as a “specific kind of synthetic media where a person in an image or video is swapped with another person’s likeness.” A deepfake can either swap an image or voice or be manufactured anew. Recently, for example, a deepfake video of Mark Zuckerberg appeared on Instagram saying “whoever controls the data, controls the future.”

(3) Social Media Liability After Section 230

Two parts of Section 230 are of utmost importance. The first, 47 U.S.C. § 230(c), provides that no provider of an “interactive computer service” shall be treated as the publisher or speaker of information provided by another, and protects good-faith actions to “restrict access to or availability of material that the provider considers to be obscene . . . or otherwise objectionable, whether or not such material is constitutionally protected[.]” The second, 47 U.S.C. § 230(f)(2), defines “interactive computer service” to include any “information service . . . or access software provider that . . . enables computer access by multiple users to a computer server[.]”

“Interactive computer service” has been interpreted to include not only expected forums like Twitter or Facebook but also online matchmaking services. In Carafano v. Metrosplash.com, Inc., for example, a 2003 decision from a federal appeals court in California, a user posted a fake profile with false content on defendant Matchmaker’s site in order to harass a well-known actress. Matchmaker was nonetheless immune under Section 230 from claims of invasion of privacy, negligence, and defamation. The phrase has also been extended to employer-provided e-mail services. In Delfino v. Agilent Technologies, Inc., a California appeals court held in 2006 that an employer is protected by Section 230 even when an employee uses work e-mail to send harassing messages.

Once a company is deemed to run such a service, Section 230 has been interpreted to preempt various state causes of action, including defamation, negligence, and contract claims. Companies running an interactive computer service can also find shelter under Section 230 from federal claims, including federal civil rights claims. In Noah v. AOL Time Warner, Inc., a 2003 federal district court decision from Virginia, a Muslim plaintiff sued AOL under the federal prohibition against discrimination in places of public accommodation over threatening and blasphemous postings about Muslims on AOL; Section 230 barred the claim.

That being said, copyright and related claims concerning trademark infringement are exempted per Section 230(e)(2). This means that even an interactive computer service can be vicariously liable for infringing copyrights or trademarks posted on sites like Twitter. In that case, regular analysis under the Copyright Act, including fair use, will apply. If Ms. Austin infringed one of your company’s trademarks in her deepfake video, or otherwise infringed a company copyright by, say, sampling a song licensed to you, then infringement analysis would apply.

It is of utmost importance, under the statute, that a company like Twitter not cross the line between merely keeping a post up or taking it down, on the one hand, and actually modifying the message, on the other. In Zeran v. America Online, a 1997 federal appeals court decision, the court held that Section 230 protected AOL from liability for refusing to take down a user’s defamatory posts that included an individual’s private phone number. Likewise, in Batzel v. Smith, a federal appeals court in California held that the moderator of a listserv was protected under § 230 so long as his modifications to a third party’s e-mail retained its “basic form and message.” Thus, if Twitter takes your Tweet and inserts “Apple” instead of “IBM,” this could remove the company from the protective umbrella of Section 230 if the Tweet thereby becomes defamatory or otherwise violates federal law.

Thus, Ms. Austin may be liable at common law for her false accusations about Acme; Section 230 does not shield the person who actually creates the content. And while Twitter might have faced liability at common law, the company is clearly immune under Section 230.

(4) Practical Considerations

In light of the foregoing, there are various strategies your company can use to protect itself from malicious deepfakes or fake news posted by competitors. One is preemptive use of artificial intelligence (“AI”) based software. Some software in the marketplace, like Tel Aviv-based Cheq, uses various signals to determine the authenticity of content, including a site’s reputation and whether the source of the content is a bot. When a red flag appears, this type of software prevents its clients from buying advertisements on the page in question. If content is more malicious or sinister, the software contacts the publisher or platform. The MIT-IBM Watson AI Lab recently launched a study on how AI-created fake content can be countered by AI. Oftentimes, AI-generated fake news uses predictable combinations of words in its syntax. As a result, some software searches for these telltale patterns to ferret out content suspected to be fake, as the sketch below illustrates.

Another strategy is corrective. After identifying the content, the next step could be getting it taken down, or at least flagged, by the host. If the host does not qualify as an “interactive computer service,” then it may be liable for the deepfake or defamation just as much as the competitor who posted it. This gives your company leverage in the negotiation. Even if the host is protected by Section 230, as Twitter is, you may still contact the person or entity who posted the content. A request for a retraction on the platform, such as a corrective Tweet, could also prove effective.

A version of this article has been published in the PLI Chronicle. See https://plus.pli.edu/Details/Details?fq=id:(321454-ATL1)&referrer=. Image credit: Harald Krichel, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/legalcode).

Digital counterfeits — sometimes protected.

Fake news. You’ve heard about it. “Deep fakes” are cousins of fake news: modified videos, photos, or recordings that are made to look original but aren’t. Both are counterfeits of the originals.

What if robots or bots programmed with artificial intelligence (“AI”) could eliminate all or most fake news and deepfake content online? Even if AI could do this, federal law may still protect the existence of such counterfeit content on, say, Twitter. Meanwhile, federal law doesn’t protect counterfeit currency or knockoffs of, say, Ralph Lauren fashion.

That’s the subject of Long & Associates principal Ryan E. Long’s talk for the Stanford Center for Legal Informatics (a/k/a “CodeX”), a multidisciplinary lab run by Stanford Law School and the Stanford Computer Science Department. To view the talk, please click HERE.

Can AI fight fake news?

Picture this: Tomorrow morning you get an audio message on your cell phone from The International New York Times: “Jerusalem: 1,400,000 New Coronavirus Cases!” Within minutes, there is a city-wide panic. You then scratch your head. “Wait a minute, the last census showed there are only about 931,756 people living in Jerusalem.” Luckily, you find out later in the day that the newspaper was hacked and that the story was created by malicious artificial intelligence (AI). Think this is imaginary? Not quite. Read more in this article, entitled Robot v. Robot: Can AI Fight Fake News?, that Long & Associates attorney Ryan E. Long wrote for an Israeli innovation publication.

Facial recognition technology: good, bad, or ugly?

It’s 5:30 a.m. and you hear a loud knock on the door. It’s the FBI. They suspect that you robbed a bank in Austin, Texas on December 31, 2019. How did they know? Facial recognition technology. The thing is, you were in London, England, on the 31st and have an airplane ticket to prove it. Think this is an unrealistic hypo? Think again. Learn more in the article below, titled The Match Game, that Long & Associates attorney Ryan E. Long wrote for Cognitive Times.

Twitter has posted a “deep fake” video of you shoplifting.

What can you do about it?

An embarrassing video of you shoplifting fancy dog food from Pet Food Express surfaces on Twitter. The video is a “deep fake”: either a manipulated version of a real video or made up altogether. You don’t even own a dog and have never been to Pet Food Express! What can you do about it? Find out more in this Digital Trends article, in which I suggest that Twitter adopt a takedown process for such videos. Please click HERE.

How to beat a copycat.

If you run a billion (or even million) dollar brand, does it make sense to spend a few thousand dollars to protect your mark from copycats? Or if you are a songwriter, does it make sense to spend thirty-five dollars to register a copyright in your song? The answer is yes. But the recent “SUPREME” trademark drama shows why this answer isn’t so obvious to all.

“SUPREME” is the trademark of a streetwear brand that has been operating in New York City since at least 1994. The company that originated “SUPREME,” however, never registered its trademark. This was a fatal mistake. As reported in The Wall Street Journal, Mr. Michele di Pierro started a rogue version of the “SUPREME” brand in Europe in or about 2015. Since then, Mr. di Pierro has filed dozens of trademark registrations for “SUPREME” in dozens of countries.

While the New York company has priority rights to the “SUPREME” mark in the geographic region of its use, which would certainly include New York City, the first to file often has priority in every region in which the trademark isn’t being used. Had the senior “SUPREME” user registered its trademark in the United States and then extended its filings to other regions, it would have avoided this distasteful and costly dispute with copycat junior user Mr. di Pierro. The same lesson applies to copyright. Say Bob Dylan writes a song, say All Along the Watchtower, but doesn’t register the copyright, and someone else files a registration first. That copycat is then the presumptive owner of the song, not Mr. Dylan, who would need to move to cancel the registration.

The lesson is that a simple registration today can save you millions of dollars in legal fees tomorrow fighting a copycat.

You’ve been sued in the E.U. for using a “short extract”!

You are CEO of Google. When you wake up tomorrow morning, your general counsel calls you: “We’ve been sued in the E.U. for copyright infringement! The claim: our search results for Le Parisien and dozens of other newspapers used more than one word, or more than a ‘short extract,’ of their articles.” Your response: “Is this April Fools’ Day?”

Under the new E.U. copyright law, tech companies will likely find it easier to get caught in a legal labyrinth.

Fortunately or unfortunately, the answer is “no.” The E.U. is in the midst of passing copyright legislation that would limit or even terminate the safe harbors currently in place for platforms like Facebook on which infringing material can be posted. The other part of the legislation makes it an infringement for Google to display search results from news publishers like Le Parisien that include more than one word, or more than a “very short extract,” of a news article.

In the U.S., the Digital Millennium Copyright Act provides a safe-harbor takedown procedure for companies like Facebook when infringing materials are uploaded by users. Fair use generally allows search engines like Google to produce search results that use more than one word, or a “very short extract,” of a news article when such use serves as a complementary, and not a substitute, market for the article.

Content creators have arguably lost a great deal in revenues. Tons of infringing material is posted daily on sites like YouTube. Policing such material, especially when it can scale so quickly, is expensive and time-consuming, particularly for smaller content creators. On top of this, there is currently no bright-line rule in the U.S. as to when a particular use is fair or infringing. As a result, rather than going to Le Parisien’s website, many people merely read an excerpt and stop there. The new E.U. law seeks to address both issues: it reduces or even eliminates the safe harbor for sites like YouTube, and it draws a bright line between fair and infringing use.

In so doing, the law’s intent is to shift bargaining power back to content creators. Whether such laws eliminate free riding or unnecessarily hinder the free flow of information remains to be seen. One thing is certain, however: this isn’t April Fools’ Day for companies like Google doing business in the E.U. Regulatory and licensing costs of doing business there will, in all likelihood, increase.

ICO — Unidentified Object or Security?

An Initial Coin Offering (“ICO”). Is it an unidentified (not flying) object or an offer of securities, like any other stock or interest you’d buy in a company?

Whether you are buying into or issuing an ICO of cryptocurrency, the question isn’t an academic one. Until recently, whether an ICO was considered an offer of “securities” within the meaning of federal law was relatively undecided. As a result, whether you were safe in relying on various exemptions, such as private placements of securities under Regulation D, was uncertain. That appears to have changed.

Recently, the federal government indicted defendant Maksim Zaslavskiy for securities fraud and conspiracy to commit securities fraud. Mr. Zaslavskiy argued, in support of dismissing the government’s indictment, that the digital currencies involved weren’t “securities” akin to stocks or bonds. However, the United States District Court for the Eastern District of New York found that a reasonable jury could find that his digital currencies satisfied the test for a “security” under applicable case law.

Whether this decision withstands scrutiny on appeal, if it is appealed, remains to be seen.

That being said, if you are operating in the cryptocurrency world, the best way to deal with an ICO is to comply with applicable federal (and state) exemptions to the otherwise cumbersome securities registration requirements.

In so doing, you can buy into or carry out an ICO without looking over your shoulder.

Google Privacy. Oxymoron?

If you use Google, have you ever read its privacy policy? If you haven’t, please keep reading. The policy delineates the circumstances under which Google can review your information, including your e-mails, and disclose it to third parties. While you may not think the policy will ever affect you, your e-mails and other messages on Google could be disclosed by the company under certain circumstances. What are they?

Generally, Google will not disclose your information to third parties without your consent and, in the case of sensitive personal information, without your “explicit consent.” The company’s privacy policy makes clear, however, that it may disclose your information to “affiliates and other trusted businesses or persons,” which is a pretty broad category. Google also makes another exception to “[m]eet any applicable law, regulation, legal process, or enforceable governmental request.” These categories could include, for example, a subpoena sent to Google from a plaintiff in a copyright infringement lawsuit or a court order.

Even when you receive a notice from Google that a subpoena seeks the disclosure of your information or identity, you still have choices. The same is true of a court order. You can move to “quash” the subpoena, arguing that it is overly broad or seeks information irrelevant to the underlying lawsuit. In the case of a court order, it can be stayed pending an appeal. Needless to say, if you don’t care about your information being disclosed, you can do nothing. But if the subpoena seeks to unmask you so as to name you in a copyright infringement lawsuit, or otherwise, doing nothing may not be wise. A fair use or other defense may conclusively establish that the lawsuit is a sham.

Rather than using Google — or even social media outlets like Facebook — with your eyes closed, it is probably better to know what you are getting yourself into. Otherwise, you may be unpleasantly surprised one day when you find out what you thought was private isn’t.

FBI v. Apple – Round 2

How important is your iPhone privacy? Does it outweigh law enforcement’s interest in obtaining evidence of child pornography production from your iPhone? According to a recent New York Times article, Apple decided to plug a privacy hole in the iPhone through which law enforcement could crawl. This plug was in response to the FBI’s previous end run around iPhone software. You can read more about FBI v. Apple, Round 1, here.

As the Times article makes clear, Indiana law enforcement officials used a $15,000 device from Grayshift to unlock 96 iPhones in 2018, each time with a warrant. In Round 1, the magistrate judge essentially ordered Apple to create a back door through the iPhone’s encryption for the FBI to use. That overreach doesn’t exist in Round 2: it doesn’t appear the Indiana warrants required Apple to create a back door. As with real property, law enforcement has a right to forcibly enter your property once it has a warrant.

But Apple’s plug likely makes such devices obsolete. In so doing, Apple has made it harder for law enforcement to access your iPhone even when there is a warrant. Some district attorneys have argued, as the article points out, that Apple is “blatantly protecting criminal activity.” This view of Apple is black and white: either Apple allows easy third-party access and is good, or it allows none and is bad.

That framing ignores a middle road. When law enforcement obtains a warrant to search your bitcoin stored in a Swiss bunker, it will not be able to access it without help. The bunker is not linked to the internet, and while the FBI could physically access it if necessary, the data on the blockchain is meaningless on its own: all identities are protected by cryptographic hashes and signatures. One solution would be for the owner of the bunker, your bitcoin landlord, to obtain the information needed about your account and submit it to the judge for private (“in camera”) review. This solution was proposed in Round 1.

By giving evidence from an iPhone to the judiciary in this fashion, Apple could proudly assist law enforcement’s prosecution of child pornography, among other things. At the same time, Apple could keep plugging the privacy holes that would otherwise cost it revenue.