No music license needed! AI scraped it.

Music licensing. Whether it’s a Willie Nelson sample or any other recording you want to use in your company’s advertisement, permission is generally needed. But does artificial intelligence (“AI”) change this? Some say AI can crawl samples and mash them up, making licensing unnecessary.

That’s until your company gets sued for copyright infringement.

To find out more, you first need an understanding of music permissions. Click here for a short presentation from the Ella Project in New Orleans. As you will hear, music licensing is tricky.

Deepfakes: Is Your Company Helpless?

A competitor posts a damaging deepfake video about your company on Twitter. Coupled with the video is a defamatory statement about your CEO. What, if anything, can you do about it? That is what this article will address. Think this is merely hypothetical? MIT Technology Review recently covered the use of deepfake photos concerning Amazon’s workplace practices.

Section 230 of the Communications Decency Act of 1996 (“Section 230”) is a wide-ranging piece of legislation. Before Section 230, online forums and other digital venues could be held liable under the common law for knowingly, or intentionally, allowing third parties to post defamatory material. Section 230 fundamentally changed that landscape. This article provides an overview of Section 230, its scope, potential reforms, and practical considerations for executives at companies that are negatively affected by such content. After presenting (1) an introductory hypothetical, I go on to explain: (2) deepfakes and fake news; (3) the liability of companies like Twitter after Section 230; and (4) some practical considerations on how to defend your company against deepfakes and defamation posted on social media.

(1) Ripoff Report Hypo

Assume you own a franchise of Acme Coffee Company (“Acme”). Acme is publicly traded. Ms. Jane Austin posts on Ripoff Report about your store: “It’s a fraud! All of its coffee is actually old Substandard Coffee Company decaf. Don’t go there.” Ms. Austin even puts up a video of your employees drinking Substandard decaf and giggling as they use the company’s vacuum-packed moldy beans, which look and taste awful, to make the coffee for your customers. The Ripoff Report post goes viral, and hashtags begin on Twitter: “#boycottacme!” Assume the Ripoff Report post is untrue and that the video is a technologically manufactured fake. Nonetheless, your business suffers, and Acme stock takes a tailspin. Can you get the post removed from social media? Can you sue Ms. Austin for defamation? What about federal criminal securities fraud?

(2) Fake News / Deepfakes

A producer of 60 Minutes defines fake news as “stories that are provably false, have enormous traction [popular appeal], and are consumed by millions of people.” Satire, such as “Planet Mars is headed straight for Earth,” isn’t considered fake news. An MIT Sloan article, Deepfakes, Explained, defines deepfakes as a “specific kind of synthetic media where a person in an image or video is swapped with another person’s likeness.” A deepfake can either swap an existing image or voice or be manufactured anew. Recently, for example, a deepfake video on Instagram showed Mark Zuckerberg saying “whoever controls the data, controls the future.”

(3) Social Media Liability After Section 230

Two parts of Section 230 are of utmost importance. The first, 47 U.S.C. § 230(c)(2), protects actions by an “interactive computer service” to “restrict access to or availability of material that the provider considers to be obscene . . . or otherwise objectionable, whether or not such material is constitutionally protected[.]” The second, 47 U.S.C. § 230(f)(2), defines “interactive computer service” to include any “information service . . . or access software provider that . . . enables computer access by multiple users to a computer server[.]”

“Interactive computer service” has been interpreted to include not only expected forums like Twitter or Facebook but also online matchmaking. In Carafano v. Metrosplash.com, Inc., a 2003 decision from a federal appeals court in California, a user of defendant Matchmaker’s site posted a fake profile with false content in order to harass a well-known actress. Matchmaker was nonetheless immune under Section 230 from claims of invasion of privacy, negligence, and defamation. The phrase has also been extended to employer-provided e-mail services. In Delfino v. Agilent Technologies, Inc., a California appeals court held in 2006 that an employer is protected by Section 230 even where an employee uses work e-mail to send harassing messages.

Once a company is deemed to run such a service, Section 230 has been interpreted to preempt various state causes of action, including defamation, negligence, and contract. Companies running an interactive computer service can also find succor under Section 230 from federal claims, including federal civil rights claims. In Noah v. AOL Time Warner, Inc., a 2003 federal district court decision from Virginia, Section 230 barred a Muslim plaintiff’s claim that AOL violated the federal prohibition against discrimination in places of public accommodation by tolerating threatening and blasphemous postings about Muslims.

That being said, intellectual property claims, including copyright and trademark infringement, are exempted per Section 230(e)(2). This means that even an interactive computer service can be vicariously liable for infringing copyrights or trademarks posted on sites like Twitter. In that case, the regular analysis under the Copyright Act, including fair use, will apply. If Ms. Austin infringed one of your company’s trademarks in her deepfake video, or otherwise infringed a company copyright by, say, sampling a song licensed to you, then copyright infringement analysis would apply.

It is of utmost importance, under the statute, that a company like Twitter not cross the line between merely keeping a post up or taking it down, on the one hand, and actually modifying the message or posting, on the other. In Zeran v. America Online, a 1997 federal appeals court decision, the court held that Section 230 protected AOL from liability for refusing to take down a user’s defamatory posts, which included an individual’s private phone number. Likewise, in Batzel v. Smith, a federal appeals court in California held that the moderator of a listserv was protected under § 230 so long as his modifications to a third party’s e-mail retained its “basic form and message.” Thus, if Twitter takes your Tweet and inserts “Apple” instead of “IBM,” this could remove the company from the protective umbrella of Section 230 if the Tweet would then become defamatory or otherwise violate federal law.

Thus, Ms. Austin may be liable at common law for her false accusations about Acme; as the creator of the content, she receives no protection from Section 230. And while Twitter could have been liable at common law, the company would clearly not be liable under Section 230.

(4) Practical Considerations

In light of the foregoing, there are various strategies your company can take to protect itself from malicious deepfakes or fake news posted by competitors. One is the preemptive use of artificial intelligence (“AI”) based software. Some software in the marketplace, such as that of Tel Aviv-based Cheq, uses various signals to determine the authenticity of content, including a site’s reputation and whether the source of the content is a bot. After a red flag shows, this type of software prevents its clients from buying advertisements on the page in question. If content is more malicious or sinister, the vendor contacts the publisher or platform. The MIT-IBM Watson AI Lab recently launched a study on how AI-created fake content can be countered by AI. Oftentimes, AI-generated fake news uses predictable combinations of words in its syntax. As a result, some software uses exactly this kind of search to ferret out content suspected to be fake.
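To make the word-predictability idea concrete, here is a minimal sketch of how such a detector might score a passage. It is illustrative only, not Cheq’s product or the MIT-IBM Watson AI Lab’s actual method; it assumes the open-source transformers library and the GPT-2 model, and the top-k cutoff is an arbitrary placeholder.

```python
# Minimal illustrative sketch of a "predictable wording" detector.
# Assumptions (not from the article): Hugging Face transformers, GPT-2,
# and an arbitrary top-k cutoff chosen purely for demonstration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_hit_rate(text: str, k: int = 10) -> float:
    """Fraction of tokens that fall within the model's k most likely
    next-token predictions. Machine-generated text tends to score
    higher because it favors statistically predictable word choices."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    hits = 0
    for pos in range(ids.size(1) - 1):
        # The model's top-k guesses for the next token, given the text so far.
        top = torch.topk(logits[0, pos], k).indices.tolist()
        if ids[0, pos + 1].item() in top:
            hits += 1
    return hits / max(ids.size(1) - 1, 1)

if __name__ == "__main__":
    sample = "The stock market reacted to the news with a sharp decline."
    rate = top_k_hit_rate(sample)
    # A high hit rate is a red flag for human review, not proof of fakery.
    print(f"top-10 hit rate: {rate:.2f}")
```

In practice, a score like this would be only one signal among many; a vendor would combine it with reputation and bot-detection data before flagging a page.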

The other strategy is corrective. After identifying the content, the next step could be getting it taken down, or at least flagged, by the host. If the host does not qualify as an “interactive computer service,” then it may be liable for the deepfake or defamation just as much as the competitor who posted it. This gives your company leverage in the negotiation. Even if the host is protected by Section 230, as Twitter is, you may still contact the person or entity who posted the content. A request for a retraction on the platform, such as a curing Tweet, could also prove effective.

A version of this article has been published in the PLI Chronicle. See https://plus.pli.edu/Details/Details?fq=id:(321454-ATL1)&referrer=.

Artificial intelligence Art: Who Owns It?

If your pet dog Hans takes a selfie, does he own the copyright? A recent decision by the U.S. Court of Appeals for the Ninth Circuit (“Ninth Circuit”) is instructive. It says that a monkey can’t own the copyright to his selfie. The reason? Only humans can own a copyright under U.S. law. But who owns artificial intelligence (“AI”) created artwork? This entry addresses that issue.

The Ninth Circuit Decision

The Indonesian monkey at the heart of the dispute is named “Naruto.” He is actually quite handsome, as you can see if you look up his profile shot, though not on LinkedIn, of course. The story began on the island of Sulawesi; not Fantasy Island, but close. David Slater, a British wildlife photographer, left his camera unattended. Naruto then picked up the camera and, harnessing his training at the British Museum School of Art and Design, began taking stunning photos of himself.

While Gentlemen’s Quarterly and other magazines sought to feature him in their publications, Naruto couldn’t be bothered. His images, posted by Mr. Slater, had already gone viral. Naruto retained the services of People for the Ethical Treatment of Animals (“PETA”) to sue Mr. Slater and his publishers for copyright infringement. The Ninth Circuit dismissed the suit because Naruto can’t own the copyright to the photos.

Unfortunately, Naruto couldn’t be reached for comment.

Part of the court’s reasoning was simple. The U.S. Copyright Office “will refuse to register a claim if it determines that a human being did not create the work.” The Office further states that it will exclude works “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” The question raised by the decision is whether computer-generated art is copyrightable and, if so, whether the AI, or its programmer, would be the owner.

AI Art & Blurred Lines

The issue of AI-created artwork isn’t academic. According to a recent article in Artnet News, the Paris-based collector Nicolas Laugero-Lasserre acquired Le Comte de Belamy, a work created by artificial intelligence. Mr. Laugero-Lasserre bought the work directly from Obvious, the collective that created the AI behind Le Comte de Belamy. Instead of a signature, the artwork is signed by the AI with an equation. Naruto is jealous.

As AI gets smarter and more evolved, it will be capable of far more than creating art. Think of AI like that found in WarGames (1983), which can create systems of engagement resembling warfare. Then extrapolate such a system to business. In such a case, a company like Obvious could create AI that spawns not only art but other companies, chock full of their own versions of Siri. This AI-dominated world is laid out in movies like Her (2013), in which the main character, played by Joaquin Phoenix, forms an intimate relationship with an AI assistant voiced by Scarlett Johansson. With the proliferation of synthetic body parts, imagining a fully functioning AI cyborg that resembles a human isn’t as far-fetched as it may have sounded in the 1950s. The lines between fair use and copyright infringement have already been blurred by mash-ups that modify music samples until their identities become unrecognizable. Similarly, the lines between human-created and AI-created art will blur as the years progress. The law needs to be ready to address these issues.

But, as the character Willie Stark explained in Robert Penn Warren’s All the King’s Men, “(the law) is like a single-bed blanket on a double bed and three folks in the bed and a cold night . . . There [is not] ever enough blanket to cover the case, no matter how much pulling and hauling, and somebody is always going to catch pneumonia.” Maybe the shortcomings of the law in dealing with AI issues will always be with us. But those shortcomings can be mitigated by policy makers who have foresight today as to where technology is heading tomorrow.

Public Domain Versus Work-For-Hire

If Naruto doesn’t own the copyright to the photo, then it is likely in the public domain. However, an argument could be made that art created by animals who reside on government-owned reserves or on private property should be owned by the reserve or property owner. This is how a work-for-hire operates in the U.S.: while the author is normally the copyright owner, a work-for-hire arrangement vests ownership in the author’s employer. A similar approach could be taken for those who provide room and board to the likes of Naruto the handsome.

The question remains whether AI-created art is likewise unprotected by copyright because it was “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Under the Ninth Circuit’s reasoning, the answer would be that all such works are in the public domain. But then the question becomes whether one could make copies of Le Comte de Belamy in the U.S. without worrying about a copyright infringement lawsuit. While several nations, such as the U.K., grant copyright to the person who arranges for the creation of computer-generated works, the U.S. does not.

Either the U.S. follows the U.K.’s lead, or these works will end up in the public domain. The latter, overly rigid approach to what constitutes “intervention from a human author” would produce counterintuitive outcomes for companies like Obvious. By allowing the owners of AI to own the creative works spawned by their systems, U.S. law could also conceivably extend copyright to those who own the property on which the likes of Naruto the handsome reside.