Contracts Still Matter? Read This.

If you hire a freelancer in New York City, do your contractual terms matter under the Freelance Isn’t Free Act (“FIFA”)? Yesterday, a New York County court held that they do. In so doing, the court dismissed a FIFA-based complaint filed against client Precision Initiative Tech. Corp., an Austin-based tech placement agency.

The decision is one of the few to date interpreting FIFA. It can be read HERE. The Court found that a forum-selection clause in the parties’ contract, which required any lawsuit to be filed in Massachusetts, barred the suit from proceeding in New York.

Even if you don’t hire freelancers in New York City, the decision may still be relevant to you. Advocates in other locales, including the United Kingdom, are lobbying for legislation similar to FIFA. So don’t be surprised if FIFA-like legislation comes to your city in the not-too-distant future.

In the meantime, if you or a colleague has a breach-of-contract litigation or licensing issue, please contact me. My office always tries to find novel solutions to even the trickiest litigation and licensing issues.

AI Creations: Who Owns Them?

Imagine you just purchased a painting from Sotheby’s called Portrait of Edmond Belamy (“Portrait”) for $432,500. Portrait was AI-generated. Your neighbor Jim takes a photo of the painting as you are bringing it inside. Jim then puts Portrait on t-shirts for sale online.

What, if anything, can you do about it? What about the software company that owns the AI? Does it matter whether you live in the US or the EU?

The issue is not hypothetical. The volume of AI-created paintings, software, and other inventions has grown immensely. While copyright and patent law can protect human-made paintings and software, respectively, AI-generated creations are not protectable under either regime in the US. In the EU, the answer is largely the same, as we shall see below.

Can AI be an “author” or “inventor”?

A. The European Union

On 28 January 2020, the European Patent Office rejected two patent filings on the grounds that “an inventor designated in the application has to be a human being, and not a machine.” In both applications, “a machine called ‘DABUS,’ which is described as a ‘type of connectionist artificial intelligence,’ is named as the inventor.” As we shall see, decisions regarding copyright and patent ownership in the US follow a similar rationale.

Thereafter, the European Parliament passed a number of resolutions concerning AI throughout 2020. A report “on intellectual property rights for the development of artificial intelligence technologies,” dated 10 October 2020, is most relevant for the purposes of this article. It recommends that, in apportioning intellectual property rights, “the degree of human intervention” and the “autonomy of AI” should be taken into account.

The report goes on to note “the difference between AI-assisted human creations and AI-generated creations, with the latter creating new challenges for IPR [intellectual property rights] protection, such as questions of ownership, inventorship and appropriate remuneration.” The report recommends that “works autonomously produced by artificial agents and robots might not be eligible for copyright protection, in order to preserve the principle of originality, which is linked to a natural person.” As such, “ownership rights, if any, should be assigned to natural or legal persons that created the work lawfully.”

On 20 October 2020, the European Parliament adopted the recommendations and refined them via a Resolution. For example, while AI and related technologies based on “computational models and algorithms” and regarded as “mathematical methods” are not patentable, such models and computer programs may be protected “when they are used as part of an AI system that contributes to producing a further technical effect.” (Emphasis added.)

The Resolution goes on to clarify that “where AI is used only as a tool to assist an author in the process of creation, the current IP framework remains applicable.” (Emphasis added.) In all cases, only a natural person can be named as the author of a copyrighted work or the inventor on a patent in the EU.

That being said, the 20 October Resolution does not state whether the author needs to delineate in the application which parts of the creation were AI-made and which were created by the author. This mirrors how copyrights for compositions by multiple authors are filed in the US, denoting which of the authors wrote particular lyrics or musical notes. A similar approach could be adopted in a subsequent Resolution.

B. The United States

Copyright case law has also indicated that AI cannot be an “author” under the Copyright Act. In Naruto v. Slater, the Ninth Circuit Court of Appeals held that an Indonesian monkey named “Naruto” couldn’t own the copyright to his “Monkey Selfies.” The reason: the U.S. Copyright Office “will refuse to register a claim if it determines that a human being did not create the work.” (Emphasis added.) The Office further states that it will exclude works “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” (Emphasis added.) Consequently, AI-created products are likely not subject to copyright registration.

Similarly, the USPTO, in an April 27, 2020 decision, ruled that AI cannot be listed as the “inventor” in a patent application. The application was filed by the Artificial Inventor Project (“AIP”), a team of international patent attorneys whose mission is to explore AI patentability. AIP filed a patent application on July 29, 2019, for “Devices and Methods for Attracting Enhanced Attention” (the “DABUS application”). According to the application, this “creativity machine” is “programmed as a series of neural networks that have been trained with general information in the field of endeavor to independently create the invention.” The inventor on the substitute application was listed as “DABUS (the invention was autonomously generated by artificial intelligence).”

Under relevant federal patent law, an “inventor” is defined as “the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.” However, the USPTO’s decision denying the DABUS application pointed out that federal law consistently refers to inventors as natural persons. One section provides that “[w]hoever invents or discovers any new and useful process . . . may obtain a patent therefor.” According to the USPTO, “‘[w]hoever’ suggests a natural person.” Other provisions of federal patent law “refer[] to individuals and use[] pronouns specific to natural persons — ‘himself’ and ‘herself’ — when referring to the ‘individual’ who believes himself or herself to be the original inventor or an original joint inventor of a claimed invention on the application.” The USPTO’s finding is consistent with Federal Circuit case law holding that an “inventor” must be a natural person.

As in the E.U., a creation may still be copyrightable or patentable in the U.S. if it was made with the assistance of AI. The question of how much AI involvement renders an otherwise human-made creation a product of AI has yet to be addressed.

How to protect AI creations?

If AI creations are not, for the time being, protectable under either copyright or patent law, how can one protect them? Contractual provisions in licensing agreements are one option. Even if licensed technology is neither copyrightable nor patentable, contract law can provide a gap filler between contracting parties. Another option is federal or state trade secret law. Neither approach, however, precludes reverse engineering once the product is released into the market.

In light of these open questions, the World Intellectual Property Organization (WIPO) held a conference in late 2020 to address ownership of AI-created works. One question in WIPO’s Revised Issues Paper: “[i]f a human inventor is required to be named, should AI-generated inventions fall within the public domain or should the law give indications of the way in which the human inventor should be determined?” Likewise, the U.S. Copyright Office held a conference in February 2020 titled “Copyright in the Age of Artificial Intelligence,” and, later in the year, the USPTO published a report, “Public Views on Artificial Intelligence and Intellectual Property Policy.” The USPTO report confirmed that AI can neither invent nor author without human intervention.

Given the foregoing, you would not likely be able to enjoin Jim from commercially exploiting Portrait in either the EU or the US. As for the software company that created Portrait via its AI, the answer would, in all likelihood, be the same.

This article was originally published in Epicenter — European Policy Information Center.

Deepfakes: Is Your Company Helpless?

A competitor posts a damaging deepfake video about your company on Twitter. Coupled with the video is a defamatory statement about your CEO. What, if anything, can you do about it? That’s what this article will address. Think this is a hypothetical? Recently, MIT Tech Review covered the use of deepfake photos concerning Amazon’s workplace practices.

Section 230 of the Communications Decency Act of 1996 (“Section 230”) is a wide-ranging piece of legislation. Before Section 230, online forums and other digital platforms could be held liable under the common law for knowingly—or intentionally—allowing third parties to post defamatory material. Section 230 fundamentally changed the landscape. This article provides an overview of Section 230, its scope, potential reforms, and practical considerations for executives at companies negatively affected by such content. After presenting (1) an introductory hypothetical, I go on to explain: (2) deepfakes and fake news; (3) the liability of companies like Twitter under Section 230; and (4) some practical considerations on how to defend your company against deepfakes and defamation posted on social media.

(1) Rip-Off Report Hypo

Assume you own a franchise of Acme Coffee Company (“Acme”). Acme is publicly traded. Ms. Jane Austin posts on Rip Off Report about your store: “It’s a fraud! All of its coffee is actually old Substandard Coffee Company decaf. Don’t go there.” Ms. Austin even puts up a video of your employees drinking Substandard decaf and giggling as they use the company’s vacuum-packed moldy beans—which look and taste awful!—to make the coffee for your customers. The Rip Off Report post goes viral, and hashtags begin on Twitter: “#boycottacme!” Assume the post is untrue and that the video is a technologically manufactured fake. Nonetheless, your business suffers, and Acme stock takes a tailspin. Can you get the post removed from social media? Can you sue Ms. Austin for defamation? What about federal criminal securities fraud?

(2) Fake News / Deep Fakes

A producer of 60 Minutes defines fake news as “stories that are provably false, have enormous traction [popular appeal], and are consumed by millions of people.” Satire isn’t considered fake news. One example: “Planet Mars is headed straight for Earth.” An MIT Sloan article, Deepfakes, Explained, defines deep fakes as a “specific kind of synthetic media where a person in an image or video is swapped with another person’s likeness.” A deep fake can either be a swap of an image or voice or be manufactured anew. Recently, for example, a deep fake video on Instagram showed Mark Zuckerberg saying “whoever controls the data, controls the future.”

(3) Social Media Liability After Section 230

Two parts of Section 230 are of utmost importance. The first, 47 U.S.C. Section 230(c), protects actions by an “interactive computer service” to “restrict access to or availability of material that the provider considers to be obscene . . . or otherwise objectionable, whether or not such material is constitutionally protected[.]” The second, 47 U.S.C. § 230(f)(2), defines “interactive computer service” to include any “information service . . . or access software provider that . . . enables computer access by multiple users to a computer server[.]”

“Interactive computer service” has been interpreted to include not only expected forums like Twitter or Facebook but also online matchmaking services. In Carafano v. Metrosplash, Inc., for example, a 2003 decision from a federal appeals court in California, a user posted a fake profile containing false content on defendant’s Matchmaker site in order to harass a well-known actress. Matchmaker was nonetheless immune under Section 230 from claims of invasion of privacy, negligence, and defamation. The phrase has also been extended to employer-provided e-mail service. In Delfino v. Agilent Technologies, Inc., a California appeals court held in 2006 that an employer is protected by Section 230 even when an employee uses work e-mail to send harassing messages.

Once a company is deemed to run such a service, Section 230 has been interpreted to preempt various state causes of action, including defamation, negligence, and contract claims. Companies running an interactive computer service can also find succor under Section 230 from federal claims, including federal civil rights claims. In Noah v. AOL Time Warner, Inc., a 2003 federal district court decision from Virginia, Section 230 barred a Muslim plaintiff’s claim that AOL violated the federal prohibition against discrimination in places of public accommodation by allowing threatening and blasphemous postings about Muslims.

That being said, intellectual property claims, including copyright and trademark infringement, are exempted per Section 230(e)(2). This means that even an interactive computer service can be vicariously liable for infringing copyrights or trademarks posted on sites like Twitter. In that case, the regular analysis under the Copyright Act, including fair use, will apply. If Ms. Austin infringed one of your company’s trademarks in her deepfake video, or otherwise infringed a company copyright by, say, sampling a song licensed to you, then infringement analysis would apply.

Under the statute, it is of utmost importance that a company like Twitter not cross the line between merely keeping a post up or taking it down and actually modifying the message or posting. In Zeran v. America Online, a 1997 federal appeals court decision, the court held that Section 230 protected AOL from liability for refusing to take down a user’s defamatory posts containing an individual’s private phone number. Likewise, in Batzel v. Smith, a federal appeals court in California held that the moderator of a listserv was protected under § 230 so long as his modifications to a third party’s e-mail retained its “basic form and message.” Thus, if Twitter takes your Tweet and inserts “Apple” instead of “IBM,” this could remove the company from the protective umbrella of Section 230 if the Tweet thereby becomes defamatory or otherwise violates federal law.

Thus, Ms. Austin may be liable under the common law for her false accusations about Acme; Section 230 does not shield the creator of the content. And while Twitter could have been liable under the common law, the company is clearly immune under Section 230.

(4) Practical Considerations

In light of the foregoing, there are various strategies your company can take to protect itself from malicious deepfakes or fake news posted by competitors. One is the preemptive use of artificial intelligence (“AI”) based software. Some software in the marketplace, like that of Tel Aviv-based Cheq, uses various signals to determine the authenticity of content, including a site’s reputation and whether the source of the content is a bot. After a red flag shows, this type of software prevents its clients from buying advertisements on the page in question. If content is more malicious or sinister, the software contacts the publisher or platform. The MIT-IBM Watson AI Lab recently launched a study on how AI-created fake content can be countered by AI. Oftentimes, AI-generated fake news uses predictable combinations of words in its syntax. As a result, some software searches for those patterns to ferret out content suspected to be fake.
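To make the "predictable combinations of words" idea concrete, the toy sketch below flags text whose word sequences repeat far more often than ordinary prose would allow. It is purely illustrative: it is not Cheq's or any vendor's actual algorithm, and the function names and the threshold are assumptions chosen for the example.

```python
from collections import Counter

def repeated_ngram_ratio(text: str, n: int = 3) -> float:
    """Return the fraction of word n-grams in `text` that repeat an earlier n-gram."""
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    # Every occurrence of an n-gram beyond its first counts as a repeat.
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(ngrams)

def looks_machine_generated(text: str, threshold: float = 0.2) -> bool:
    """Crude red flag: heavily repeated phrasing can suggest templated or generated text.
    The 0.2 threshold is an arbitrary illustrative choice, not a calibrated value."""
    return repeated_ngram_ratio(text) > threshold

# Example: a highly repetitive snippet trips the flag; varied prose does not.
print(looks_machine_generated("breaking news " * 5))  # True
print(looks_machine_generated(
    "the quick brown fox jumps over the lazy dog near the quiet river bank"))  # False
```

Real detectors are far more sophisticated (using language-model perplexity, source reputation, and bot signals), but the basic move is the same: score the text against statistical patterns typical of generated content and flag outliers.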

The other strategy is corrective. After identifying the content, the next step could be getting it taken down, or at least flagged, by the host. If the host does not qualify as an “interactive computer service,” then it may be liable for the deepfake or defamation just as much as the competitor who posted it. This gives your company leverage in the negotiation. Even if the host is protected by Section 230, as in the case of Twitter, you may still contact the person or entity who posted the content. A request for a retraction on the platform, such as a curing Tweet, could also prove effective.

A version of this article has been published in the PLI Chronicle.

Digital counterfeits — sometimes protected.

Fake news. You’ve heard about it. “Deep fakes” are cousins of fake news — they are modified videos, photos, or recordings that are made to look original — but aren’t. Both are counterfeits of the originals.

What if robots or bots programmed with artificial intelligence (“AI”) could eliminate all or most fake news and deep fake content online? Even if AI could do this, federal law may still protect the existence of such counterfeit content on, say, Twitter. Meanwhile, federal law doesn’t protect counterfeit currencies or fashion knockoffs of, say, Ralph Lauren.

That’s the subject of Long & Associates principal Ryan E. Long’s talk for The Stanford Center for Legal Informatics (a/k/a “CodeX”), a multidisciplinary lab run by Stanford Law School and the Stanford Computer Science Department. To view the talk, please click HERE.

Can AI fight fake news?

Picture this: Tomorrow morning you get an audio message on your cell phone from The International New York Times: “Jerusalem: 1,400,000 New Coronavirus Cases!” Within minutes, there is a city-wide panic. You then scratch your head. “Wait a minute, the last census showed there are only about 931,756 people living in Jerusalem.” Luckily, you find out later in the day that the newspaper was hacked and that the story was created by malicious artificial intelligence (AI). Think this is imaginary? Not quite. Read more in this article, entitled Robot v. Robot: Can AI Fight Fake News?, that Long & Associates attorney Ryan E. Long wrote for an Israeli innovation publication.

Facial recognition technology–good, bad, or ugly?

It’s 5:30 a.m. and you hear a loud knock on the door. It’s the FBI. They suspect that you robbed a bank in Austin, Texas on December 31, 2019. How did they know? Facial recognition technology. The thing is, you were in London, England, on the 31st and have an airplane ticket to prove it. Think this is an unrealistic hypo? Think again. Learn more in the article below, titled The Match Game, that Long & Associates attorney Ryan E. Long wrote for Cognitive Times.

Twitter has posted a “deep fake” video of you shoplifting.

What can you do about it?

An embarrassing video of you shoplifting fancy dog food from Pet Food Express surfaces on Twitter. The video is a “deep fake”: either a manipulated version of a real video or one made up altogether. You don’t even own a dog and have never been to Pet Food Express! What can you do about it? Find out in this Digital Trends article, in which I suggest that Twitter should adopt a takedown process for such videos. Please click HERE.

How to beat a copycat.


If you run a billion (or even million) dollar brand, does it make sense to spend a few thousand dollars to protect your mark from copycats? Or if you are a songwriter, does it make sense to spend thirty-five dollars to register a copyright in your song? The answer is yes. But the recent “SUPREME” trademark drama shows why this answer isn’t so obvious to all.

“SUPREME” is the trademark of a streetwear brand that has been operating in New York City since at least 1994. The company that originated “SUPREME,” however, never registered its trademark. This was a fatal mistake. As reported in The Wall Street Journal, Mr. Michele di Pierro started a rogue version of the “SUPREME” brand in Europe in or about 2015. Since then, Mr. di Pierro has filed trademark registrations for “SUPREME” in dozens of countries.

While the New York company has priority rights to the “SUPREME” mark in the geographic region of its use, which certainly includes New York City, a first filer often has priority in every region in which the trademark isn’t being used. Had the senior “SUPREME” user registered its trademark in the United States and then extended the registration to other regions, it would have avoided this distasteful and costly dispute with copycat junior user Mr. di Pierro. The same lesson applies to copyright. Say Bob Dylan writes a song, say All Along the Watchtower, but doesn’t register the copyright, and someone else registers it first. The copycat is then the presumptive owner of the song, not Mr. Dylan, who would need to move to cancel the registration.

The lesson is that a simple registration today can save you millions of dollars in legal fees tomorrow fighting a copycat.

You’ve been sued in the E.U. for using a “short extract”!

You are the CEO of Google. When you wake up tomorrow morning, your general counsel calls you: “We’ve been sued in the E.U. for copyright infringement! The claim: our search results for Le Parisien and dozens of other newspapers used more than one word and/or went beyond a ‘short extract.’” Your response: “Is this April Fools’ Day?”

With the new E.U. copyright law, it will likely be easier for tech companies to get caught in its legal labyrinth.

Fortunately or unfortunately, the answer is “no.” The E.U. is in the midst of passing copyright legislation that would limit or even terminate the safe harbors currently in place for platforms like Facebook, on which infringing material can be posted. The other part of the legislation makes it an infringement for Google to display search results from news publishers like Le Parisien that include more than one word or more than “very short extracts” of news articles.

In the U.S., the Digital Millennium Copyright Act provides a safe harbor take-down procedure to companies like Facebook that have infringing materials uploaded by users. Fair use generally allows search engines like Google to produce search results that use more than one word, or a “very short extract,” of a news article when such use acts as a complementary, and not a substitute, market for the article.

Content creators have arguably lost a great deal in revenues. Tons of infringing material are posted daily on sites like YouTube. Policing such infringing materials, especially when they can scale so quickly, is very expensive and time-consuming—particularly for smaller content creators. On top of this, there is currently no bright-line rule in the U.S. as to when a particular use is fair or infringing. As a result, rather than actually going to Le Parisien’s website, many people merely read an excerpt and stop there. The new E.U. law seeks to address both issues: it reduces or even eliminates the safe harbor for sites like YouTube, and it provides a bright line between fair and infringing use.

In so doing, the law’s intent is to shift bargaining power back to content creators. Whether such laws eliminate free riding or unnecessarily hinder the free flow of information remains to be seen. One thing is certain, however: this isn’t April Fools’ Day for companies like Google doing business in the E.U. The regulatory and licensing costs of doing business there will, in all likelihood, increase.

ICO — Unidentified Object or Security?

An Initial Coin Offering (“ICO”). Is it an unidentified (not flying) object or an offer of securities, like any other stock or interest you’d buy in a company?

Whether you are buying into or issuing an ICO of cryptocurrency, the question isn’t an academic one. Until recently, whether an ICO was an offer of “securities” within the meaning of federal law was relatively undecided. As a result, whether you were safe in trying to use various exemptions, such as private placements of securities under Regulation D, was uncertain. That appears to have changed.

Recently, the federal government indicted defendant Maksim Zaslavskiy for securities fraud and conspiracy to commit securities fraud. Mr. Zaslavskiy’s argument in support of dismissing the government’s indictment was that the digital currencies involved weren’t “securities” akin to stocks or bonds. However, the United States District Court for the Eastern District of New York found that a reasonable jury could find that his digital currencies satisfied the test for a “security” under applicable case law.

Whether this decision withstands scrutiny on appeal, if it is appealed, remains to be seen.

That being said, if you are operating in the cryptocurrency world, the best way to deal with an ICO is to comply with applicable federal (and state) exemptions to the otherwise cumbersome securities registration requirements.

In so doing, you can buy into or carry out an ICO without looking over your shoulder.