Where did AI come from?

Artificial intelligence (“AI”) applications are growing. From facial recognition to online shopping, AI is being used to supplement, and at times substitute for, human decision making. Where does AI come from, how was it developed, and where is it heading?

On March 7th, in conjunction with the AI Accelerator Institute in London, the AI Keyhole series was launched to address some of these issues. The series will invite various members of the AI community to speak on these topics. The first guest was Professor Michael Wooldridge of the University of Oxford’s Department of Computer Science. The subject: the origins and development of AI. The podcast from that talk is available via the recording below. To learn more about the date and guest for the next installment, please e-mail the host, Ryan, at: rlong@landapllc.com.

Podcast Interview with Professor Michael Wooldridge of Oxford University’s Computer Science Department
Retweeting Defamation: Your Potential Liability . . .

Wishing you a bright start to your 2022.

Defamation. You’ve heard of it. It’s generally a false statement of fact about someone, including a company, that injures their reputation. For example, a North Face statement that Patagonia’s Gore-Tex rain shell jacket isn’t waterproof, when it is, would be defamatory. North Face could get sued by Patagonia.

But did you know that a North Face employee’s repetition of the defamation, whether via Facebook, a tweet, or even verbally, could be used as evidence of malice (intentional defamation) in a defamation suit? It could also be a separate act of defamation.

Find out more in this article I wrote for Quill, published by the Society of Professional Journalists.

In the meantime, please don’t hesitate to contact me should you or your company have any intellectual property questions concerning the technology or media industries.

Santa Claus has sued you . . .

No. You won’t ever get sued by Santa Claus for using his image on Twitter. But Twitter did just pass a new rule: you can use images of others in your Tweets only with their permission. Please click here to learn more.

Even if you don’t use Twitter, the foregoing is still relevant to your use of other people’s images in, say, advertising or other public communications. The rule is related to the right of publicity: every person has a right not to have his or her image used without permission. The rule’s contours vary from state to state, such as for public figures and issues of public concern. But you should be aware that posting another person’s image without their permission is not without risk.

In the meantime, I wish you a joyous Christmas and fresh new start to 2022.

Bought A Stolen NFT: Liable?

Non-fungible tokens (“NFTs”). I’m sure you’ve heard of them. But what are they? And how do you protect against buying or selling NFTs that contain stolen, counterfeit, or otherwise infringing materials? Whether you invest in an NFT business, buy or sell NFTs, or just want to know more, this article I wrote for CompTIA will be of interest. Please click HERE to read more.

In the meantime, if you or a colleague have a breach of contract litigation or licensing issue concerning an NFT, please contact me. My office always tries to find novel solutions even to tricky litigation and licensing issues.

Artificial Intelligence Liability

Do you invest in artificial intelligence (“AI”)? Or does your company use it? In either case, issues concerning AI liability will likely arise. Whether you are in the E.U. or U.S., this article I wrote for the London School of Economics Business Review will be relevant to you. Please click HERE to read more.

Contracts Still Matter? Read This.

If you hire a freelancer in New York City, do your contractual terms matter under the Freelance Isn’t Free Act (“FIFA”)? Yesterday, a New York County court held that they do. In so doing, the Court dismissed a FIFA-based complaint filed against our client Precision Initiative Tech. Corp., an Austin-based tech placement agency.

The decision is one of the few to date interpreting FIFA. It can be read HERE. The Court found that a choice of forum clause in the parties’ contract — requiring a lawsuit to be filed in Massachusetts — barred the suit from being filed in New York.

Even if you don’t hire freelancers in New York City, the decision may still be relevant to you. Advocates in other locales, including the United Kingdom, are lobbying for legislation similar to FIFA. So don’t be surprised if FIFA-like legislation comes to your city in the not-too-distant future.

In the meantime, if you or a colleague have a breach of contract litigation or licensing issue, please contact me. My office always tries to find novel solutions even to tricky litigation and licensing issues.

AI Creations: Who Owns Them?

Imagine you just purchased a painting from Sotheby’s called Portrait of Edmond Belamy (“Portrait”) for $432,500. Portrait was AI-generated. Your neighbor Jim takes a photo of the painting as you are bringing it inside. Jim then puts Portrait on t-shirts for sale online.

What, if anything, can you do about it? What about the software company that owns the AI? Does it matter whether you live in the US or the EU?

The issue is not hypothetical. The number of AI-created paintings, software programs, and other inventions has grown immensely. While copyright and patent law can protect human-made paintings and software, respectively, AI-generated creations are not protectable under either regime in the US. In the EU, the answer is largely the same, as we shall see below.

Can AI be an “author” or “inventor”?

A. The European Union

On 28 January 2020, the European Patent Office rejected two patent applications on the grounds that “an inventor designated in the application has to be a human being, and not a machine.” In both applications, “a machine called ‘DABUS,’ which is described as a ‘type of connectionist artificial intelligence,’ is named as the inventor.” As we shall see, decisions regarding copyright and patent ownership in the US follow a similar rationale.

Thereafter, the European Parliament passed a number of resolutions concerning AI throughout 2020. A report “on intellectual property rights for the development of artificial intelligence technologies” from 10 October 2020 is most relevant for the purposes of this article. It recommends that, in apportioning intellectual property rights, “the degree of human intervention” and “autonomy of AI” should be taken into account.

The report goes on to note “the difference between AI-assisted human creations and AI-generated creations, with the latter creating new challenges for IPR [intellectual property rights] protection, such as questions of ownership, inventorship and appropriate remuneration.” The report recommends that “works autonomously produced by artificial agents and robots might not be eligible for copyright protection, in order to preserve the principle of originality, which is linked to a natural person.” As such, “ownership rights, if any, should be assigned to natural or legal persons that created the work lawfully.”

On 20 October 2020, the European Parliament adopted the recommendations and refined them via a Resolution. For example, while AI and related technologies “based on computational models and algorithms” are regarded as “mathematical methods” and are not patentable, such models and computer programs may be protected “when they are used as part of an AI system that contributes to producing a further technical effect.” (Emphasis added.)

The Resolution goes on to clarify that “where AI is used only as a tool to assist an author in the process of creation, the current IP framework remains applicable.” (Emphasis added.) In all cases, only a natural person can be listed as the author of a copyrighted work or the inventor on a patent in the EU.

That all being said, the 20 October Resolution does not state whether the author needs to delineate in the application which parts of the creation were AI-made and which were created by the author. This is how copyrights for compositions by multiple authors are registered in the US, with the application denoting which author wrote particular lyrics or musical passages. A similar approach could be taken in a subsequent Resolution.

B. The United States

Copyright case law has also indicated that AI cannot be an “author” under the Copyright Act. In Naruto v. Slater, the Ninth Circuit Court of Appeals held that an Indonesian monkey named “Naruto” couldn’t own the copyright to his “Monkey Selfies.” The reason: the U.S. Copyright Office “will refuse to register a claim if it determines that a human being did not create the work.” (Emphasis added.) The Office further states that it will exclude works “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” (Emphasis added.) Consequently, AI-created products are likely not subject to copyright registration.

Similarly, the USPTO, in an April 27, 2020 decision, ruled that AI cannot be listed as the “inventor” in a patent application. The application was filed by the Artificial Inventor Project (“AIP”), a team of international patent attorneys whose mission is to explore AI patentability. AIP filed a sealed patent application on July 29, 2019, for “Devices and Methods for Attracting Enhanced Attention” (the “DABUS application”). According to the application, this “creativity machine” is “programmed as a series of neural networks that have been trained with general information in the field of endeavor to independently create the invention.” The inventor was listed in a substitute statement as “DABUS (the invention was autonomously generated by artificial intelligence).”

Under relevant federal patent law, an “inventor” is defined as “the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.” However, the USPTO’s decision denying the DABUS application pointed out that federal law consistently refers to inventors as natural persons. One section provides that “[w]hoever invents or discovers any new and useful process . . . may obtain a patent therefor.” According to the USPTO, “‘[w]hoever’ suggests a natural person.” Other provisions of federal patent law refer to individuals and use pronouns specific to natural persons (“himself” and “herself”) when referring to the “individual” who believes himself or herself to be the original inventor, or an original joint inventor, of a claimed invention in the application. The USPTO’s finding is consistent with Federal Circuit case law holding that an “inventor” must be a natural person.

As in the E.U., a creation may still be copyrightable or patentable in the U.S. if it was made with the assistance of AI. The question of how much AI involvement renders an otherwise human-made creation a product of AI has yet to be addressed.

How to protect AI creations?

If AI creations are not, for the time being, protectable under either copyright or patent, then how can one protect them? Contractual provisions in licensing agreements are one option. Even if the licensed technology is neither copyrightable nor patentable, contract law can provide a gap filler between contracting parties. However, contracts do not preclude reverse engineering once the product is released into the market. Another option is federal or state trade secret law, though it, too, does not preclude reverse engineering.

In light of these open questions, the World Intellectual Property Organization (WIPO) held a conference in late 2020 to address ownership of AI-created works. One question in WIPO’s Revised Issues Paper: “[i]f a human inventor is required to be named, should AI-generated inventions fall within the public domain or should the law give indications of the way in which the human inventor should be determined?” Likewise, the U.S. Copyright Office held a conference in February 2020 titled “Copyright in the Age of Artificial Intelligence” and, later in the year, the USPTO published a report, “Public Views on Artificial Intelligence and Intellectual Property Policy.” The USPTO report confirmed the view that AI cannot “invent nor author without human intervention.”

Given the foregoing, you would not likely be able to enjoin Jim from commercially exploiting Portrait in either the EU or the US. As for the software company that created Portrait via its AI, the answer would, in all likelihood, be the same.

This article was originally published in Epicenter — European Policy Information Center.

Deepfakes: Is Your Company Helpless?

A competitor posts a damaging deepfake video about your company on Twitter. Coupled with the video is a defamatory statement about your CEO. What, if anything, can you do about it? That’s what this article will address. Think this is a hypothetical? Recently, MIT Tech Review covered the use of deepfake photos concerning Amazon’s workplace practices.

Section 230 of the Communications Decency Act of 1996 (“Section 230”) has been a wide-ranging piece of legislation. Before Section 230, online forums and other digital venues could be held liable under the common law for knowingly, or intentionally, allowing third parties to post defamatory material. Section 230 fundamentally changed the landscape. This article provides an overview of Section 230, its scope, potential reforms, and practical considerations for executives at companies negatively affected by such content. After presenting (1) an introductory hypothetical, I go on to explain: (2) deepfakes and fake news; (3) the liability of companies like Twitter under Section 230; and (4) some practical considerations on how to defend your company against deepfakes and defamation posted on social media.

(1) Rip Off Report Hypo

Assume you own a franchise of Acme Coffee Company (“Acme”). Acme is publicly traded. Ms. Jane Austin posts on Rip Off Report about your store: “It’s a fraud! All of its coffee is actually old Substandard Coffee Company decaf. Don’t go there.” Ms. Austin even puts up a video of your employees giggling as they brew Substandard’s vacuum-packed moldy decaf beans (which look and taste awful!) for your customers. The Rip Off Report post goes viral, and hashtags begin on Twitter: “#boycottacme!” Assume the Rip Off Report post is untrue and that the video is a technologically manufactured fake. Nonetheless, your business suffers. Acme stock takes a tailspin. Can you get the post removed from social media? Can you sue Ms. Austin for defamation? What about federal criminal securities fraud?

(2) Fake News / Deepfakes

A producer of 60 Minutes defines fake news as “stories that are provably false, have enormous traction [popular appeal], and are consumed by millions of people.” Satire isn’t considered fake news. One example of fake news: “Planet Mars is headed straight for Earth.” An MIT Sloan article, Deepfakes, Explained, defines deepfakes as a “specific kind of synthetic media where a person in an image or video is swapped with another person’s likeness.” A deepfake can either swap an existing image or voice or be manufactured anew. Recently, for example, a deepfake video on Instagram showed Mark Zuckerberg saying “whoever controls the data, controls the future.”

(3) Social Media Liability After Section 230

Two parts of Section 230 are of utmost importance. The first, 47 U.S.C. § 230(c), protects actions by an “interactive computer service” to “restrict access to or availability of material that the provider considers to be obscene . . . or otherwise objectionable, whether or not such material is constitutionally protected[.]” The second, 47 U.S.C. § 230(f)(2), defines “interactive computer service” to include any “information service . . . or access software provider that . . . enables computer access by multiple users to a computer server[.]”

“Interactive computer service” has been interpreted to include not only expected forums like Twitter or Facebook but also online matchmaking services. In Carafano v. Metrosplash.com, Inc., a 2003 decision from a federal appeals court in California, a user posted a fake profile with false content on defendant Matchmaker’s site in order to harass a well-known actress. Matchmaker was nonetheless immune under Section 230 from claims of invasion of privacy, negligence, and defamation. The phrase has also been extended to employer-provided e-mail service. In Delfino v. Agilent Technologies, Inc., a California appeals court held in 2006 that an employer is protected by Section 230 even where an employee uses work e-mail to send harassing messages.

Once a company is deemed to run such a service, Section 230 has been interpreted to preempt various state causes of action, including defamation, negligence, and contract claims. Companies running an interactive computer service can also find succor under Section 230 from federal claims, including federal civil rights claims. In Noah v. AOL Time Warner, Inc., a 2003 federal district court decision from Virginia, Section 230 barred a Muslim plaintiff’s claim that AOL, by permitting threatening or blasphemous postings about Muslims, violated the federal prohibition against discrimination in places of public accommodation.

That being said, copyright and related trademark infringement claims are exempted under Section 230(e)(2). This means that even an interactive computer service can be vicariously liable for infringing copyrights or trademarks posted on sites like Twitter. In that case, the regular analysis under the Copyright Act, including fair use, will apply. If Ms. Austin infringed one of your company’s trademarks in her deepfake video, or otherwise infringed a company copyright by, say, sampling a song licensed to you, then infringement analysis would apply.

It is of utmost importance, under the statute, that a company like Twitter not cross the line between merely keeping a post up or taking it down and actually modifying the message or posting. In Zeran v. America Online, a 1997 federal appeals court decision, the court held that Section 230 protected AOL from liability for refusing to take down a user’s defamatory posts that included an individual’s private phone number. Likewise, in Batzel v. Smith, a federal appeals court in California held that the moderator of a listserv was protected under Section 230 so long as his modifications to a third party’s e-mail retained its “basic form and message.” Thus, if Twitter took your Tweet and inserted “Apple” instead of “IBM,” this could remove the company from the protective umbrella of Section 230 if the Tweet thereby became defamatory or otherwise violated federal law.

Thus, Ms. Austin may be liable at common law for her false accusations about Acme; Section 230 does not shield those who create the content. And while Twitter could have been liable at common law, the company would clearly not be liable under Section 230.

(4) Practical Considerations

In light of the foregoing, there are various strategies your company can take to protect itself from malicious deepfakes or fake news posted by competitors. One is the preemptive use of artificial intelligence (“AI”) based software. Some software in the marketplace, like Tel Aviv-based Cheq, uses a range of signals to determine the authenticity of content, including a site’s reputation and whether the source of the content is a bot. When a red flag appears, this type of software prevents its clients from buying advertisements on the page in question. If the content is more malicious or sinister, the software contacts the publisher or platform. The MIT-IBM Watson AI Lab recently launched a study on how AI-created fake content can be countered by AI. Oftentimes, AI-generated fake news uses predictable combinations of words in its syntax, so some software searches for these patterns to ferret out content suspected of being fake, as illustrated in the sketch below.
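To make that pattern-based screening concrete, here is a minimal, hypothetical sketch in Python. It assumes a single crude signal: machine-generated text tends to reuse predictable word combinations, so an unusually high rate of repeated word trigrams is treated as a red flag. The function names and the 0.2 threshold are illustrative assumptions of mine, not any vendor’s actual method.

from collections import Counter

def repeated_ngram_rate(text, n=3):
    # Fraction of word n-grams that occur more than once in the text.
    # Heavily repeated phrasing is one (weak) hint of machine-generated text.
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

def flag_if_suspicious(text, threshold=0.2):
    # The 0.2 threshold is an illustrative assumption; a real system would
    # calibrate it on labeled data and combine it with many other signals.
    return repeated_ngram_rate(text) > threshold

sample = ("the market is up the market is up the market is up "
          "because analysts say the market is up")
print(repeated_ngram_rate(sample))  # high rate of repeated trigrams
print(flag_if_suspicious(sample))   # True: flagged under the assumed threshold

In practice, detection software combines many such linguistic statistics with non-linguistic signals, such as site reputation and bot likelihood, rather than relying on any single heuristic.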

The other strategy is corrective. After identifying the content, the next step could be getting it taken down, or at least flagged, by the host. If the host does not qualify as an “interactive computer service,” then it may be liable for the deepfake or defamation just as much as the competitor who posted it. This gives your company leverage in any negotiation. Even if the host is protected by Section 230, as in the case of Twitter, you may still contact the person or entity who posted the content. A request for a retraction on the platform, such as a curing Tweet, could also prove effective.

A version of this article was published in the PLI Chronicle. See https://plus.pli.edu/Details/Details?fq=id:(321454-ATL1)&referrer=.

Digital counterfeits — sometimes protected.

Fake news. You’ve heard about it. “Deep fakes” are cousins of fake news: modified videos, photos, or recordings made to look original, but which aren’t. Both are counterfeits of the originals.

What if robots or bots programmed with artificial intelligence (“AI”) could eliminate all or most fake news and deep fake content online? Even if AI could do this, federal law may still protect the existence of such counterfeit content on, say, Twitter. Meanwhile, federal law doesn’t protect counterfeit currencies or fashion knockoffs of, say, Ralph Lauren designs.

That’s the subject of Long & Associates principal Ryan E. Long’s talk for the Stanford Center for Legal Informatics (a/k/a “CodeX”), a multidisciplinary lab run by Stanford Law School and the Stanford Computer Science Department. To view the talk, please click HERE.

Can AI fight fake news?

Picture this: tomorrow morning you get an audio message on your cell phone from The International New York Times: “Jerusalem: 1,400,000 New Coronavirus Cases!” Within minutes, there is a city-wide panic. You then scratch your head: “Wait a minute, the last census showed there are only about 931,756 people living in Jerusalem.” Luckily, you find out later in the day that the newspaper was hacked and that the story was created by malicious artificial intelligence (AI). Think this is imaginary? Not quite. Read more in this article, entitled Robot v. Robot: Can AI Fight Fake News?, that Long & Associates attorney Ryan E. Long wrote for an Israeli innovation publication.