“My AI did it.” One could imagine a company responding this way when facing a lawsuit over, say, an artificial intelligence (“AI”)-powered robot gone astray. But can this response be a legally viable defense? Find out more in this article I wrote for New Matter, a publication of the California Lawyers Association. Please click HERE to download it.
Author: Ryan E. Long
Mathematics of AI: Everyday Importance
What is the math behind artificial intelligence (“AI”)? From facial recognition technology to online shopping, AI relies on algorithms. But what are they, and how important is the data they use? Find out more in this podcast interview with Professor Christian Igel of the University of Copenhagen’s Department of Computer Science, which I conducted in conjunction with the AI Accelerator Institute in London:
This podcast is the second in a series entitled “AI Keyhole: Evolution, Applications & Policy.” For more information on the date and guest for the next installment about AI and fintech, please e-mail the host — Ryan — at rlong@landapllc.com or Tim Mitchell at the Institute: tim@aiacceleratorinstitute.com.
Online Echo Chambers: Stuck In One?
Digital echo chambers and filter bubbles. What are they, why should you care, and how do you know if you are in one? Artificial intelligence and search engines are increasingly powerful forces in our digital lives. They are affecting the way you and companies make decisions — including the mortgage interest and car loan rates you are offered. Find out more in this talk I’m giving in conjunction with analytics software company Valuenex on April 6th in Palo Alto. Please click HERE to register to attend virtually or in person.
Where did AI come from?
Artificial intelligence (“AI”) applications are growing. From facial recognition technology to online shopping, AI is being used to supplement — and at times substitute for — human decision making. Where does AI come from, how was it developed, and where is it heading?
On March 7th, in conjunction with the AI Accelerator Institute in London, the AI Keyhole series was launched to address some of these issues. The series will invite various members of the AI community to speak on these topics. The first guest was Professor Michael Wooldridge of the University of Oxford’s Department of Computer Science. The subject: the origins and development of AI. The podcast from that talk can be heard via the recording below. To learn more about the date and guest for the next installment, please e-mail the host — Ryan — at: rlong@landapllc.com.
Retweeting Defamation: Your Potential Liability . . .
Wishing you a bright start to your 2022.
Defamation. You’ve heard of it. It’s generally a false statement of fact about someone — including a company — that injures their reputation. For example, a statement by North Face that Patagonia’s Gore-Tex rain shell jacket isn’t waterproof — when it is — would be defamatory. North Face could get sued by Patagonia.
But did you know that a North Face employee’s repetition of the defamation, whether via Facebook, a tweet, or even verbally, could be used as evidence of malice — intentional defamation — in a defamation suit? It could also constitute a separate act of defamation.
Find out more in this article I wrote for Quill. It’s published by the Society of Professional Journalists.
In the meantime, please don’t hesitate to contact me should you or your company have any intellectual property questions related to the technology or media industries.
Santa Claus has sued you . . .

No. You won’t ever get sued by Santa Claus for using his image on Twitter. But Twitter did just pass a new rule: you can use images of others in your Tweets only with their permission. Please click here to learn more.
Even if you don’t use Twitter, the foregoing is still relevant to your use of other people’s images in, say, advertising or other public communications. The rule is related to the right of publicity: every person has a right not to have his or her image used without permission. The contours of this right vary from state to state, with exceptions for public figures and issues of public concern. But you should be aware that posting another person’s image without their permission is not without risks.
In the meantime, I wish you a joyous Christmas and fresh new start to 2022.
Bought A Stolen NFT: Liable?
Non-fungible tokens (“NFTs”). I am sure you’ve heard of them. But what are they? And how do you protect against buying or selling NFTs that contain stolen, counterfeit, or otherwise infringing materials? Whether you invest in an NFT business, buy or sell NFTs, or just want to know more, this article I wrote for CompTIA will be of interest. Please click HERE to read more.
In the meantime, if you or a colleague have a breach of contract litigation or licensing issue concerning an NFT, please contact me. My office always tries to find novel solutions even to tricky litigation and licensing issues.
Artificial Intelligence Liability
Do you invest in artificial intelligence (“AI”)? Or does your company use it? In either case, issues concerning AI liability will likely arise. Whether you are in the E.U. or the U.S., this article I wrote for the London School of Economics Business Review will be relevant to you. Please click HERE to read more.
Contracts Still Matter? Read This.
If you hire a freelancer in New York City, do your contractual terms matter under the Freelance Isn’t Free Act (“FIFA”)? Yesterday, a New York County court held that they do. In so doing, the Court dismissed a FIFA-based complaint filed against my client Precision Initiative Tech. Corp., an Austin-based tech placement agency.
The decision is one of the few to date interpreting FIFA. It can be read HERE. The Court found that a choice of forum clause in the parties’ contract — requiring a lawsuit to be filed in Massachusetts — barred the suit from being filed in New York.
Even if you don’t hire freelancers in New York City, the decision may still be relevant to you. Advocates in other locales, including the United Kingdom, are lobbying for legislation similar to FIFA. So don’t be surprised if FIFA-like legislation comes to your city in the not-too-distant future.
In the meantime, if you or a colleague have a breach of contract litigation or licensing issue, please contact me. My office always tries to find novel solutions even to tricky litigation and licensing issues.
AI Creations: Who Owns Them?
Imagine you just purchased a painting from Sotheby’s called Portrait of Edmond Belamy (“Portrait”) for $432,500. Portrait was AI-generated. Your neighbour Jim takes a photo of the painting as you are bringing it inside. Jim puts Portrait on t-shirts for sale online.
What, if anything, can you do, provided you wanted to? What about the software company who owns the AI? Does it matter whether you live in the US or the EU?
The issue is not hypothetical. The number of AI-created paintings, software programs, and other inventions has grown immensely. While copyright and patent law can protect human-made paintings and software, respectively, AI-generated creations are not protectable under either regime in the US. In the EU, the answer is largely the same, as we shall see below.
Can AI be an “author” or “inventor”?
A. The European Union
The European Patent Office on 28 January 2020 rejected two patent filings on the grounds that “an inventor designated in the application has to be a human being, and not a machine.” In both applications, “a machine called ‘DABUS,’ which is described as a ‘type of connectionist artificial intelligence,’ is named as the inventor.” As we shall see, decisions regarding copyright and patent ownership in the US follow a similar rationale.
Thereafter, the European Parliament passed a number of resolutions concerning AI throughout 2020. A report “on intellectual property rights for the development of artificial intelligence technologies” from 10 October 2020 is most relevant for the purposes of this article. It recommends that, in apportioning intellectual property rights, “the degree of human intervention” and “autonomy of AI” should be taken into account.
The report goes on to note “the difference between AI-assisted human creations and AI-generated creations, with the latter creating new challenges for IPR [intellectual property rights] protection, such as questions of ownership, inventorship and appropriate remuneration.” The report recommends that “works autonomously produced by artificial agents and robots might not be eligible for copyright protection, in order to preserve the principle of originality, which is linked to a natural person.” As such, “ownership rights, if any, should be assigned to natural or legal persons that created the work lawfully.”
On 20 October 2020, the European Parliament adopted the recommendations and refined them via a Resolution. For example, while AI and related technologies “based on computational models and algorithms” are regarded as “mathematical methods” and are not patentable, such models and computer programs may be protected “when they are used as part of an AI system that contributes to producing a further technical effect.” (Emphasis added.)
The Resolution goes on to clarify that “where AI is used only as a tool to assist an author in the process of creation, the current IP framework remains applicable.” (Emphasis added.) In all cases, only a natural person can be listed as the author of a copyrighted work or the inventor of a patented invention in the EU.
That all being said, the 20 October Resolution does not state whether the author needs to delineate in the application which parts of the creation were made by AI and which were created by the author. This is akin to how copyright registrations for compositions by multiple authors are filed in the US – denoting which of the authors wrote particular lyrics or musical notes. A similar approach could be adopted in a subsequent Resolution.
B. The United States
Copyright case law has also indicated that AI cannot be an “author” under the Copyright Act. In Naruto v. Slater, the Ninth Circuit Court of Appeals held that an Indonesian monkey named “Naruto” couldn’t own the copyright to his “Monkey Selfies.” The reason: the U.S. Copyright Office “will refuse to register a claim if it determines that a human being did not create the work.” (Emphasis added.) The Office further states that it will exclude works “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” (Emphasis added.) Consequently, AI-created products are likely not eligible for copyright registration.
Similarly, the USPTO, in an April 27, 2020 decision, ruled that AI cannot be listed as the “inventor” in a patent application. The application was filed by the Artificial Inventor Project (“AIP”), a team of international patent attorneys whose mission is to explore AI patentability. AIP filed a sealed patent application on July 29, 2019, for “Devices and Methods for Attracting Enhanced Attention” (the “DABUS application”). According to the application, this “creativity machine” is “programmed as a series of neural networks that have been trained with general information in the field of endeavor to independently create the invention.” The inventor on the substitute application was listed as “DABUS (the invention was autonomously generated by artificial intelligence).”
Under relevant federal patent law, an “inventor” is defined as “the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.” However, the USPTO’s decision denying the DABUS application pointed out that federal law consistently refers to inventors as natural persons. One section provides that “[w]hoever invents or discovers any new and useful process . . . may obtain a patent therefor.” According to the USPTO, “‘[w]hoever’ suggests a natural person.” Other provisions of federal patent law refer to individuals and use pronouns specific to natural persons — “himself” and “herself” — when referring to the “individual” who believes himself or herself to be the original inventor or an original joint inventor of a claimed invention on the application. The USPTO’s finding is consistent with Federal Circuit case law, which has held that an “inventor” must be a natural person.
As in the EU, a creation may still be copyrightable or patentable in the US if it was made with the assistance of AI. The question of how much AI involvement renders an otherwise human-made creation a product of AI has yet to be addressed.
How to protect AI creations?
If AI creations are not, for the time being, protectable under either copyright or patent, then how can one protect them? Contractual provisions in licensing agreements are one option. Even if the licensed technology is neither copyrightable nor patentable, contract law can provide a gap filler between the contracting parties. However, this doesn’t preclude reverse engineering once the product is released into the market. Another alternative is federal or state trade secret law. Even then, trade secret law doesn’t preclude reverse engineering.
In light of these open questions, the World Intellectual Property Organization (WIPO) held a conference in late 2020 to address ownership of AI-created works. One question in WIPO’s Revised Issues Paper: “[i]f a human inventor is required to be named, should AI-generated inventions fall within the public domain or should the law give indications of the way in which the human inventor should be determined?” Likewise, the U.S. Copyright Office held a conference in February 2020 titled “Copyright in the Age of Artificial Intelligence” and, later in the year, the USPTO published a report, “Public Views on Artificial Intelligence and Intellectual Property Policy.” The USPTO report confirmed that AI cannot “invent nor author without human intervention.”
Given the foregoing, you would not likely be able to enjoin Jim from commercially exploiting Portrait in either the EU or the US. As for the software company that created Portrait via its AI, the answer would, in all likelihood, be the same.
This article was originally published in Epicenter — European Policy Information Center.