FBI v. Apple – Round 2

How important is your iPhone privacy? Does it outweigh law enforcement’s interest in obtaining evidence of crimes, such as child pornography, from your iPhone? According to a recent New York Times article, Apple has decided to plug a privacy hole in the iPhone through which law enforcement could crawl. The plug is a response to the FBI’s previous end-run around Apple’s software. You can read more about FBI v. Apple — Round 1 — here.

As the Times article makes clear, Indiana law enforcement officials used a $15,000 device from Grayshift to unlock 96 iPhones in 2018, each time with a warrant. In Round 1, a magistrate judge essentially ordered Apple to create a back door through the iPhone’s encryption for the FBI to use. No such overreach exists in Round 2: nothing suggests the Indiana warrants required Apple to create a back door. And, as with real property, law enforcement has the right to enter forcibly once armed with a warrant.

But Apple’s plug likely renders such devices obsolete. In so doing, Apple has made it harder for law enforcement to access your iPhone even when there is a warrant. Some district attorneys, as the article points out, have argued that Apple is “blatantly protecting criminal activity.” That view is black and white: either Apple permits easy third-party access and is good, or it permits none and is bad.

It also ignores a middle road. When law enforcement obtains a warrant to search bitcoin you have stored in a Swiss bunker, they will not be able to access it without help. The bunker is not linked to the internet, and while the FBI could physically access it if necessary, the data on the blockchain is meaningless on its own: all identities are protected by cryptographic hash signatures. One solution would be for the owner of the bunker, your bitcoin landlord, to obtain the needed information about your account and submit it to the judge for private (“in camera”) review. This solution was proposed in Round 1.

By giving iPhone evidence to the judiciary, Apple could proudly assist law enforcement’s prosecution of child pornography, among other crimes. At the same time, it could keep plugging the privacy holes that would otherwise cost it revenue.

Artificial Intelligence Art: Who Owns It?

If your pet dog Hans takes a selfie, does he own the copyright? A recent decision by the U.S. Court of Appeals for the Ninth Circuit (“Ninth Circuit”) is instructive. It says that a monkey can’t own the copyright to his selfie. The reason? Only humans can own a copyright under U.S. law. But who owns artwork created by artificial intelligence (“AI”)? This entry addresses that question.

The Ninth Circuit Decision

The Indonesian monkey at the heart of the dispute is named “Naruto.” He is actually quite handsome, as you can see if you look up his profile shot (not on LinkedIn, of course). The story began on the island of Sulawesi, not Fantasy Island but close. David Slater, a British wildlife photographer, left his camera unattended. Naruto then picked up the camera and, harnessing his training at the British Museum School of Art and Design, began taking stunning photos of himself.

While Gentlemen’s Quarterly and other magazines sought to feature him in their pages, Naruto couldn’t be bothered. His images, posted by Mr. Slater, had already gone viral. Naruto retained the services of People for the Ethical Treatment of Animals (“PETA”) to sue Mr. Slater and his publishers for copyright infringement. The Ninth Circuit dismissed the suit because Naruto, as a non-human, can’t own the copyright to the photos.

Unfortunately, Naruto couldn’t be reached for comment.

Part of the court’s reasoning was simple. The U.S. Copyright Office “will refuse to register a claim if it determines that a human being did not create the work.” The Office further states that it will exclude works “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” The question the decision raises is whether computer-generated art is copyrightable and, if so, whether the AI, or its programmer, would be the owner.

AI Art & Blurred Lines

The issue of AI-created artwork isn’t academic. According to a recent article in Artnet News, the Paris-based collector Nicolas Laugero-Lasserre acquired Le Comte de Belamy, a portrait created by artificial intelligence. Mr. Laugero-Lasserre bought the work directly from Obvious, the collective that created the AI behind Le Comte de Belamy. Instead of a signature, the artwork is signed with the equation used by the AI that generated it. Naruto is jealous.

As AI gets smarter and more evolved, it will be capable of much more than creating art. Think of AI like that found in WarGames (1983), which can create systems of engagement resembling warfare. Now extrapolate such a system to business: a company like Obvious could create AI that spawns not only art but other companies, chock full of their own versions of Siri. This AI-dominated world is laid out in movies like Her (2013), in which the main character, played by Joaquin Phoenix, forms an intimate relationship with an AI app voiced by Scarlett Johansson. With the proliferation of synthetic body parts, imagining a fully functioning AI cyborg that resembles a human isn’t as far-fetched as it may have sounded in the 1950s. The lines between fair use and copyright infringement have already been blurred by mash-ups that modify music samples until their identities become unrecognizable. Similarly, the lines between human-created and AI-created art will blur as the years progress. The law needs to be ready to address these issues.

But, as the character Willie Stark explains in Robert Penn Warren’s All the King’s Men, “[the law] is like a single-bed blanket on a double bed and three folks in the bed and a cold night . . . There ain’t ever enough blanket to cover the case, no matter how much pulling and hauling, and somebody is always going to catch pneumonia.” Maybe the law’s shortcomings in dealing with AI will always be with us. But they can be mitigated by policy makers who have foresight today as to where technology is heading tomorrow.

Public Domain Versus Work-For-Hire

If Naruto can’t own the copyright to the photos, then they would likely fall into the public domain. However, an argument could be made that art created by animals residing on government-owned reserves or private property should be owned by the reserve or property owner. That is how a work-for-hire operates in the U.S.: while the author is normally the copyright owner, a work-for-hire arrangement vests the copyright in the author’s employer. A similar approach could be taken by those who provide room and board to the likes of Naruto the handsome.

The question remains whether AI-created art is also outside copyright because it was “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Under the Ninth Circuit’s reasoning, the answer would be that all such works are in the public domain. But then the question becomes whether one could make copies of Le Comte de Belamy in the U.S. without worrying about a copyright infringement lawsuit. While several nations, such as the U.K., grant copyright to the person who arranges for the creation of computer-generated works, the U.S. does not.

Either the U.S. follows the U.K.’s lead, or these works will end up in the public domain. That overly rigid approach to what constitutes “intervention from a human author” would produce counterintuitive outcomes for companies like Obvious. And by allowing the owners of AI to own the creative works their systems spawn, U.S. law could conceivably also extend copyright to those who own the property on which the likes of Naruto the handsome reside.

Facebook: “Ad brokers are watching.”

Use Facebook?

If you do, then you’ll likely know about the recent controversy surrounding Cambridge Analytica. But didn’t Facebook know what Cambridge was doing? And didn’t Facebook knowingly and directly share user data with prior political campaigns and other third-party ad brokers?

Even if you don’t use Facebook, it pays to be aware of the privacy pitfalls in the marketplace, for the sake of your friends and family.

To find out more, please watch my brief interview with Fox Business.

Please click HERE.

Blockchain — Not A New Cartier Pearl Necklace.

They say diamonds are a girl’s best friend. They also say a dog is a man’s best friend. But perhaps they are wrong?

Unless you have been living under a rock somewhere, which may not be such a bad idea, you’ve probably heard a great deal about “blockchain.” The thing is, most if not all of the explanations out there about blockchain involve complicated flow charts and confusing technological gibberish.

Want a common-sense explanation you could hear at a Little Red Schoolhouse in the Midwest? Click HERE for my interview by the Nordic Blockchain Association, where I use poetry — yes, that dirty six-letter word — to explain the ins and outs of blockchain.

Blockchain — panacea or bubble?

Blockchain technology is taking the world by storm. From banking to health care, many tout blockchain, and the bitcoin it enables, as a cure-all. Others think bitcoin is headed off a cliff. In between are those who see practical applications for blockchain but caution against an addiction to bitcoin. On February 26th at the University of Copenhagen, I will be giving a presentation entitled “Blockchain technology — good, bad, or somewhere in between?” This entry gives you a sneak preview of that talk.

Many of you have heard about “bitcoin,” but most of you may not realize that blockchain is what enables bitcoin. If you are not a technologist, I believe the best way to understand blockchain is by analogy. Take the poem by Shel Silverstein entitled “Where the Sidewalk Ends.” Below are the first two lines:

There is a place where the sidewalk ends

And before the street begins . . .

Now imagine that you enter a Google Docs session with your best friend to edit the poem. Instead of “ends” in the first line, you put “begins.” Your friend replaces “before” in the second line with “after” and puts “ends” in place of “begins.” These changes are all recorded in Google Docs, and both of you can see them. Now envision that, instead of just you and your friend, a community of millions around the world is making changes to the poem. This is what is commonly known in computer software as an “open source” network.

Blockchain enables bitcoin to work in a similar way. You and your friend make a transaction with bitcoin. The transaction is represented by an idiosyncratic number known as a “cryptographic hash.” It is then placed in a block and added to the chain of other blocks. The blockchain runs sequentially: your transaction’s hash “777XYZ . . . 1” will be inserted into the next block, and that block will follow the block holding hash “777XYZ . . . 0.” This is akin to each word you put into the poem above (“begins,” “after,” “ends”) being represented by a hash. For the poem to rhyme, each word must fit. Similarly, any inserted block that doesn’t fit will change the hashes of every subsequent block, making the chain resistant to tampering.
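To make the analogy concrete, here is a minimal sketch in Python. It is a toy illustration of hash chaining under simplified assumptions, not Bitcoin’s actual protocol: each block commits to its predecessor’s hash, so tampering with any block invalidates everything that follows it.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Derive a block's hash from its predecessor's hash plus its own data."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(transactions):
    """Chain each block to the one before it, starting from a genesis value."""
    chain, prev_hash = [], "0" * 64
    for tx in transactions:
        h = block_hash(prev_hash, tx)
        chain.append({"data": tx, "prev": prev_hash, "hash": h})
        prev_hash = h
    return chain

def is_valid(chain) -> bool:
    """Recompute every hash; any edited block breaks the links after it."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev"] != prev_hash or block["hash"] != block_hash(prev_hash, block["data"]):
            return False
        prev_hash = block["hash"]
    return True

chain = build_chain(["begins", "after", "ends"])  # the poem-edit analogy
assert is_valid(chain)
chain[1]["data"] = "before"   # tamper with one block...
assert not is_valid(chain)    # ...and the whole chain fails verification
```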

There are many benefits to blockchain. One is that your identity can be represented by a cryptographic hash or signature, protecting you from hackers. Once your identity is verified, your personal information can be erased; in some ways, you become a numerical avatar. The benefit is that hackers can’t get to your personal information, such as your address, because that information isn’t stored.
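A hedged sketch of that “numerical avatar” idea follows (a toy using assumed field names, not any specific blockchain’s identity scheme): only a one-way hash is retained, so there is no name or address on file to steal.

```python
import hashlib
import os

def numerical_avatar(name: str, address: str) -> str:
    """Return a one-way digest standing in for an identity."""
    salt = os.urandom(16)  # random salt so identical identities yield distinct avatars
    digest = hashlib.sha256(salt + f"{name}|{address}".encode()).hexdigest()
    # Only the digest would be stored; name, address, and salt are discarded.
    return digest

print(numerical_avatar("Alice Example", "10 Main Street"))
# Real systems rely on public-key signatures for verification; this sketch
# shows only the one-way, nothing-personal-stored property described above.
```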

But bitcoin is likely another story. For one, there is no central regulator, given its open-source nature. This means the value attributed to bitcoin is arbitrary, and there is no floor beneath which it cannot fall. Such decentralization can cause it to crash without warning: bitcoin valuations can be pure fantasy, and nobody would know until it’s too late.

Blockchain technology is likely here to stay, since it can be used to protect identities, secure financial transactions, and even safeguard sensitive national secrets. But bitcoin may be a bubble in the making.

Popularity doesn’t equal truth

Popularity doesn’t equal truth. And yet Facebook’s recent proposal to rank the trustworthiness of news sources based on popularity loosely equates the two. In so doing, Facebook may be putting form over function.

During the housing crisis, numerous mortgage-backed securities were rated “AAA” by agencies like Moody’s, and those ratings were immensely popular. Little did many in the market know that the agencies received their fees from the very banks that were underwriting, or brokering, the mortgage-backed securities. As can be seen in movies like The Big Short, or in the financial injuries of the many who lost a great deal during the crisis, the securities in question were, in fact, junk. As a result, many of the rating agencies were sued over their ratings in class action lawsuits. This bubble and the resulting financial carnage weren’t new. During the Dutch tulip bulb bubble of the 1630s, prices for tulips ran as high as six times a person’s annual salary. Prices then crashed back to their pre-craze levels.

While Facebook currently has little legal exposure for infringing materials or defamatory news posted on its network, the new approach may change that. As a conduit of news rather than a publisher of it, Facebook normally takes an impartial stance toward the items you post. By inserting an algorithm that treats the more popular news sources as the more reliable ones, Facebook becomes less an impartial umpire and more a participant in deciding what is true and what isn’t. It would be akin to determining which posted works are infringing, and which are fair use, by consensus rather than by legal analysis.

If Facebook and sites of its ilk really want to combat “fake news,” they may want to consider spot-auditing news sources. Like an IRS audit, Facebook would vet a source’s story for factual veracity by comparing what it says to primary materials (e-mails, written testimony, or other objectively verifiable information) rather than leaving the question to popularity. As a 2016 Gallup poll found, only 32% of Americans trust the media to “report the news accurately and fairly,” the lowest figure in the poll’s history; popularity is hardly the best benchmark for reliability.
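For illustration only, here is a rough sketch of such spot auditing; the check_against_primary_sources helper is a hypothetical placeholder for comparing a story’s claims to e-mails, testimony, or other verifiable records, not a real Facebook API.

```python
import random

def check_against_primary_sources(story: str) -> bool:
    """Placeholder: a real check would compare claims to primary materials."""
    return "unverified" not in story

def spot_audit(stories: list, sample_rate: float = 0.1) -> float:
    """Randomly sample a source's stories and return the share that check out."""
    sample_size = max(1, int(len(stories) * sample_rate))
    sample = random.sample(stories, sample_size)
    passed = sum(check_against_primary_sources(s) for s in sample)
    return passed / sample_size

stories = [f"story {i}" for i in range(49)] + ["unverified claim"]
print(f"audited accuracy: {spot_audit(stories):.0%}")  # reliability independent of popularity
```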

And so Facebook’s policy may put form (popularity) over function (truth). It may be prudent to remember Plato’s warning: “no one is more hated than he who speaks the truth.” Perhaps the same can be said of unpopular but accurate or balanced news.

Want to stay out of jail? Read this.

“Stop!” says the police officer. Do you need to stop? And when the officer wants to frisk you, must you let him or her do it? While much has been written in the press recently about “stop and frisk,” the constitutional rules of the road are rarely covered. This entry provides a short primer.

Recently, I had the privilege of defending RC, a prominent Alabama artist whose works appear at shops like Billy Reid on Bond Street in Manhattan, against a misdemeanor graffiti charge, among other things. Thankfully, I was able to get the charges reduced to a violation, which is not a crime. How did I do it? By ensuring that his Fourth Amendment rights were protected.

The Fourth Amendment prohibits unreasonable searches and seizures by the police. Generally, the police must obtain a warrant to search any area in which you have a reasonable expectation of privacy, such as your messenger bag, jean pockets, or purse. If the police directly or indirectly search such an area without a warrant, they violate your Fourth Amendment rights, and any evidence so obtained generally can’t be used against you.

However, certain exceptions allow the police to search or seize you without a warrant. One is “plain view”: for example, an officer observes illegal graffiti materials peeking out of your backpack. Another is “hot pursuit”: officers see you spray-painting a building in Chelsea and then sprinting from the scene. In both cases, the police have a right to frisk you for contraband, particularly after an arrest.

To stop you on the street, the police need only a reasonable suspicion that you are involved in criminal activity. To frisk you, the standard is higher: they must have a reasonable suspicion that you are “armed and dangerous.” If one of the exceptions above applies, however, no such suspicion is needed. Barring that, the police cannot search areas of your person, such as your messenger bag, pockets, or purse, unless you are reasonably suspected of being “armed and dangerous.”

So the next time you are stopped by the police and have arguably broken some law, remember these general parameters. They can help protect your rights, and potentially keep you from going to jail.

Net Neutrality — Privacy Silver Bullet, or Can of Worms?

When FCC Chairman Ajit Pai announced last week that he would eliminate the “fair play” rules known as Net neutrality, he took a step that some economists and technologists worry will eventually lead to the monopolization of Internet services in America. What, if any, impact would the elimination of Net neutrality rules have on consumer privacy? The answer, in short, is that consumers would simply be forced to pay more for it. Before I explain why, let’s get on the same page about what Net neutrality means.

Net neutrality rules currently require Internet service providers to treat all content equally, with regard to quality and throughput, regardless of its size, shape, origin, or destination. In economic terms, the rules prohibit ISPs from creating premium classes of service, or “fast lanes.” In so doing, they treat ISPs as publicly regulated utilities.

They also benefit fledgling innovation. If a startup providing a service like end-to-end encryption needed to pay a “fast lane” premium to adequately serve its customers, it might not be able to invest adequately in its product, or reach any customers at all. With Net neutrality rules, a nascent business faces the same barriers to reaching potential customers as entrenched technology titans such as Google and Facebook.

Under Net neutrality’s one-size-fits-all approach, companies ostensibly requiring more bandwidth for more complex content can’t pay for preferential ISP treatment. That doesn’t directly impact privacy. But in the long term, it could: profits that would otherwise flow to ISPs, but are unavailable under Net neutrality, can’t be reinvested in more effective, and potentially less expensive, encryption methods. The benefits of such research and development can be seen in other industries, including pharmaceuticals.

One stipulation of the Net neutrality rules is that carriers must “protect the confidentiality of [consumers’] proprietary information” from unauthorized use and disclosure. Whether ISPs would uphold such privacy standards absent a legal requirement would likely correspond with their competitive landscape: More competition for a certain level of service might mean more consumer pressure to provide privacy protections, and vice versa.

With less competition, ISPs likely need more regulation to ensure that they adequately protect consumer privacy. Deregulation would result in privacy becoming more of a luxury than a right. Consumers, for example, might need to pay a premium for a level of Internet access that doesn’t throttle high-speed encrypted communications. At a cheaper, throttled level, they would have fewer and lower-quality choices for apps and services.

Whether Net neutrality’s privacy benefits are outweighed by its concomitant privacy costs is another question.

The Open Internet Order of 2015 requires ISPs to comply with the Communications Assistance for Law Enforcement Act (CALEA). Under CALEA, telecommunications carriers must build their networks so that, when presented with a warrant, they can give the government a back door for surveillance purposes.

This coupling enables courts under the Foreign Intelligence Surveillance Act (FISA) to issue warrants to tap U.S. citizens’ communications devices, all without counsel to speak on the citizens’ behalf. In the first 33 years of the FISA court’s existence, judges denied only 11 requests, a staggering 99.97 percent approval rate, according to the Stanford Law Review.
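As a back-of-envelope check on those figures (assuming the 11 denials and the 99.97 percent approval rate as reported), the implied volume of applications works out as follows:

```python
# Implied number of FISA applications over 33 years, given 11 denials and a
# 99.97% approval rate. A rough consistency check, not an official count.
denials = 11
approval_rate = 0.9997
total_applications = denials / (1 - approval_rate)
print(round(total_applications))  # roughly 37,000 applications
```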

Nothing in CALEA, or in any Net neutrality rule, mandates the use of specific technology, such as encryption, to protect consumer information. The $33 million judgment levied against Comcast for unintentionally listing phone numbers it had promised to keep private wasn’t the result of breaching any specific federal provision mandating particular encryption methods.

Backdoor-access provisions already neutralize the consumer privacy benefits of Net neutrality laws. To think otherwise is to naively exchange the potentially prying private eyes of corporate America, which can’t imprison you, for those of a government that can.

Fair use in the digital house of mirrors

In today’s highly digitized world, copyright infringement actions, among others, are often brought against alleged infringers using information culled from Internet Protocol (“IP”) addresses. While fair use defenses may exist against such suits, particularly for music mash-ups, a preliminary question is whether the initial source evidence is accurate.

There exist technologies by which users can mask themselves behind other users’ IP addresses. In this way, one can be located in Timbuktu, for example, while appearing to use the IP address of a user at the North Pole. Through such masking, some users seek to avoid infringement lawsuits by operating under another user’s address, in essence leaving that user holding the hot infringement potato.
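For a hedged, client-side sketch of such masking, consider routing traffic through a proxy so the destination server logs the proxy’s IP address rather than the sender’s. The proxy address below is a documentation-range placeholder, not a real service.

```python
import requests

# Route requests through a (placeholder) proxy; the server sees the proxy's IP.
proxies = {
    "http": "http://203.0.113.7:8080",
    "https": "http://203.0.113.7:8080",
}

# httpbin.org/ip echoes back the IP address the server observes; through the
# proxy it would report 203.0.113.7, not the requester's true address.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())
```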

In prosecuting civil actions for unlawful downloads of Microsoft software, for example, it becomes imperative to understand such masking methods and their limits. Prima facie evidence of the source of the infringement, while good enough for the initial stages of litigation, may evaporate upon further investigation. Indeed, a case brought without sufficient evidence of the source can, upon documentary proof that the user wasn’t responsible for the download (browser history evidence, for instance), lead to a motion for sanctions against plaintiff’s counsel for bringing a frivolous case.

Even with solid evidence as to the source, due attention must be paid to the transformative nature of the use. In a digital music mash-up, for example, a sample from a Bob Dylan recording can be modified and blended into a new piece so thoroughly that the original becomes impossible to recognize. In that case, the defendant likely has a bona fide fair use defense even when the attribution of the source is correct. Thus, in prosecuting a copyright infringement action, proper steps must be taken at the outset so that a sustainable case can be made.

Leaks, geeks, & reporters

The recent spate of Washington, D.C. leaks is “unusually active,” according to FBI Director James Comey. But even if the leaks are as normal as those from an allergic nose facing New Orleans spring pollen, what are the legal and ethical issues in leaking such confidential information, in unknowingly reverse engineering it, or in publishing the leaks?

Generally speaking, liability for the leaker inside the government is clear. Numerous federal laws apply to confidential information circulated within the labyrinth of the federal government, and they generally hold such leakers criminally liable for the willful, and sometimes even negligent, disclosure (or even handling) of such information, including the identity of a Central Intelligence Agency (“CIA”) “covert agent” or President Trump’s tax returns.

However, what about geeks who reverse engineer publicly available information and then end up discovering government secrets and/or strategies through such analysis?

Imagine a modern-day Matthew Broderick from WarGames (1983) who correctly intuits a covert government strategy to liquidate foreign ambassadors or heads of state via proxies, and who then warns how such molehill practices have, on prior occasions, caused mountains of problems. In the 13th century, the Khwarazmian shah killed Genghis Khan’s chief envoy and had the beards of the others burned so they would travel back to him humiliated; Genghis Khan responded by annihilating the Khwarazmian Empire, and a generation later his grandson Hulagu massacred most of the 200,000 to 1,000,000 inhabitants of Baghdad, then the “House of Wisdom” of Islam’s Golden Age, in about a week. Or take the assassination of Archduke Franz Ferdinand of Austria and his wife in 1914. Many believe that killing led to the start of World War I, in which nine million combatants and seven million civilians died.

In such instances of geeky reverse engineering of covert government strategies, criminal liability will generally be lacking, because such a geek has no contractual or statutory duty to keep quiet, and his speech about an issue of grave public concern (potentially preventing a global conflict) would be protected by the First Amendment. Even so, would-be geeks are well advised to consider reprisals from said officials, whether via Nixon-style IRS audits or otherwise, and how to protect themselves. (Genghis Khan 2.0 protection is one way.)

That’s the leaker and the geek. What about the reporter?

The law in this area is murkier. While there are federal statutes that some argue would impose criminal liability on a reporter for publishing confidential information, such as a Department of Defense (“DOD”) plan to defeat ISIS, prosecutions have been rare, and the First Amendment generally protects the publication of such intelligence. However, where the reporter and the leaker work in concert (think offer and acceptance) to violate federal law, a conspiracy case can be brought against the reporter. What is more, proceedings have been brought to compel reporters to reveal their confidential sources, as happened to Judith Miller of The New York Times when she refused to identify the source of information leading to the unmasking of a CIA covert agent, as many say occurred with Valerie Plame during President George W. Bush’s tenure.

But even if the media professional faces no legal liability, there is the question of unintended consequences. Take, for example, a DOD strategy to replace ISIS with “new sheriff in town” Eddie Murphy. Assume a person within President Trump’s DOD or CIA who dislikes the President, and/or his political agenda, leaks the details of the “Murphy Plan” to an unwitting New York Times reporter. The reporter is likely protected in publishing the plan, but should it be published? Asymmetrical information is the key to effective conflict, whether in the courtroom, on the battlefield, or in a chess match. Disclosing such a plan, especially if it is already being carried out, would risk the lives of military personnel and/or threaten the security of major cities like New York, Boston, Chicago, and Los Angeles.

Any professional working in media would be well served to consider not only the legalities of reporting leaked information, but also such unintended yet foreseeable blowback.