Google Privacy. Oxymoron?

If you use Google, have you ever read their privacy policy? If you haven’t, please keep reading. The policy delineates under what circumstances Google can peruse your information, including your e-mails, for disclosure to third parties. While you may not think the policy will ever affect you, your e-mails and other messages on Google could be disclosed by the company under certain circumstances. What are they?

Generally, Google will not disclose your information to third parties without your consent and, in the case of sensitive personal information, without your “explicit consent.” The company’s privacy policy makes clear, however, that it may disclose your information to “affiliates and other trusted businesses or persons,” which is a pretty broad category. Google also makes another exception to “[m]eet any applicable law, regulation, legal process, or enforceable governmental request.” These categories could include, for example, a subpoena sent to Google from a plaintiff in a copyright infringement lawsuit or a court order.

Even when you receive a notice from Google that a subpoena seeks the disclosure of your information or identity, you still have choices. The same is true of a court order. You can move to “quash” the subpoena, arguing that it is overly broad or seeks information irrelevant to the underlying lawsuit. A court order, for its part, can be stayed pending an appeal. Needless to say, if you don’t care about your information being disclosed, you can do nothing. But if the subpoena seeks to unmask you so as to name you in a copyright infringement lawsuit, or otherwise, doing nothing may not be wise. A fair use or other defense may conclusively establish that the lawsuit is a sham.

Rather than using Google — or even social media outlets like Facebook — with your eyes closed, it is probably better to know what you are getting yourself into. Otherwise, you may be unpleasantly surprised one day when you find out what you thought was private isn’t.

FBI v. Apple – Round 2

How important is your iPhone privacy? Does it outweigh law enforcement’s interest in obtaining evidence of crimes such as child pornography production from your iPhone? According to a recent New York Times article, Apple decided to plug a privacy hole in its iPhone through which law enforcement could crawl. The plug was a response to the FBI’s previous end-run around iPhone software. You can read more about FBI v. Apple — Round 1 — here.

As the Times article makes clear, Indiana law enforcement officials used a $15,000 device from Grayshift to unlock 96 iPhones in 2018, each time with a warrant. In Round 1, a magistrate judge essentially ordered Apple to create a back door through the iPhone’s encryption for the FBI to use. No such overreach exists in Round 2: it doesn’t appear the Indiana warrants required Apple to create a back door. As with real property, law enforcement has the right to forcibly enter once it has a warrant.

But Apple’s plug now makes such devices likely obsolete. In so doing, Apple has made it harder for law enforcement to access your iPhone even when there is a warrant. Some district attorneys have argued, as the article points out, that Apple is “blatantly protecting criminal activity.” This view of Apple is black and white: either Apple allows easy third-party access and is good, or it allows none and is bad.

That view ignores a middle road. When law enforcement obtains a warrant to search your bitcoin stored in a Swiss bunker, it will not be able to access it without help. The bunker is not linked to the internet, and while the FBI could physically access it if necessary, the data on the blockchain would be meaningless: all identities are protected by cryptographic hash signatures. One solution would be for the owner of the bunker, your bitcoin landlord, to obtain the needed account information and submit it to the judge for private (“in camera”) review. This solution was proposed in Round 1.

By giving evidence from an iPhone to the judiciary in this way, Apple could proudly assist law enforcement’s prosecution of child pornography, among other crimes, while still plugging privacy holes that would otherwise cost it revenue.

Artificial Intelligence Art: Who Owns It?

If your pet dog Hans takes a selfie, does he own the copyright? A recent decision by the U.S. Court of Appeals for the Ninth Circuit (“Ninth Circuit”) is instructive: a monkey can’t own the copyright to his selfie. The reason? Only humans can own a copyright under U.S. law. But who owns artwork created by artificial intelligence (“AI”)? This entry addresses that issue.

The Ninth Circuit Decision

The Indonesian monkey at the heart of the dispute is named “Naruto.” He is actually quite handsome, as you can see if you look up his profile shot – not on LinkedIn, of course. The story began on the island of Sulawesi, not Fantasy Island but close. David Slater, a British wildlife photographer, left his camera unattended. Naruto then picked up the camera and, harnessing his training at the British Museum School of Art and Design, began taking stunning photos of himself.

Whilst Gentlemen’s Quarterly and other magazines sought to feature him in their publications, Naruto couldn’t be bothered. His images, posted by Mr. Slater, had already gone viral. Naruto retained the services of People for the Ethical Treatment of Animals (“PETA”) to sue Mr. Slater and his publishers for copyright infringement. The Ninth Circuit dismissed the suit because Naruto can’t own the copyright to the photos.

Unfortunately, Naruto couldn’t be reached for comment.

Part of the court’s reasoning was simple. The U.S. Copyright Office “will refuse to register a claim if it determines that a human being did not create the work.” The Office further states that it will exclude works “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” The question raised by the decision is whether computer-generated art is copyrightable and, if so, whether the AI, or its programmer, would be the owner.

AI Art & Blurred Lines

The issue of AI-created artwork isn’t academic. According to a recent article in Artnet News, the Paris-based collector Nicolas Laugero-Lasserre acquired Le Comte de Belamy, a work created by artificial intelligence. Mr. Laugero-Lasserre bought the work directly from Obvious, the collective that created the AI behind Le Comte de Belamy. Instead of a signature, the artwork bears an equation written by the AI. Naruto is jealous.

As AI grows smarter and more evolved, it will be capable of more than just creating art. Think of the AI in WarGames (1983), which can create systems of engagement resembling warfare, and then extrapolate such a system to business. A company like Obvious could create AI that spawns not only art but other companies, chock full of their own versions of Siri. This AI-dominated world is laid out in movies like Her (2013), in which the main character, played by Joaquin Phoenix, forms an intimate relationship with an AI app voiced by Scarlett Johansson. With the proliferation of synthetic body parts, imagining a fully functioning AI cyborg that resembles a human isn’t as far-fetched as it may have sounded in the 1950s. The lines between fair use and copyright infringement have already been blurred by mash-ups that modify music samples until their identities become unrecognizable. Similarly, the lines between human- and AI-created art will blur as the years progress. The law needs to be ready to address these issues.

But, as the character Willie Stark explains in Robert Penn Warren’s All the King’s Men, “[the law] is like a single-bed blanket on a double bed and three folks in the bed and a cold night. . . . There ain’t ever enough blanket to cover the case, no matter how much pulling and hauling, and somebody is always going to catch pneumonia.” Maybe the shortcomings of the law in dealing with AI will always be with us. But they can be mitigated by policymakers who have foresight today as to where technology is heading tomorrow.

Public Domain Versus Work-For-Hire

If Naruto doesn’t own the copyright to the photos, then they are likely in the public domain. However, an argument could be made that art created by animals residing on government-owned reserves or private property is owned by the reserve or property owner. This is how a work-for-hire operates in the U.S.: while the author is normally the copyright owner, a work-for-hire arrangement gives the author’s employer the rights. A similar approach could be taken by those who provide room and board to the likes of Naruto the handsome.

The question remains whether AI-created art is also not subject to copyright because it was “produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Under the Ninth Circuit’s reasoning, all such works would be in the public domain. But then the question becomes whether one could make copies of Le Comte de Belamy in the U.S. without worrying about a copyright infringement lawsuit. While several nations, such as the U.K., grant copyright to the person who arranges for the creation of computer-generated works, the U.S. does not.

Unless the U.S. takes the U.K.’s lead, these works will end up in the public domain. The current, overly rigid approach to what constitutes “intervention from a human author” produces counterintuitive outcomes for companies like Obvious. By allowing owners of AI to own the creative works their systems spawn, U.S. law could also conceivably extend copyright to those who own the property on which the likes of Naruto the handsome reside.

Facebook: “Ad brokers are watching.”

Use Facebook?

If you do, then you likely know about the recent controversy surrounding Cambridge Analytica. But didn’t Facebook know what Cambridge was doing? And didn’t Facebook knowingly share user data directly with prior political campaigns and other third-party ad brokers?

Even if you don’t use Facebook, being aware of the privacy pitfalls that exist in the marketplace is valuable for your friends and family.

To find out more, please watch this brief interview of me by Fox Business.

Please click HERE.

Blockchain — Not A New Cartier Pearl Necklace.

They say diamonds are a woman’s best friend. They also say a dog is a man’s best friend. But perhaps they are wrong?

Unless you have been living under a rock somewhere, which may not be such a bad idea, you’ve probably heard a great deal about “blockchain.” The thing is, most if not all of the explanations out there about blockchain involve complicated flow charts and confusing technological gibberish.

Want a common-sense explanation you could hear at a Little Red Schoolhouse in the Midwest? Click HERE for my interview by the Nordic Blockchain Association, where I use poetry (yes, that dirty six-letter word) to explain the ins and outs of blockchain.

Blockchain — panacea or bubble?

Blockchain technology is taking the world by storm. From banking to health care, many tout blockchain, and the bitcoin it enables, as a cure-all. Others think bitcoin is heading over the edge. In between are those who see practical applications of blockchain but caution against overreliance on bitcoin. On February 26th at the University of Copenhagen, I will be giving a presentation entitled “Blockchain technology — good, bad, or somewhere in between?” This entry gives you a sneak preview of that talk.

Many of you have heard about “bitcoin,” but most of you may not realize that blockchain is what enables bitcoin. If you are not a technologist, I believe the best way to understand blockchain is by analogy. Take the poem by Shel Silverstein entitled “Where the Sidewalk Ends.” Below are its first two lines:

There is a place where the sidewalk ends

And before the street begins . . .

Now imagine that you enter a Google Docs session with your best friend to edit the poem. Instead of “ends” in the first line, you put “begins.” Your friend replaces “before” in the second line with “after” and puts “ends” in place of “begins.” These changes are all recorded in Google Docs, and both of you can see them. Envision, now, that instead of just you and your friend, a community of millions around the world is making changes to the poem. This is what is commonly known in computer software as an “open source” network.

Blockchain enables bitcoin to work in a similar way. Say you and your friend do a transaction in bitcoin. The transaction will be represented by an idiosyncratic number known as a “cryptographic hash.” It will then be placed in a block and added to the chain of other blocks. The blockchain runs sequentially, so your transaction’s hash “777XYZ . . . 1” will be inserted into the next block, and that block will follow the block containing hash “777XYZ . . . 0.” This is akin to each word you put into the poem above (“begins,” “after,” “ends”) being represented by a hash. For the poem to rhyme, each word must fit. Similarly, any inserted block that doesn’t fit will change the hashes of all subsequent blocks, making the chain resistant to tampering.
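The chaining just described can be sketched in a few lines of Python. This is an illustrative toy, not how Bitcoin actually stores transactions (real blocks carry Merkle trees, timestamps, and proof-of-work); the transaction strings and field names here are invented for the example.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's data together with the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain: each block is linked to the one before it.
transactions = [
    "Alice pays Bob 1 BTC",
    "Bob pays Carol 0.5 BTC",
    "Carol pays Dan 0.2 BTC",
]
chain = []
prev = "0" * 64  # genesis placeholder: there is no previous block
for tx in transactions:
    h = block_hash(prev, tx)
    chain.append({"data": tx, "prev_hash": prev, "hash": h})
    prev = h

def verify(blocks) -> bool:
    """Recompute every link; tampering with any block breaks the chain."""
    prev = "0" * 64
    for block in blocks:
        if block["prev_hash"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))                          # True: chain is intact
chain[1]["data"] = "Bob pays Carol 500 BTC"   # tamper with a middle block
print(verify(chain))                          # False: the recomputed hash no longer fits
```

Like a word that breaks the poem’s rhyme, a doctored block no longer matches the hash recorded downstream, so every participant can detect the tampering by recomputation alone.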

There are many benefits to blockchain. One is that your identity can be represented by a cryptographic hash, or signature, protecting you from hackers. Once your identity is verified, your personal information is erased. In some ways, then, you become a numerical avatar. The benefit is that hackers can’t get to personal information such as your address, because that information isn’t stored.
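The “numerical avatar” idea can be illustrated with a minimal sketch, assuming a SHA-256 digest stands in for the identity signature; the names, the salt, and the helper function are invented for the example, and real systems use key pairs rather than a bare hash.

```python
import hashlib

def pseudonymous_id(name: str, address: str, salt: str) -> str:
    """Derive a fixed-length identifier; the raw inputs are never stored."""
    return hashlib.sha256(f"{salt}:{name}:{address}".encode()).hexdigest()

# The network would record only this digest, not the underlying details.
user_id = pseudonymous_id("Jane Doe", "123 Main St", salt="a-random-per-user-salt")
print(len(user_id))  # 64 hex characters, regardless of input length
```

Because the hash is one-way, a hacker who obtains the ledger sees only the avatar, not the name or address behind it; the salt keeps identical inputs from producing guessable digests.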

But bitcoin is likely another story. For one, there is no centralized regulator, given its open-source nature. This means that the value attributed to bitcoin is arbitrary, with no floor below which it cannot go. Such decentralization and lack of a central regulator can cause it to crash without warning, since bitcoin transactions can be entirely fantastical and nobody would know until it’s too late.

Blockchain technology is likely here to stay, since it can be used to protect identities and to encrypt financial transactions, or even sensitive national secrets. But bitcoin may be a bubble in the making.

Popularity doesn’t equal truth

Popularity doesn’t equal truth. And yet Facebook’s recent proposal to rank the trustworthiness of news sources by popularity loosely equates the two. In so doing, Facebook may be putting form over function.

During the housing crisis, numerous mortgage-backed securities were rated “AAA.” These ratings, issued by agencies like Moody’s, were immensely popular. Little did many in the market know that the agencies were paid by the very banks that were underwriting, or brokering, the mortgage-backed securities. As can be seen in movies like The Big Short, or in the losses suffered by so many during the crisis, the securities in question were, in fact, junk. As a result, many of the credit rating agencies were sued over their ratings in class action lawsuits. This bubble and the resulting financial carnage weren’t new. During the “Dutch tulip bulb bubble” of the early 1600s, prices for tulips reached as much as six times a person’s salary, then crashed back to their pre-craze levels.

While Facebook has been less likely to face legal exposure for infringing materials or defamatory news posted on the network, its new approach may change that. As a conduit of news rather than a publisher of it, Facebook normally takes an impartial approach to the items you post. By inserting an algorithm that deems the more popular news sources the more reliable ones, Facebook becomes less an impartial umpire and more a participant in deciding what is true, and what isn’t. It would be akin to determining which posted works are infringing and which are fair use based on consensus rather than legal analysis.

If Facebook and sites of its ilk really want to combat “fake news,” they may want to consider spot-auditing news sources. Like an IRS audit, Facebook would vet a source’s news story for factual veracity by comparing what is said against primary materials, such as e-mails, written testimony, or other objectively verifiable information, rather than just leaving it up to popularity. As a 2016 Gallup poll shows, in which only 32% of Americans said they trust the media to “report the news accurately and fairly,” the lowest figure in the poll’s history, popularity isn’t the best benchmark for reliability.

And so Facebook’s policy may be putting form (popularity) over function (truth). It may be prudent to remember Plato’s warning: “No one is more hated than he who speaks the truth.” Perhaps the same can be said of unpopular but accurate or balanced news.

Want to stay out of jail? Read this.

“Stop!” says the police officer. Do you need to stop? And when the officer wants to frisk you, must you let him or her do it? While much has been written in the press recently about “stop and frisk,” the constitutional rules of the road are rarely covered. This entry provides a short primer.

Recently, I had the privilege of defending RC, a prominent Alabama artist whose works appear at shops like Billy Reid on Bond Street in Manhattan, against a misdemeanor graffiti charge, among other things. Thankfully, I was able to get the charges reduced to a violation, which is not a crime. How did I do it? By ensuring that his Fourth Amendment rights were protected.

The Fourth Amendment prohibits unreasonable searches and seizures by the police. Generally, the police need a warrant to search any area in which you have a reasonable expectation of privacy, such as your messenger bag, jean pockets, or purse. If the police directly or indirectly search such an area without a warrant, they are violating your Fourth Amendment rights, and any evidence so obtained generally can’t be used against you.

However, certain exceptions allow the police to search or seize you without a warrant. One is “plain view”: for example, a New York City police officer observes illegal graffiti materials peeking out of your backpack. Another is “hot pursuit”: officers see you spray-painting a building in Chelsea and then sprinting from the scene. In both cases, the police have a right to frisk you for contraband, particularly after an arrest.

To stop you on the street, the police need only a reasonable suspicion that you are involved in criminal activity. To frisk you, the standard is higher: the police must have a reasonable suspicion that you are “armed and dangerous.” If one of the exceptions above applies, however, no such suspicion is needed. Barring that, the police cannot search areas of your person, such as your messenger bag, pockets, or purse, unless you are considered “armed and dangerous.”

So the next time you are stopped by the police and have arguably broken some law, remember these general parameters. They can help protect your rights, and potentially keep you from going to jail.

Net Neutrality — Privacy Silver Bullet, or Can of Worms?

When FCC Chairman Ajit Pai announced last week that he would eliminate the “fair play” rules known as Net neutrality, he took a step that some economists and technologists worry will eventually lead to the monopolization of Internet services in America. What, if any, impact would the elimination of Net neutrality rules have on consumer privacy? The answer, in short, is that consumers would simply be forced to pay more for it. Before I explain why, let’s get on the same page about what Net neutrality means.

Net neutrality rules currently require Internet service providers to treat all content equally, with regard to quality and throughput, regardless of its size, shape, origin, or destination. In economic terms, the rules prohibit ISPs from creating premium classes of service, or “fast lanes.” In so doing, they treat ISPs as publicly regulated utilities.

They also benefit fledgling innovation. If a startup providing a service like end-to-end encryption needed to pay a “fast lane” premium to adequately serve its customers, it might not be able to adequately invest in its product, or reach any customers at all. But with Net neutrality rules, a nascent business faces the same barriers to reaching potential customers as entrenched technology titans such as Google and Facebook.

With Net neutrality’s one-size-fits-all approach, companies ostensibly requiring more bandwidth for more complex content aren’t able to pay for preferential ISP treatment. That doesn’t directly impact privacy, but in the long term it could. Profits otherwise available to ISPs, but unavailable under Net neutrality, would not be reinvested to create more effective, and potentially less expensive, encryption methods. The benefits of such research and development can be seen in other industries, including pharmaceuticals.

One stipulation of the Net neutrality rules is that carriers must “protect the confidentiality of [consumers’] proprietary information” from unauthorized use and disclosure. Whether ISPs would uphold such privacy standards absent a legal requirement would likely correspond with their competitive landscape: More competition for a certain level of service might mean more consumer pressure to provide privacy protections, and vice versa.

With less competition, ISPs likely need more regulation to ensure that they adequately protect consumer privacy. Deregulation would result in privacy becoming more of a luxury than a right. Consumers, for example, might need to pay a premium for a level of Internet access that doesn’t throttle high-speed encrypted communications. At a cheaper, throttled level, they would have fewer and lower-quality choices for apps and services.

Whether Net neutrality’s privacy benefits are outweighed by its concomitant privacy costs is another question.

The Open Internet Order of 2015 requires ISPs to comply with the Communications Assistance for Law Enforcement Act (CALEA). Under CALEA, telecommunications carriers must build their networks so that they can give the government a backdoor for surveillance purposes when presented with a warrant.

This coupling enables courts under the Foreign Intelligence Surveillance Act to issue warrants to tap U.S. citizens’ communications devices, all without counsel to speak on citizens’ behalf. In the first 33 years of the FISA court’s existence, judges denied only 11 requests, a staggering 99.97 percent rate of approval, according to the Stanford Law Review.

There is also nothing in CALEA, nor in any Net neutrality rule, that mandates the use of specific technology, such as encryption, to protect consumer information. The $33 million judgment levied against Comcast for unintentionally listing phone numbers it had promised to keep private wasn’t the result of breaching any federal provision mandating specific encryption methods.

Backdoor-access provisions already neutralize the consumer privacy benefits of Net neutrality laws. To think otherwise is to naively exchange the potentially prying private eyes of corporate America, which can’t imprison you, for those that can.

Fair use in the digital house of mirrors

In today’s highly digitized world, copyright infringement actions, among others, are often brought against alleged infringers using information culled from Internet Protocol (IP) addresses. While fair use defenses may exist against such suits, particularly when one is doing a music mash-up, a preliminary question is whether the initial source evidence is accurate.

Technologies exist that let users mask themselves behind other users’ IP addresses. In this way, one can be located in Timbuktu, for example, and use the IP address of a user at the North Pole. By such masking, some users seek to avoid infringement lawsuits by using another user’s address, in essence leaving that user holding the hot infringement potato.

In prosecuting civil actions for unlawful downloads of Microsoft software, for example, it becomes imperative to understand such masking methods and their limits. Prima facie evidence of the source of the infringement, while good for the initial stages of litigation, may evaporate upon further investigation. In some cases, a suit brought without sufficient evidence of the source can, upon written documentary notice that the user wasn’t responsible for the download, such as browser history evidence, lead to a motion for sanctions against plaintiff’s counsel for bringing a frivolous case.

Even with such evidence as to source, due attention must be paid to the transformative nature of the use. In digital music mash-ups, for example, a sample from a Bob Dylan recording can be modified and blended into a new piece so that the old version becomes impossible to recognize. In that case, the defendant likely has a bona fide fair use defense even when the attribution of the source is correct. Thus, in prosecuting a copyright infringement action, proper steps need to be taken at the outset so that a sustainable case can be made.