A recent case involving the Illinois Biometric Information Privacy Act (BIPA), Rivera v. Google (N.D. Ill. No. 16 C 02714, Dec. 28, 2018), puts the ills of Spokeo, Inc. v. Robins on full display. In Rivera, plaintiffs sued Google under BIPA, which prohibits companies from collecting and storing specific types of biometric data without people’s consent. The plaintiffs alleged that Google collected and used their face-geometry scans through Google Photos without their consent. Google’s face-recognition feature is on by default unless users opt out. Instead of addressing the merits of the plaintiffs’ lawsuit under BIPA, the court dismissed the case for lack of standing based on Spokeo, a fairly recent U.S. Supreme Court case on standing.
Spokeo is a terrible decision by the U.S. Supreme Court. It purports to be an attempt to clarify the test for standing to sue in federal court, but it flunks on clarity and coherence. I previously wrote an extensive critique of Spokeo when the decision came out in 2016.
Beyond Spokeo’s incoherent mess, there is another part of the opinion that is far worse — Spokeo authorizes courts to override legislatures in determining whether there’s a cognizable privacy harm under a legislature’s own statute. This part of Spokeo is a major usurpation of legislative power — it undermines a legislature’s determination about the proper remedies for violations of its own laws.
Move over, RoboCop, there’s a new constable in town — the robocall cop. In the past decade, robocalls have surged. There has also been a dramatic rise in litigation about these calls under the Telephone Consumer Protection Act (TCPA). The TCPA litigation is led by a small group of serial litigators, people who have assumed the role of private enforcers of the TCPA. This is a fascinating story about how privacy law combats the growing scourge of robocalls. We are seeing the effective use of private litigation as an enforcement tool, but there are differing interpretations about the virtues of the robocall cops. Also wrapped up in the story is the issue of harm.
Robocalls are rising at an alarming rate. In September 2017 alone, there were 2.4 billion robocalls. The monthly total keeps climbing; September 2018 saw 4.1 billion robocalls. At this rate, there may be billions and billions more robocalls than stars in the universe! Robocalls are definitely a problem. I’ve never heard of anyone who likes robocalls; the mosquito probably ranks higher in popularity. But robocalls persist and proliferate. Annually, in the United States, the number of robocalls exceeds 100 per person. There are 4.5 million robocall complaints per year to the FTC.
Along with the rise of robocalls, litigation has also been increasing. Lawsuits are perhaps a bit more popular than robocalls or mosquitoes, but not by much. The TCPA, 47 U.S.C. § 227, passed in 1991, requires various forms of prior consent for robocalls, which are calls made with what the TCPA refers to as an “automatic telephone dialing system” (ATDS). Violations of the TCPA can be enforced through a private right of action, and there are statutory damages of $500 per violation ($1,500 for willful violations). The number of TCPA lawsuits has skyrocketed, from 14 federal cases in 2007 to 4,392 federal cases in 2017.
Cybersecurity litigation is currently at a crossroads. Courts have struggled in these cases, coming out in wildly inconsistent ways about whether a data breach causes harm. Although the litigation landscape is uncertain, there are some near certainties about cybersecurity generally: There will be many data breaches, and they will be terrible and costly. We thus have seen the rise of cybersecurity insurance to address this emergent and troublesome risk vector.
I am delighted to be interviewing Kimberly Horn, who is the Global Focus Group Leader for Cyber Claims at Beazley. Kim has significant experience in data privacy and cyber security matters, including guiding insureds through immediate and comprehensive responses to data breaches and network intrusions. She also has extensive experience managing class action litigation, regulatory investigations, and PCI negotiations arising out of privacy breaches.
I have a confession to make, one that is difficult to fess up to on the US side of the pond: I love the GDPR.
There, I said it. . .
In the United States, a common refrain about the GDPR is that it is unreasonable, unworkable, an insane piece of legislation that doesn’t understand how the Internet works — a dinosaur romping around in the Digital Age.
But the GDPR isn’t designed to be followed as precisely as one would build a rocket ship. It’s an aspirational law. Although perfect compliance isn’t likely, the practical goal of the GDPR is for organizations to try hard, to get as much of the way there as possible.
The GDPR is the most profound privacy law of our generation. Of course, it’s not perfect, but it has more packed into it than any other privacy law I’ve seen. The GDPR is quite majestic in its scope and ambition. Rather than shy away from tough issues, rather than tiptoe cautiously, the GDPR tackles nearly everything.
Here are 10 reasons why I love the GDPR:
(1) Omnibus and Comprehensive
Unlike the law in the US, which is sectoral (each law focuses on specific economic sectors), the GDPR is omnibus – it sets a baseline of privacy protections for all personal data.
This baseline is important. In the US, protection depends upon not just the type of data but the entities that hold it. For example, HIPAA doesn’t protect all health data, only health data created or maintained by specific types of entities. Health data people share with a health app, for example, might not be protected at all by HIPAA. This is quite confusing to individuals. In the EU, the baseline protections ensure that nothing falls through the cracks.
In In re Zappos.com, Inc., Customer Data Security Breach Litigation (9th Cir., Mar. 8, 2018), the U.S. Court of Appeals for the 9th Circuit issued a decision that represents a more expansive way to understand data security harm. The case arises out of a breach in which hackers stole personal data on more than 24 million individuals. Although some plaintiffs alleged they suffered identity theft as a result of the breach, other plaintiffs did not. The district court held that the plaintiffs who hadn’t yet suffered an identity theft lacked standing.
Standing is a requirement in federal court that plaintiffs must allege that they have suffered an “injury in fact” — an injury that is concrete, particularized, and actual or imminent. If plaintiffs lack standing, their case is dismissed and can’t proceed. For a long time, most litigation arising out of data breaches was dismissed for lack of standing because courts held that plaintiffs whose data was compromised in a breach didn’t suffer any harm. A key case was Clapper v. Amnesty International USA, 568 U.S. 398 (2013). In that case, the Supreme Court held that the plaintiffs couldn’t prove for certain that they were under surveillance. The Court concluded that the plaintiffs were merely speculating about future possible harm.
Early on, most courts rejected standing in data breach cases. A few courts resisted this trend, including the 9th Circuit in Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010). There, the court held that an increased future risk of harm could be sufficient to establish standing.