
The Trouble with Spokeo: Standing, Privacy Harms, and Biometric Information

Daniel Solove
Founder of TeachPrivacy


A recent case involving the Illinois Biometric Information Privacy Act (BIPA), Rivera v. Google (N.D. Ill. No. 16 C 02714, Dec. 28, 2018), puts the ills of Spokeo, Inc. v. Robins on full display. In Rivera, plaintiffs sued Google under BIPA, which prohibits companies from collecting and storing specific types of biometric data without people's consent. The plaintiffs alleged that Google collected and used their face-geometry scans through Google Photos without their consent. Google's face recognition feature is on by default unless users opt out. Instead of addressing the merits of the plaintiffs' lawsuit under BIPA, the court dismissed the case for lack of standing based on Spokeo, a fairly recent U.S. Supreme Court decision on standing.

Spokeo is a terrible decision by the U.S. Supreme Court.  It purports to be an attempt to clarify the test for standing to sue in federal court, but it flunks on clarity and coherence.  I previously wrote an extensive critique of Spokeo when the decision came out in 2016.

Beyond Spokeo's incoherent mess, there is another part of the opinion that is far worse — Spokeo authorizes courts to override legislatures in determining whether there's a cognizable privacy harm under a legislature's own statute. This part of Spokeo is a major usurpation of legislative power — it undermines a legislature's determination about the proper remedies for violations of its own laws.


Risk and Anxiety: A Theory of Data Breach Harms



My new article was just published: Risk and Anxiety: A Theory of Data Breach Harms, 96 Texas Law Review 737 (2018). I co-authored the piece with Professor Danielle Keats Citron. We argue that the issue of harm needs a serious rethinking. Courts are too quick to conclude that data breaches don't create harm. There are two key dimensions to data breach harm — risk and anxiety — and courts have struggled with both.

Many courts find that anything involving risk is too difficult to measure and not concrete enough to constitute actual injury. Yet, outside of the world of the judiciary, other fields and industries have recognized risk as something concrete. Today, risk is readily quantified, addressed, and factored into countless decisions of great importance. As we note in the article: “Ironically, the very companies being sued for data breaches make high-stakes decisions about cyber security based upon an analysis of risk.” Despite the challenges of addressing risk, courts in other areas of law have done just that. These bodies of law are oddly ignored in data breach cases.

When it comes to anxiety — the emotional distress people might feel because of a breach — courts often dismiss it out of hand, reasoning that emotional distress alone is too vague and too difficult to prove to be recognized as harm. Yet in other areas of law, emotional distress alone is sufficient to establish harm. In many of those areas, this is so well settled that harm is rarely even an issue in dispute.

We aim to provide greater coherence to this troubled body of law. We work through a series of examples — various types of data breaches — and discuss whether harm should be recognized in each. We don't think harm should be recognized in all instances, but there are many situations where we would find harm where the majority of courts today would not.

The article can be downloaded for free on SSRN.

Here’s the abstract:
