Cybersecurity litigation is currently at a crossroads. Courts have struggled in these cases, reaching wildly inconsistent conclusions about whether a data breach causes harm. Although the litigation landscape is uncertain, there are some near certainties about cybersecurity generally: there will be many data breaches, and they will be terrible and costly. We have thus seen the rise of cybersecurity insurance to address this emergent and troublesome risk vector.
I am delighted to be interviewing Kimberly Horn, who is the Global Focus Group Leader for Cyber Claims at Beazley. Kim has significant experience in data privacy and cybersecurity matters, including guiding insureds through immediate and comprehensive responses to data breaches and network intrusions. She also has extensive experience managing class action litigation, regulatory investigations, and PCI negotiations arising out of privacy breaches.
I have a confession to make, one that is difficult to fess up to on the US side of the pond: I love the GDPR.
There, I said it . . .
In the United States, a common refrain about the GDPR is that it is unreasonable and unworkable: an insane piece of legislation that doesn’t understand how the Internet works, a dinosaur romping around in the Digital Age.
But the GDPR isn’t designed to be followed as precisely as one would build a rocket ship. It’s an aspirational law. Although perfect compliance isn’t likely, the practical goal of the GDPR is for organizations to try hard, to get as much of the way there as possible.
The GDPR is the most profound privacy law of our generation. Of course, it’s not perfect, but it has more packed into it than any other privacy law I’ve seen. The GDPR is quite majestic in its scope and ambition. Rather than shy away from tough issues, rather than tiptoe cautiously, the GDPR tackles nearly everything.
Here are 10 reasons why I love the GDPR:
(1) Omnibus and Comprehensive
Unlike the law in the US, which is sectoral (each law focuses on specific economic sectors), the GDPR is omnibus – it sets a baseline of privacy protections for all personal data.
This baseline is important. In the US, protection depends upon not just the type of data but the entities that hold it. For example, HIPAA doesn’t protect all health data, only health data created or maintained by specific types of entities. Health data people share with a health app, for example, might not be protected at all by HIPAA. This is quite confusing to individuals. In the EU, the baseline protections ensure that nothing falls through the cracks.
In In re Zappos.com, Inc., Customer Data Security Breach Litigation (9th Cir. Mar. 8, 2018), the U.S. Court of Appeals for the 9th Circuit issued a decision that represents a more expansive way to understand data security harm. The case arose out of a breach in which hackers stole personal data on more than 24 million individuals. Although some plaintiffs alleged that they suffered identity theft as a result of the breach, other plaintiffs did not. The district court held that the plaintiffs who hadn’t yet suffered an identity theft lacked standing.
Standing is a requirement in federal court that plaintiffs allege that they have suffered an “injury in fact” — an injury that is concrete, particularized, and actual or imminent. If plaintiffs lack standing, their case is dismissed and can’t proceed. For a long time, most litigation arising out of data breaches was dismissed for lack of standing because courts held that plaintiffs whose data was compromised in a breach didn’t suffer any harm. Courts often relied on Clapper v. Amnesty International USA, 568 U.S. 398 (2013), in which the Supreme Court held that the plaintiffs couldn’t prove for certain that they were under surveillance and concluded that they were merely speculating about future possible harm.
Early on, most courts rejected standing in data breach cases. A few courts resisted this trend, including the 9th Circuit in Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010). There, the court held that an increased future risk of harm could be sufficient to establish standing.
My new article was just published: Risk and Anxiety: A Theory of Data Breach Harms, 96 Texas Law Review 737 (2018). I co-authored the piece with Professor Danielle Keats Citron. We argue that the issue of harm needs a serious rethinking. Courts are too quick to conclude that data breaches don’t create harm. There are two key dimensions to data breach harm — risk and anxiety — both of which courts have struggled with.
Many courts find that anything involving risk is too difficult to measure and not concrete enough to constitute actual injury. Yet outside the judiciary, other fields and industries have long recognized risk as something concrete. Today, risk is readily quantified, addressed, and factored into countless decisions of great importance. As we note in the article: “Ironically, the very companies being sued for data breaches make high-stakes decisions about cyber security based upon an analysis of risk.” Despite the challenges of addressing risk, courts in other areas of law have done just that. These bodies of law are oddly ignored in data breach cases.
When it comes to anxiety — the emotional distress people might feel as a result of a breach — courts often dismiss it quickly, reasoning that emotional distress alone is too vague and too difficult to prove to be recognized as harm. Yet in other areas of law, emotional distress alone is sufficient to establish harm. In some of these areas, this is so well-settled that harm is rarely an issue in dispute.
We aim to provide greater coherence to this troubled body of law. We work our way through a series of examples — various types of data breaches — and discuss whether harm should be recognized. We don’t think harm should be recognized in all instances, but there are many situations where we would find harm where the majority of courts today would not.
The article can be downloaded for free on SSRN.
In this post, I provide a brief overview of my scholarship last year.
I co-authored Risk and Anxiety: A Theory of Data Breach Harms with Professor Danielle Keats Citron. The piece is forthcoming in Texas Law Review this year. Even though there continues to be a steady flow of data breaches, there remains significant confusion in the courts around the issue of harm. Courts struggle with data breach harms because they are intangible, risk-oriented, and diffuse. Professor Citron and I argue: “Despite the intangible nature of these injuries, data breaches inflict real compensable injuries. Data breaches raise significant public concern and legislative activity. Would all this concern and activity exist if there were no harm? Why would more than 90% of the states pass data-breach notification laws in the past decade if breaches did not cause harm?” We provide examples of different types of data breaches and discuss whether harm should be recognized. We argue that there are many instances where we would find harm that the majority of courts today would not.
Download Risk and Anxiety: A Theory of Data Breach Harms for free.
Harm has become the key issue in data breach cases. During the past 20 years, there have been hundreds of lawsuits over data breaches. In many cases, the plaintiffs have evidence to establish that reasonable care wasn’t used to protect their data. But the cases have often been dismissed because courts conclude that the plaintiffs have not suffered harm as a result of the breach. Some courts are beginning to recognize harm, leading to significant inconsistency and uncertainty in this body of law.
When is a person harmed by a privacy violation?
The U.S. Supreme Court just handed down a decision in an important case, Spokeo, Inc. v. Robins.
Plaintiff Thomas Robins sued Spokeo under the Fair Credit Reporting Act (FCRA) because Spokeo’s profile of him contained inaccurate information. Spokeo’s profiles are used by potential employers and others to search for data about people. The FCRA requires that information in profiles used for these purposes be accurate, and it allows people to sue when it is not.
I am pleased to announce that Alan Westin’s classic work, Privacy and Freedom, is now back in print. Originally published in 1967, Privacy and Freedom had an enormous influence in shaping the discourse on privacy in the 1970s and beyond, when the Fair Information Practice Principles (FIPPs) were developed.
The book contains a short introduction by me. I am truly honored to be introducing such a great and important work. When I began researching and writing about privacy in the late 1990s, I kept coming across citations to Westin’s book, and I was surprised that it was no longer in print. I tracked down a used copy, which wasn’t as easy to do then as it is today. What impressed me most about the book was that it explored the meaning and value of privacy in a rich and interdisciplinary way.
A very brief excerpt from my intro:
At the core of the book is one of the most enduring discussions of the definition and value of privacy. Privacy is a very complex concept, and scholars and others have struggled for centuries to define it and articulate its value. Privacy and Freedom contains one of the most sophisticated, interdisciplinary, and insightful discussions of privacy ever written. Westin weaves together philosophy, sociology, psychology, and other disciplines to explain what privacy is and why we should protect it.
I was fortunate to get to know Alan Westin, as I began my teaching career at Seton Hall Law School in Newark, New Jersey, and Alan lived and worked nearby. I had several lunches with him, and we continued our friendship when I left to teach at George Washington University Law School. Alan was kind, generous, and very thoughtful. He was passionate about ideas. I miss him greatly.
So it is a true joy to see his book live on in print once again.
By Daniel J. Solove
What is privacy? This is a central question to answer, because a conception of privacy underpins every attempt to address it and protect it. Every court that holds that something is or isn’t privacy is basing its decision on a conception of privacy — often unstated. Privacy laws are also based on a conception of privacy, which informs what things the laws protect. Decisions involving privacy by design also involve a conception of privacy. When privacy is “baked into” products and services, there must be some understanding of what is being baked in.
Far too often, conceptions of privacy are too narrow, focusing on keeping secrets or avoiding disclosure of personal data. Privacy is much more than these things. Overly narrow conceptions of privacy lead courts to conclude that there is no privacy violation when something doesn’t fit the narrow conception. Narrow or incomplete conceptions of privacy lead to laws that fail to address key problems. Privacy by design can involve throwing in a few things and calling it “privacy,” but this is like cooking a dish that requires 20 ingredients but including only 5 of them.
It is thus imperative to think through what privacy is. If you have an overly narrow or incomplete conception of privacy, you’re not going to be able to effectively identify privacy risks or protect privacy.
In my work, I have attempted to develop a practical and usable conception of privacy. In what follows, I will briefly describe what I have developed.
By Daniel J. Solove
The recent breach of the Office of Personnel Management (OPM) network involved personal data on millions of federal employees, including data related to background checks. OPM is now offering 18 months of free credit monitoring and identity theft insurance to victims. But as experts note in a recent Washington Post article, this is not nearly enough:
If the data is in the hands of traditional cyber criminals, the 18-month window of protection may not be enough to protect workers from harm down the line. “The data is sold off, and it could be a while before it’s used,” said Michael Sussmann, a partner in the privacy and data security practice at law firm Perkins Coie. “There’s often a very big delay before having a loss.”