On Wednesday, the U.S. Court of Appeals for the 11th Circuit issued its long-awaited decision in LabMD's challenge to an FTC enforcement action: LabMD, Inc. v. Federal Trade Commission (11th Cir. June 6, 2018). While some worry that the opinion will undermine the FTC's power to enforce Section 5 on privacy and security issues, the opinion is actually quite narrow and far from crippling.
While the LabMD opinion likely has important implications for how the FTC will go about enforcing reasonable data security requirements, we think the opinion still allows the FTC to continue building a coherent body of privacy and security complaints incrementally, much as the common law develops. See Solove and Hartzog, The FTC and the New Common Law of Privacy, 114 Columbia Law Review 583 (2014).
I hope you enjoy my latest cartoon about data security — a twist on the angel on one shoulder and devil on the other. Humans are the weakest link for data security. Attempts to control people with surveillance or lots of technological restrictions often backfire. I believe that the most effective solution is to train people. It’s not perfect, but if training is done right, it can make a meaningful difference.
Recently published by Cambridge University Press, Re-Engineering Humanity explores how artificial intelligence, automated decisionmaking, and the increasing use of Big Data are shaping the future of humanity. This excellent interdisciplinary book is co-authored by Professors Evan Selinger and Brett Frischmann, and it critically examines three interrelated questions. Under what circumstances can using technology make us more like simple machines than actualized human beings? Why does the diminution of our human potential matter? What will it take to build a high-tech future in which human beings can flourish? This is a book that will make you think about technology in a new and provocative way.
In In re Zappos.com, Inc., Customer Data Security Breach Litigation (9th Cir. Mar. 8, 2018), the U.S. Court of Appeals for the 9th Circuit issued a decision that represents a more expansive way to understand data security harm. The case arises out of a breach in which hackers stole personal data on more than 24 million individuals. Although some plaintiffs alleged that they suffered identity theft as a result of the breach, other plaintiffs did not. The district court held that the plaintiffs who hadn't yet suffered identity theft lacked standing.
Standing is a requirement in federal court that plaintiffs allege an "injury in fact" — an injury that is concrete, particularized, and actual or imminent. If plaintiffs lack standing, their case is dismissed and can't proceed. For a long time, most litigation arising out of data breaches was dismissed for lack of standing because courts held that plaintiffs whose data was compromised in a breach didn't suffer any harm. This restrictive approach drew support from Clapper v. Amnesty International USA, 568 U.S. 398 (2013). In that case, the Supreme Court held that the plaintiffs couldn't prove for certain that they were under surveillance, concluding that they were merely speculating about future possible harm.
Early on, most courts rejected standing in data breach cases. A few courts resisted this trend, including the 9th Circuit in Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010). There, the court held that an increased future risk of harm could be sufficient to establish standing.