The global pandemic has affected everything. COVID-19 is not only grinding trials to a halt and foreclosing live, in-person judicial proceedings; it has also changed the class action litigation landscape, including data breach class actions. I recently had the opportunity to discuss the pandemic’s impact on data breach class actions with Daniel Raymond, a cyber & tech claims manager based in Beazley’s Chicago office.
Privacy at the Margins: An Interview with Scott Skinner-Thompson on Privacy and Marginalized Groups
Recently, Professor Scott Skinner-Thompson (Colorado Law) published an excellent, thought-provoking book, Privacy at the Margins (Cambridge University Press, 2020), which explores the important role that privacy plays for marginalized groups. The book is superb, and it is receiving the highest praise from leading scholars. For example, Dean Erwin Chemerinsky (Berkeley Law) proclaims that the book is “stunning in its originality, its clarity, and its insightful proposals for change.”
I am delighted to have the opportunity to interview Scott about the ideas and arguments in his book.
Standing in Data Breach Cases: Why Harm Is Not “Manufactured”
In a recent case, the U.S. Court of Appeals for the 11th Circuit weighed in on an issue that has continued to confound courts: Is there an injury caused by a data breach when victims don’t immediately suffer financial fraud? I wrote about this issue with Professor Danielle Citron in Risk and Anxiety: A Theory of Data Breach Harms, 96 Texas Law Review 737 (2018). (Danielle and I have just completed a new piece on Privacy Harms.) In that article, Danielle and I examined the inconsistent and messy cases and attempted to set forth a coherent approach.
The most recent case to weigh in on the issue is Tsao v. Captiva MVP Restaurant Partners, LLC, No. 18-14959 (11th Cir. Feb. 4, 2021). PDQ, a fast-food chicken restaurant chain, suffered a data breach in which hackers accessed customer credit card data for nearly a year. When the breach was announced, the plaintiff cancelled the credit cards he had used at PDQ. In doing so, he lost access to his preferred accounts, lost points and rewards, and expended time and effort. The Tsao court concluded that because the plaintiff couldn’t demonstrate that he suffered any credit card fraud, he lacked standing to sue.
In federal court, plaintiffs must demonstrate that they suffered a harm (actual or imminent injury) in order to sue. The plaintiff argued that he lost out on benefits when he cancelled his cards, but the court held that this was “manufactured” harm. The Tsao court relied on Clapper v. Amnesty International, 568 U.S. 398 (2013), where the U.S. Supreme Court held that plaintiffs can’t “manufacture” harm by spending money, time, and effort to protect themselves against surveillance that they couldn’t prove was occurring. Clapper’s view of “manufactured” harm strikes me as manufactured itself: a poorly reasoned, cooked-up excuse to deny standing. But the case is on the books, and it must be navigated around.
Privacy Harms
Professor Danielle Keats Citron (University of Virginia School of Law) and I have just posted a draft of our new article, Privacy Harms, on SSRN (free download). Here’s the abstract:
Privacy harms have become one of the largest impediments in privacy law enforcement. In most tort and contract cases, plaintiffs must establish that they have been harmed. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins, the U.S. Supreme Court held that courts can override Congress’s judgments about what harm should be cognizable and dismiss cases brought for privacy statute violations.
The caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm. Courts conclude that many privacy violations, such as thwarted expectations, improper uses of data, and the wrongful transfer of data to other organizations, lack cognizable harm.
The M.D. Anderson Case and the Future of HIPAA Enforcement
The U.S. Court of Appeals for the 5th Circuit just issued a blistering attack on HIPAA enforcement by the U.S. Department of Health and Human Services (HHS). In University of Texas M.D. Anderson Cancer Center v. Department of Health and Human Services (No. 19-60226, Jan. 14, 2021), the 5th Circuit struck down a fine and enforcement action by HHS as arbitrary and capricious. This case has significant implications for HHS enforcement — and for agency enforcement more generally.
My reactions to the case are mixed. The court makes a number of good points, and it identifies flaws in HHS’s interpretation of HIPAA and in its enforcement approach. But parts of the opinion overreach and are unrealistic.
The case arises out of an HHS civil monetary penalty (CMP) against the University of Texas M.D. Anderson Cancer Center for $4,348,000 for a series of incidents involving unencrypted portable electronic devices being lost or stolen. In 2012, a faculty member had ePHI of 29,021 people on an unencrypted laptop that was stolen. Subsequently, in 2013, a trainee and visiting researcher lost unencrypted USB drives with ePHI of thousands of patients on them. HHS imposed a fine of $1.348 million for violating the HIPAA Encryption Rule for the 2012 incident and $1.5 million for each of the 2013 incidents, adding up to a total of $4.348 million.
Applying the Administrative Procedure Act (APA), the 5th Circuit concluded that HHS’s enforcement was “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” 5 U.S.C. § 706(2). There are several parts of the court’s decision that are worth discussing.
Restoring the CDA Section 230 to What It Actually Says
When Donald Trump targeted the Communications Decency Act (CDA) Section 230, a debate about the law flared up. Numerous reforms were proposed, some even seeking to abolish the law. Unfortunately, the debate has been clouded with confusion and misinformation.
Although I disagree with many of the proposals to reform or abolish Section 230, I have long believed that it has problems. More than a decade ago, I critiqued Section 230 extensively in my book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (2007) (free download here).
The CDA Section 230, at 47 U.S.C. § 230(c)(1), provides:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The actual text of the law is fine, and I wouldn’t change it. My proposal for reform would be for Congress to reissue Section 230 with the same text and instruct courts to follow it as written. The problem with Section 230 is that, in a bout of free speech zeal, courts have interpreted the law far more expansively than it is written and than it should be.
The Myth of the Privacy Paradox: Final Published Version
I’m happy to announce that my article is now out in print!
The Myth of the Privacy Paradox
89 Geo. Wash. L. Rev. 1 (2021)
You can download a copy for free at SSRN.
Abstract:
In this Article, Professor Daniel Solove deconstructs and critiques the privacy paradox and the arguments made about it. The “privacy paradox” is the phenomenon where people say that they value privacy highly, yet in their behavior relinquish their personal data for very little in exchange or fail to use measures to protect their privacy.
Commentators typically make one of two types of arguments about the privacy paradox. On one side, the “behavior valuation argument” contends behavior is the best metric to evaluate how people actually value privacy. Behavior reveals that people ascribe a low value to privacy or readily trade it away for goods or services. The argument often goes on to contend that privacy regulation should be reduced.
On the other side, the “behavior distortion argument” suggests that people’s behavior is not an accurate metric of preferences because behavior is distorted by biases and heuristics, manipulation and skewing, and other factors.
Professor Solove argues instead that the privacy paradox is a myth created by faulty logic. The behavior involved in privacy paradox studies involves people making decisions about risk in very specific contexts. In contrast, people’s attitudes about their privacy concerns or how much they value privacy are much more general in nature. It is a leap in logic to generalize from people’s risk decisions involving specific personal data in specific contexts to reach broader conclusions about how people value privacy.
The behavior in the privacy paradox studies does not lead to a conclusion for less regulation. On the other hand, minimizing behavioral distortion will not cure people’s failure to protect their own privacy. Managing one’s privacy is a vast, complex, and never-ending project that does not scale. Privacy regulation often seeks to give people more privacy self-management, but doing so will not protect privacy effectively. Professor Solove argues instead that privacy law should focus on regulating the architecture that structures the way information is used, maintained, and transferred.
Click here to read the piece.
Video: A View to Next Year and Beyond with Travis LeBlanc, Simon McDougall, Daniel Solove, Justin Antonipillai
Please check out the conversation I had with Travis LeBlanc (Cooley), Simon McDougall (UK ICO), and Justin Antonipillai (Wirewheel) on Data Privacy Day. We discussed privacy developments for this year and beyond.
Cartoon: The Relationship Between Privacy and Data Security
I created this cartoon to highlight the relationship between privacy and data security. Privacy and data security are deeply interrelated. Unfortunately, privacy is often overlooked as a key dimension of keeping data secure. Minimizing data collection and ensuring that data isn’t retained for longer than needed both improve security immensely. When there’s a data breach, much less data is exposed if these good data practices are maintained.
Imagine security as a safe. The safe can be made of strong steel and have a great lock. But if the data in the safe is widely shared, how secure is it? Sometimes, organizations are so focused on guarding the back door that they leave the front door wide open.
Cartoon: Robots, CAPTCHA, and Privacy
This cartoon is about the CAPTCHAs that people click to indicate that they are not robots. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart. Online, there is a scrum to gather and use data, a race to automate nearly everything, and an invasion of good bots and bad bots that play an enormous role in shaping the Internet.
In the old days, when questions would become too prying, people would say “None of your damn business!” These days, people often aren’t even asked; data is gathered at every turn, often surreptitiously.
If you want to license this cartoon or others for use, click here.