News, Developments, and Insights


Kafka in the Age of AI and the Futility of Privacy as Control


I’m posting the final published version of my essay with Professor Woodrow Hartzog (BU Law), Kafka in the Age of AI and the Futility of Privacy as Control, 104 B.U. L. Rev. 1021 (2024). It’s a short, engaging read – just 20 pages! We argue that although Kafka shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence.

You can download the article for free on SSRN.


Scroll down for some excerpts from our PowerPoint presentation for this essay – images created by AI!


Against Privacy Essentialism – My Response to Angel and Calo’s Critique of My Theory of Privacy


What is “privacy”? The concept of privacy has proven elusive to define, but I developed a theory for understanding it about 20 years ago. Maria Angel and Ryan Calo recently published a formidable critique of my theory of privacy:

Maria P. Angel and Ryan Calo, Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy, 124 Colum. L. Rev. 507 (2024)

Their arguments are thoughtful and worth reckoning with, and I have just written a lengthy response, which I have entitled Against Privacy Essentialism. As I wrote in the response:

At the outset, I want to note that I welcome Angel and Calo’s critique. I am thrilled that they have so thoroughly engaged with my work, and I see their essay as an invitation to revisit the issue about how to conceptualize privacy. I thus write this essay with gratitude at having this opportunity to put my theory to the test, nearly twenty years after I started developing it, and seeing how it holds up today. Along the way, I will address other critiques of my pluralistic taxonomic approach by Professors Jeffrey Bellin, Eric Goldman, and David Pozen.

Here’s the abstract:

In this essay, Daniel Solove responds to Maria Angel and Ryan Calo’s critique of his theory of privacy. In their article, Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy, 124 Columbia Law Review 507 (2024), Angel and Calo acknowledge that “Solove’s taxonomic approach appears to have exerted an extraordinary influence on the shape and scope of contemporary privacy law scholarship,” but they argue that this approach “fails to provide a useful framework for determining what constitutes a privacy problem and, as a consequence, has begun to disserve the community.”

Solove argues that Angel and Calo wrongly view Solove’s conception of privacy as boundaryless and arbitrary, want the term “privacy” to do work it is not capable of, fail to show how the approach leads to bad consequences, and propose alternative approaches to conceptualizing privacy that fare worse on their own grounds of critique.

For the most part, Angel and Calo and several other critics of his theory view privacy in an essentialist manner. Their privacy essentialism involves a commitment to understanding privacy as having clear boundaries that demarcate it from other concepts and a definitive definition resting on a proper authoritative foundation.

In this essay, Solove argues against privacy essentialism. This way of thinking unproductively narrows thought, creates silos, leads to overly narrow or overly broad failed attempts at conceptualizing privacy, stunts the development of the field, and results in constricted and impoverished policymaking. Solove argues that privacy essentialism leads to a dead end and merely provides the illusion of certainty and clarity.

For those of you interested in reading about my theory of privacy, it is developed in several works. My book is the most recent and complete version of the theory, and it incorporates, updates, and adds to the articles.


Murky Consent: An Approach to the Fictions of Consent in Privacy Law – FINAL VERSION


I’m delighted to share the newly published final version of my article:

Murky Consent: An Approach to the Fictions of Consent in Privacy Law
104 B.U. L. Rev. 593 (2024)

I’ve been pondering privacy consent for more than a decade, and I think I finally made a breakthrough with this article. I welcome feedback and hope you enjoy the piece.

Mini Abstract:

In this Article, I argue that most of the time, privacy consent is fictitious. Instead of futile efforts to turn privacy consent from fiction into fact, the better approach is to lean into the fictions. The law can’t stop privacy consent from being a fairy tale, but the law can ensure that the story ends well. I argue that privacy consent should confer less legitimacy and power and that it should be backstopped by a set of duties on organizations that process personal data based on consent.

Also check out Prof. Stacy-Ann Elvy’s insightful response piece, Privacy Law’s Consent Conundrum. Additionally, here’s a video of me presenting the article.

Full Abstract:

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.


A Regulatory Roadmap to AI and Privacy

Over at IAPP News, I wrote a short essay called A Regulatory Roadmap to AI and Privacy. It summarizes my longer article, Artificial Intelligence and Privacy. For those who want the short, 2,000-word version of my thoughts on AI and privacy, read the short essay. For those who want more detail, read the full article.

I created an infographic to capture the issues, but I couldn’t include it in the IAPP piece, so I’ll include it here (see above).

For easier reading and printing, here’s a PDF version of the short essay with the infographic included. Or you can check out my essay at IAPP. The long article is here.

From the short essay:

Although new AI laws can help, AI is making it glaringly clear that a privacy law rethink is long overdue. . . .

Understanding the privacy challenges posed by AI is essential. A comprehensive overview is necessary to evaluate the effectiveness of current laws, identify their limitations and decide what modifications or new measures are required for adequate regulation.



Webinar – Another Privacy Bill on Capitol Hill: The American Privacy Rights Act

In case you missed my recent webinar with Laura Riposo VanDruff and Jules Polonetsky, you can watch the replay here. We discussed the strengths and weaknesses of the American Privacy Rights Act (APRA) and its likelihood of passing.



AI, Algorithms, and Awful Humans – Final Published Version


I am pleased to share the final published version of my short essay with Yuki Matsumi. It was written for a symposium issue of the Fordham Law Review.

AI, Algorithms, and Awful Humans
92 Fordham L. Rev. 1923 (2024)

Mini Abstract:

This Essay critiques arguments that algorithmic decision-making is better than human decision-making. Two arguments are often advanced to justify the increasing use of algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. We argue that such contentions are far too optimistic and fail to appreciate the shortcomings of machine decisions and the difficulties in combining human and machine decision-making. Automated decisions often rely too much on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Human and machine decision-making often do not mix well. Humans often perform badly when reviewing algorithmic output.

Download the piece for free here:


* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy and data security training.

NEWSLETTER: Subscribe to Professor Solove’s free newsletter

Prof. Solove’s Privacy Training: 150+ Courses




HOT OFF THE PRESS! Privacy Law Fundamentals, Seventh Edition (2024). This is my short guide to privacy law, co-authored with Professor Paul Schwartz (Berkeley Law).

Believe it or not, there have been some new developments in privacy law . . .

“This book is an indispensable guide for privacy and data protection practitioners, students, and scholars. You will find yourself consulting it regularly, as I do. It is a must for your bookshelf.” – Danielle Citron, University of Virginia Law School

“Two giants of privacy scholarship succeed in distilling their legal expertise into an essential guide for a broad range of the legal community. Whether used to learn the basics or for quick reference, Privacy Law Fundamentals proves to be concise and authoritative.” – Jules Polonetsky, Future of Privacy Forum


If you’re interested in the digital edition, click here.



Webinar – The FTC, Privacy, and AI

In case you missed my recent webinar with Maneesha Mithal, you can watch the replay here. We discussed recent FTC enforcement actions, algorithmic deletion, the FTC’s current rulemaking, enforcement of the health breach notification rule, the FTC’s role in regulating AI, and other issues.


The Failure of Data Security Law


Professor Woodrow Hartzog and I are posting The Failure of Data Security Law as a free download on SSRN. This chapter is from our book, BREACHED! WHY DATA SECURITY LAW FAILS AND HOW TO IMPROVE IT.

In this book chapter, we survey the law and policy of data security and analyze its strengths and weaknesses. Broadly speaking, there are three types of data security laws: (1) breach notification laws; (2) security safeguards laws that require substantive measures to protect security; and (3) private litigation under various causes of action. We argue that despite some small successes, the law is generally failing to combat the data security threats we face.

Breach notification laws merely require organizations to provide transparency about data breaches; they offer neither prevention nor a cure. Security safeguards laws are often enforced too late, if at all. Enforcement authorities wait until a data breach occurs, but penalizing organizations after a breach increases the pain of a breach only marginally, not enough to be a game changer. Private litigation has increased the costs of data breaches but has accomplished little else. Courts have often struggled to understand the harm from data breaches, so data breach cases have frequently been dismissed.

Overall, we contend that data security law is too reactionary. The law fails to do enough to prevent data breaches, focuses too much on organizations that suffer data breaches and ignores other contributing actors, and doesn’t take sufficient steps to mitigate the harm from data breaches.


This chapter can stand alone, but of course, we encourage you to read our whole book, BREACHED! WHY DATA SECURITY LAW FAILS AND HOW TO IMPROVE IT.

