Here’s a roundup of my scholarship for 2024. But first, a preview of my forthcoming book (Feb 2025):
ON PRIVACY AND TECHNOLOGY
(Oxford University Press) – Available for Pre-Order
From the book jacket:
Succinct and eloquent, On Privacy and Technology is an essential primer on how to face the threats to privacy in today’s age of digital technologies and AI.
With the rapid rise of new digital technologies and artificial intelligence, is privacy dead? Can anything be done to save us from a dystopian world without privacy?
In this short and accessible book, internationally renowned privacy expert Daniel J. Solove draws from a range of fields, from law to philosophy to the humanities, to illustrate the profound changes technology is wreaking upon our privacy, why they matter, and what can be done about them. Solove provides incisive examinations of key concepts in the digital sphere, including control, manipulation, harm, automation, reputation, consent, prediction, inference, and many others.
Compelling and passionate, On Privacy and Technology teems with powerful insights that will transform the way you think about privacy and technology.
New Edition of PRIVACY LAW FUNDAMENTALS
(with Professor Paul Schwartz)
“This book is an indispensable guide for privacy and data protection practitioners, students, and scholars. You will find yourself consulting it regularly, as I do. It is a must for your bookshelf.” – Danielle Citron, University of Virginia Law School
“Two giants of privacy scholarship succeed in distilling their legal expertise into an essential guide for a broad range of the legal community. Whether used to learn the basics or for quick reference, Privacy Law Fundamentals proves to be concise and authoritative.” – Jules Polonetsky, Future of Privacy Forum
FINAL PUBLISHED ARTICLES
Kafka in the Age of AI and the Futility of Privacy as Control
104 B.U. L. Rev. 1021 (2024) (with Woodrow Hartzog)
In this Essay, we argue that although Kafka starkly shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence. The law can’t empower individuals when it is the system that renders them powerless. Ultimately, privacy law’s primary goal should not be to give individuals control over their data. Instead, the law should focus on ensuring a societal structure that brings the collection, use, and disclosure of personal data under control.
Murky Consent: An Approach to the Fictions of Consent in Privacy Law
104 B.U. L. Rev. 593 (2024)
In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious. Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid.
AI, Algorithms, and Awful Humans
92 Fordham L. Rev. 1923 (2024) (with Hideyuki Matsumi)
This Essay critiques arguments that algorithmic decision-making is better than human decision-making. Automated decisions often rely too heavily on quantifiable data to the exclusion of qualitative data, changing the nature of the decision itself. Whereas certain matters, such as the weather, might be readily reducible to quantifiable data, human lives are far more complex.
Data Is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data
118 Nw. U. L. Rev. 1081 (2024)
Sensitive data is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. With Big Data and powerful machine learning algorithms, most nonsensitive data gives rise to inferences about sensitive data. In many privacy laws, data giving rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws. This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive as well as expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake—they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does.
PAPER DRAFTS
The Great Scrape: The Clash Between Scraping and Privacy
(with Woodrow Hartzog)
In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others. This Article explores the fundamental tension between scraping and privacy law.
A Regulatory Roadmap to AI and Privacy
Although new AI laws can help, AI is making it glaringly clear that a privacy law rethink is long overdue. Understanding the privacy challenges posed by AI is essential. A comprehensive overview is necessary to evaluate the effectiveness of current laws, identify their limitations, and decide what modifications or new measures are required for adequate regulation.
Against Privacy Essentialism
In this Essay, I respond to Maria Angel and Ryan Calo’s critique of my theory of privacy as a plurality of different yet related things.
Artificial Intelligence and Privacy
This Article aims to establish a foundational understanding of the intersection between artificial intelligence (AI) and privacy, outlining the current problems AI poses to privacy and suggesting potential directions for the law’s evolution in this area. Thus far, few commentators have explored the overall landscape of how AI and privacy interrelate. This Article seeks to map this territory. Overall, AI is not an unexpected upheaval for privacy; it is, in many ways, the future that has long been predicted. But AI glaringly exposes the longstanding shortcomings, infirmities, and wrong approaches of existing privacy laws. In this Article, I provide a roadmap to the key issues that the law must tackle and guidance about the approaches that can work and those that will fail.
* * *
Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.
Professor Solove’s Newsletter (free)
Sign up for Professor Solove’s Newsletter about his writings, whiteboards, cartoons, trainings, events, and more.