PRIVACY + SECURITY BLOG

News, Developments, and Insights


Webinar: The Quantified Worker – AI and Employment

If you couldn’t make it to my recent webinar to discuss Ifeoma Ajunwa’s book, The Quantified Worker: Law and Technology in the Modern Workplace, you can watch the replay here. I had a great discussion with Ifeoma Ajunwa, Pauline Kim, and Matthew Bodie on the use of AI in hiring decisions and for other employment purposes.


ABA Event on Privacy and Consent


On Thursday, June 22 at 12:00 pm ET, I’ll be speaking at a webinar hosted by the ABA on Privacy and the Ongoing Viability of Consent. I’ll be discussing the background and effectiveness of the consent model with Jessica Rich and Aryeh Friedman.

You can find more information about the event and how to register here.


I will be speaking about my recent article Murky Consent: An Approach to the Fictions of Consent in Privacy Law.



Video of My Children’s Book, THE EYEMONGER


Here’s an animated video of my children’s book, The Eyemonger, that I narrated.

If you want the print version, click here to order the book on Amazon.

I also have free resources for parents and teachers to accompany the book.

Publishers Weekly writes that The Eyemonger is a “delightfully illustrated story concerned with issues of privacy. . . . Solove’s underlying theme and catchy rhymes sit perfectly on the cusp of children’s and middle-grade reading levels, and Beckwith’s eye-catching and brilliantly detailed illustrations will inspire young imaginations to soar.”

 


Cartoon: AI Experimentation and Regulation


Here’s a new cartoon on artificial intelligence, experimentation, and regulation. Creators of new technology often extol the virtues of experimentation. When it comes to policymakers experimenting with legal regulation, I often hear a different tune from those creating new technology. But they are experimenting with our lives and well-being, with society and democracy. Law, too, is an experiment. We often don’t know what works until we try it.  So, while those developing new technologies experiment on society, is it so wrong for society to experiment in return?


The Prediction Society: Algorithms and the Problems of Forecasting the Future


I am excited to share a draft of my new paper, co-authored with Hideyuki (“Yuki”) Matsumi, The Prediction Society: Algorithms and the Problems of Forecasting the Future. The paper is available for free on SSRN.

Download Article

Yuki is currently pursuing a PhD at Vrije Universiteit Brussel. He began his career as a technologist and then turned to law, where he has been exploring predictive technologies for more than a decade. The origins of this article trace back to 2011, when Yuki was my student and I supervised his thesis on predictive technologies. His work was way ahead of its time, and I am beyond thrilled to join him now in exploring these issues. Writing this paper with Yuki has been a terrific experience, and I have learned a tremendous amount working with him.

We aim to make a unique and important contribution to the discourse about AI, algorithms, and inferences by focusing specifically on predictions about the future. We argue that the law should recognize algorithmic predictions about the future as distinct from inferences about the past and present. These predictions present a special set of problems that the law does not address; its existing tools and rights are ill-suited for predictions. We examine in depth the issues the law must consider when addressing these problems.

I’m really happy with how the paper turned out, and I want to note that I played only a supporting role; Yuki has been the driving force behind this paper. I joined because I find the issues fascinating and of the utmost importance, and I believe we have something valuable to add to the discussion. We welcome feedback.


LinkedIn Live Chat on AI and Privacy Harms


I chatted with Luiza Jarovsky on LinkedIn Live about AI and privacy harms, focusing on my article Privacy Harms, co-authored with Danielle Citron. Luiza has a great newsletter called The Privacy Whisperer – definitely worth subscribing to.

You can read my Privacy Harms article here.

Here is the video of our chat.


Event at Ohio State University – Protecting Privacy in the Age of AI

On Thursday, April 20, 2023, I will be speaking at an event at Ohio State University, Protecting Privacy in the Age of AI: The Need for a Radical New Direction, with Margot Kaminski as commentator. I will discuss why privacy protection needs a radical new direction in today’s age of Big Data, algorithms, and AI, drawing from my recently published article, The Limitations of Privacy Rights, 98 Notre Dame Law Review 975 (2023).

You can find more information about the event here.

This is an in-person event with a Zoom option.

April 20, 2023, 12:15 pm – 1:15 pm
Ohio State University
Saxbe Auditorium



The Limitations of Privacy Rights


I have posted the final published version of my new article, The Limitations of Privacy Rights, 98 Notre Dame Law Review 975 (2023), on SSRN, where it can be downloaded for free. The article critiques the effectiveness of individual privacy rights generally, as well as specific privacy rights such as the rights to information, access, correction, erasure, objection, and data portability, the right not to be subject to automated decisionmaking, and more.

Here’s the abstract:

Individual privacy rights are often at the heart of information privacy and data protection laws. The most comprehensive set of rights, from the European Union’s General Data Protection Regulation (GDPR), includes the right to access, right to rectification (correction), right to erasure, right to restriction, right to data portability, right to object, and right to not be subject to automated decisions. Privacy laws around the world include many of these rights in various forms.

In this article, I contend that although rights are an important component of privacy regulation, rights are often asked to do far more work than they are capable of doing. Rights can only give individuals a small amount of power. Ultimately, rights are at most capable of being a supporting actor, a small component of a much larger architecture. I advance three reasons why rights cannot serve as the bulwark of privacy protection. First, rights put too much onus on individuals when many privacy problems are systemic. Second, individuals lack the time and expertise to make difficult decisions about privacy, and rights cannot practically be exercised at scale with the number of organizations that process people’s data. Third, privacy cannot be protected by focusing solely on the atomistic individual. The personal data of many people is interrelated, and people’s decisions about their own data have implications for the privacy of other people.

The main goal of providing privacy rights is to give individuals control over their personal data. However, effective privacy protection involves not just facilitating individual control, but also bringing the collection, processing, and transfer of personal data under control. Privacy rights are not designed to achieve the latter goal, and they fail at the former.

After discussing these overarching reasons why rights are insufficient for the oversized role they currently play in privacy regulation, I discuss the common privacy rights and why each falls short of providing significant privacy protection. For each right, I propose broader structural measures that can achieve its underlying goals in a more systematic, rigorous, and less haphazard way.

Download Article
