PRIVACY + SECURITY BLOG

News, Developments, and Insights


2023 Highlights - Scholarship

Here’s a roundup of my scholarship for 2023. With Professor Paul Schwartz, I published a new edition of my casebook, Information Privacy Law, as well as new editions of the topical paperbacks (both will be in print by the end of December). One article came out in print, and I have several paper drafts in various stages of the publication process. See below for details.


New Edition of Information Privacy Law Casebook

(Aspen 2024) (with Professor Paul Schwartz)

New Editions of Information Privacy Law
Topical Paperback Casebooks

(Aspen 2024) (with Professor Paul Schwartz)


FINAL PUBLISHED ARTICLES

The Limitations of Privacy Rights 

98 Notre Dame Law Review 975 (2023)


Quote from the Article:

In this Article, I argue that although rights are an important component of privacy regulation, rights are often asked to do far more work than they are capable of doing.  Privacy rights can’t solve the problem of data disempowerment.  The ability of individuals to exercise control over their personal data is quite limited; there is a ceiling to individual control.  Rights can give people a small amount of power in a few isolated instances, but this power is too fragmented and haphazard to have a meaningful impact on protecting privacy.  Ultimately, rights are at most capable of being a supporting actor, a small component in a much larger architecture.

I advance three reasons why rights are quite limited as an effective way to protect privacy.  First, many rights are not practical for individuals to exercise.  Rights put too much of the onus on individuals to fight a war they can’t win.  Attempting to use privacy rights as a primary way to protect privacy is akin to arming an individual with a dagger to fight an entire army.  People can’t exercise their rights in the kind of systematic way necessary to have a meaningful impact.

Second, privacy rights involve “privacy self-management,” a term I have used to describe an approach to privacy that seeks to empower individuals to take control of their personal data. Unfortunately, people lack the expertise to make meaningful choices about their data.  These choices involve weighing the costs and benefits of allowing the collection, use, or transfer of their data.  Although the benefits are immediate and concrete, the costs involve risks that are more abstract and speculative.  Individuals lack the expertise to understand and assess the risks.  Even experts lack the knowledge about how the data will be used in the future and how algorithms will reach decisions regarding the data.

Third, privacy can’t be protected at the level of the atomistic individual.  Individuals make privacy choices that have effects not just for themselves but for many others.  For example, sharing one’s genetic data also shares the genetic data of one’s family members.  In today’s world of machine learning, the personal data of everyone in a data set has an impact on the decisions that the system makes.

PAPER DRAFTS

AI, Algorithms, and Awful Humans – Revised Version

92 Fordham Law Review (forthcoming 2024) (with Hideyuki Matsumi)


Quote from the Essay:

In this Essay, we express skepticism about these arguments for decisions made about humans. Algorithms change the nature of decisions, shifting them toward quantifiable data and away from qualitative elements. Although this shift can bring benefits, there are also significant costs that are often underappreciated. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Machine decision-making currently can’t incorporate emotion, morality, or value judgments, which are essential components of decisions involving people’s welfare.  The increased use of automation in decisions can lead to changes in the weight given to certain factors over others or affect how conflicting goals are resolved—not necessarily in better ways. When machine and human decision-making are integrated, the focus of decisions can shift heavily to automated dimensions and neglect the moral issues involved.

We contend that algorithmic decision-making is being relied upon too eagerly and without sufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.

The Prediction Society: Algorithms and the Problems of Forecasting the Future

(with Hideyuki Matsumi)


Quote from the Paper:

We contend that algorithmic predictions raise at least four major problems that many laws concerning privacy, data protection, and anti-discrimination fail to address adequately. First, algorithmic predictions lead to what we call the “fossilization problem” because the predictions are based on data about the past. Decisions involving algorithmic predictions can reinforce patterns from past data and can further entrench discrimination, inequality, and privilege.

A second difficulty with algorithmic predictions is the “unfalsifiability problem.” Because future forecasting is about a probable but uncertain future (i.e., contingent and unvested), the matter asserted cannot be verified or falsified when predictions are made. Because the law allows individuals mainly to challenge inaccurate data, individuals lack the ability to meaningfully contest predictions because they exist in the twilight between truth and falsity.

Third, in what we call the “preemptive intervention problem,” when preemptive decisions or interventions are made based on future forecasting, the feedback loop to assess whether the predictions are accurate dissipates, which can reinforce potentially inaccurate future forecasting.

Fourth, algorithmic predictions can lead to a “self-fulfilling prophecy problem.” Predictions do not just forecast the future; they actively shape it. Decisions made based on algorithmic predictions can make them more likely to come true.

Murky Consent: An Approach to the Fictions of Consent in Privacy Law


Quote from the Paper:

Instead of trying in vain to turn consent from fiction to fact—to make it genuine, informed, and meaningful—the law should instead lean into the fiction. The law should embrace privacy consent in its murkiness.

Currently, privacy law has a binary view of consent—either there is consent or there isn’t. Recognizing murky consent creates a middle position between the binary poles of consent and non-consent. Most privacy consent is fraught with ambiguity and beset with problems; it is deeply problematic and unreliable. Rather than deny these deficiencies as some approaches do or try to repair them as other approaches do, the best approach is to accept and acknowledge them.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. This would allow for a degree of individual autonomy but with powerful guardrails to limit exploitative and harmful behavior by the organizations collecting and using personal data.

In this Article, I argue that most privacy consent should be considered to be murky consent and that this is the ideal form of consent for the law to recognize in most circumstances. Because murky consent lacks legitimacy, the law should reduce its power. The law can do so by making murky consent subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.

Data Is What Data Does: Regulating Use, Harm, and Risk Instead of Sensitive Data

Quote from the Paper:

This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive—as well as expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake: they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But meaningful regulation cannot be determined solely by looking at the data itself. Data is what data does. Heightened protection of personal data should be based on the extent of harm or the risk of harm from its collection, use, or transfer.

Although it continues to rise in popularity, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, nonsensitive data can easily be used as a proxy for certain types of sensitive data.

Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. The very notion that special categories of personal data can readily be demarcated fundamentally misunderstands how most personal data is interrelated and how algorithms and inferences work.

When nonsensitive data can give rise to inferences about sensitive data, many privacy laws correctly consider it to be sensitive data. Indeed, in our age of modern data analytics, it would be naïve to fail to account for inferences. The problem, however, is the rabbit hole goes all the way to Wonderland. In the age of Big Data, powerful machine learning algorithms facilitate inferences about sensitive data from nonsensitive data. As a result, nearly all personal data can be sensitive, and thus the sensitive data categories can swallow up everything. Oddly, the laws just seem to hum along as if this problem does not exist.

The implications of this point are significant. If nearly all data is sensitive data, then most organizations are violating the EU’s General Data Protection Regulation (GDPR) and many other privacy laws that have heightened protections for sensitive data.

* * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.

Professor Solove’s Newsletter (free)

Sign up for Professor Solove’s Newsletter about his writings, whiteboards, cartoons, trainings, events, and more.
