News, Developments, and Insights


Cartoon – Halloween AI Algorithm Training


Here’s my latest cartoon – for Halloween. This cartoon was inspired by the many companies now starting to use their users’ data to train their AI algorithms. Recently, Elon Musk’s X (formerly Twitter) changed its privacy notice to indicate that it would start using user data for AI training. As the famous saying goes, “If you’re not paying for the product, you are the product.”

Some other Halloween cartoons:

Privacy Law Frankenstein


Big Data Halloween


Continue Reading

BU Law Review Symposium on Privacy Law

I will be speaking on November 3, 2023 at a Boston University Law Review symposium: Information Privacy Law at the Crossroads. From the symposium description:

This symposium aims to gather leading privacy scholars to examine the current state of privacy law and theory and explore its direction. With the introduction of the first bipartisan omnibus bill in Congress in a decade, President Biden calling for better privacy legislation, and states enacting a flurry of new privacy laws, it is an excellent time to revisit privacy law’s commitments and map its future in a world where people are exposed and exploited like never before.

The symposium papers will ultimately be published in a volume of the Boston University Law Review. I’m writing the introduction with Professor Woodrow Hartzog (BU Law School), and our essay is titled Kafka in the Age of AI and the Futility of Privacy as Control. Stay tuned, as we’ll be posting our draft on SSRN soon.

There is an amazing lineup of speakers at the symposium. They include:

  • Alexis Shore (Boston University)
  • Daniel Solove (George Washington University)
  • Neil Richards (Washington University in St. Louis)
  • Maria Angel (University of Washington, School of Law)
  • Salome Viljoen (University of Michigan Law School)
  • Christopher Robertson (Boston University School of Law)
  • Meg Jones (Georgetown University)
  • Ngozi Okidegbe (Boston University School of Law)
  • Paul Schwartz (University of California Berkeley)
  • Helen Nissenbaum (Cornell Tech)
  • Julie Dahlstrom (Boston University School of Law)
  • Anita Allen (University of Pennsylvania, Carey Law School)
  • Khiara M. Bridges (University of California Berkeley, School of Law)
  • Scott Skinner-Thompson (University of Colorado)
  • Ari Waldman (University of California Irvine, Law)
  • Claudia Haupt (Northeastern University, School of Law)
  • Danielle Citron (University of Virginia, School of Law)
  • Margot Kaminski (University of Colorado, School of Law)
  • Jasmine McNealy (University of Florida)
  • Zahra Takhshid (University of Denver, Sturm College of Law)
  • Rory Van Loo (Boston University School of Law)
  • Pauline Kim (Washington University in St. Louis, School of Law)
  • Paul Ohm (Georgetown University Law Center)
  • William McGeveran (University of Minnesota, Law School)
  • Chris Gilliard (Just Tech Fellow at the Social Science Research Council)

More information about the event is here.

Continue Reading

Personal and Sensitive Data


NOTE: This post was originally part of my special newsletter on LinkedIn – Privacy+Tech Insights. This is a different newsletter from my weekly newsletter. My LinkedIn newsletters are less frequent and typically involve a more focused analysis of a particular issue.

A quiet revolution has been going on with personal and sensitive data. There have been many notable developments. In the past few years, we’ve witnessed the triumph of the EU approach to defining personal data and to designating special protections for sensitive data.

We’ve seen a growing recognition in the law that:

  • the overwhelming modern consensus in privacy law is to define personal data as identified or identifiable data
  • new laws (post-GDPR) are now overwhelmingly recognizing sensitive data, even in the U.S.
  • various pieces of non-personal data can, in combination, be identifiable
  • the ability to make inferences about data can’t be ignored
  • non-sensitive data that gives rise to inferences about sensitive data counts as sensitive data

These are significant developments, yet oddly, they haven’t made headline news.

Continue Reading

AI, Algorithms, and Awful Humans


I’m very excited to post my new short draft essay with Hideyuki (“Yuki”) Matsumi (Vrije Universiteit Brussel). The essay, which is a quick read (just 19 pages), is entitled AI, Algorithms, and Awful Humans, forthcoming in 92 Fordham Law Review (2024). It will be part of a Fordham Law Review symposium, The New AI: The Legal and Ethical Implications of ChatGPT and Other Emerging Technologies (Nov. 3, 2023).

The essay argues that various arguments about human versus machine decision-making fail to account for several important considerations regarding how humans and machines decide. You can download the article for free on SSRN. We welcome feedback.


Here’s the abstract:

A profound shift is occurring in the way many decisions are made, with machines taking greater roles in the decision-making process. Two arguments are often advanced to justify the increasing use of automation and algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. These arguments exert a powerful influence on law and policy.

In this Essay, we contend that in the context of making decisions about humans, these arguments are far too optimistic. We argue that machine and human decision-making are not readily compatible, making the integration of human and machine decision-making extremely complicated.

It is wrong to view machines as deciding like humans do, but better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions have a moral or value judgment or involve human lives and behavior. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make.

Automated decisions often rely too much on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Human and machine decision-making often don’t mix well. Humans often perform badly when reviewing algorithmic output.

We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.

You can download the essay draft for free on SSRN.


* * * *

Continue Reading

Cartoon: Tech Companies, Innovation, and Regulation


Here’s my new cartoon about how many tech companies extol innovation, yet seem to lose that innovative spirit when it comes to regulation. With the right incentives, it’s amazing how tech companies can rise to the challenge. They can certainly innovate to address regulatory demands; instead, they often send in lobbyists to pout, complain, or block laws. It would be better for companies to innovate for regulation rather than fight it. Policymakers might look to use some carrots rather than just sticks. Positive incentives can help steer tech companies to address regulatory concerns.

Continue Reading

Cartoon: AI Apocalypse


Here’s a new cartoon on AI. On AI turning against us and killing us all, I have a prediction – and it’s both good and bad. The good: I don’t think AI will decide to kill us all. The bad: We will be the ones to decide. We’ll replace ourselves with machine parts and code until nothing human remains . . . that is, of course, if we don’t destroy our planet first.

Continue Reading

Webinar – GDPR Enforcement: A Conversation with Max Schrems Blog

In case you missed my discussion with Max Schrems, you can watch the replay here. We discussed cross-border data transfers, litigation challenges and strategies, and potential reforms of the GDPR enforcement process.


Continue Reading

First Amendment Expansionism and California’s Age-Appropriate Design Code


The recent district court decision in NetChoice v. Bonta (N.D. Cal., Sept. 18, 2023) holding that the California Age-Appropriate Design Code (CAADC) likely violates the First Amendment is a ridiculously expansive interpretation of the First Amendment, one that would annihilate most regulation if applied elsewhere.  This decision is one of a new breed of opinions that I will call “First Amendment expansionism,” which turn nearly everything in the universe into a free speech issue.  The Fifth Circuit recently held that the government’s encouraging platforms to take down misinformation and harmful content was a First Amendment violation because somehow it was unduly coercive . . . as if these platforms, which are some of the most powerful organizations the world has ever seen, will lack the courage to stand their ground whenever the government says “boo.” But I digress . . .

For example, according to the court, a DPIA implicates free speech because it “requires a business to express its ideas and analysis about likely harm.” The court argues:

It therefore appears to the Court that NetChoice is likely to succeed in its argument that the DPIA provisions, which require covered businesses to identify and disclose to the government potential risks to minors and to develop a timed plan to mitigate or eliminate the identified risks, regulate the distribution of speech and therefore trigger First Amendment scrutiny.

This reasoning could apply to any requirement that a business document its policies and procedures, conduct a risk analysis, or have contracts with vendors. According to Judge Freeman, requirements to provide information about privacy practices are “requiring speech.” Requirements to estimate age “impede the ‘availability and use’ of information” and accordingly regulate speech. Under Judge Freeman’s reasoning, nearly everything a law might require can be recast as requiring speech or affecting speech. For example, data security requirements such as having policies or documenting processes would involve requiring speech. Doing a risk assessment would involve required speech. Under this reasoning, it’s hard to imagine what wouldn’t involve speech. Beyond privacy, much other regulation would implicate speech, such as required nutrition labels, product warnings, and mandatory disclosures. One could even argue that requirements to cooperate with regulators in inspections and investigations would involve speech; after all, these require that someone at a company communicate with regulators.

Continue Reading

Yale Law School Discussion of Murky Consent article


I’ll be speaking at Yale Law School on Tuesday, October 3, about my forthcoming article, Murky Consent: An Approach to the Fictions of Consent in Privacy Law. You can read the event description and add it to your calendar here.


Continue Reading