PRIVACY + SECURITY BLOG

News, Developments, and Insights


Personal and Sensitive Data

NOTE: This post was originally part of my LinkedIn newsletter, Privacy+Tech Insights, which is separate from my weekly newsletter. My LinkedIn newsletters come out less frequently and typically offer a more focused analysis of a particular issue.

A quiet revolution has been going on with personal and sensitive data. There have been many notable developments. In the past few years, we’ve witnessed the triumph of the EU approach to defining personal data and to designating special protections for sensitive data.

We’ve seen a growing recognition in the law that:

  • the overwhelming modern consensus in privacy law is to define personal data as identified or identifiable data
  • new laws (post-GDPR) are now overwhelmingly recognizing sensitive data, even in the U.S.
  • various pieces of non-personal data can, in combination, be identifiable
  • the ability to make inferences about data can’t be ignored
  • non-sensitive data that gives rise to inferences about sensitive data counts as sensitive data

These are significant developments, yet oddly, they haven’t made headline news.

Continue Reading

AI, Algorithms, and Awful Humans


I’m very excited to post my new short draft essay with Hideyuki (“Yuki”) Matsumi (Vrije Universiteit Brussel). The essay, which is a quick read (just 19 pages), is entitled AI, Algorithms, and Awful Humans, forthcoming 92 Fordham Law Review (2024). It will be part of a Fordham Law Review symposium, The New AI: The Legal and Ethical Implications of ChatGPT and Other Emerging Technologies (Nov. 3, 2023).

The essay argues that various arguments about human versus machine decision-making fail to account for several important considerations regarding how humans and machines decide. You can download the article for free on SSRN. We welcome feedback.


Here’s the abstract:

A profound shift is occurring in the way many decisions are made, with machines taking greater roles in the decision-making process. Two arguments are often advanced to justify the increasing use of automation and algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. These arguments exert a powerful influence on law and policy.

In this Essay, we contend that in the context of making decisions about humans, these arguments are far too optimistic. We argue that machine and human decision-making are not readily compatible, making the integration of human and machine decision-making extremely complicated.

It is wrong to view machines as deciding like humans do, but better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions have a moral or value judgment or involve human lives and behavior. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make.

Automated decisions often rely too much on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Human and machine decision-making often don’t mix well. Humans often perform badly when reviewing algorithmic output.

We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.

You can download the essay draft for free on SSRN.


* * * *

Continue Reading

Cartoon: Tech Companies, Innovation, and Regulation


Here’s my new cartoon about how many tech companies extol innovation yet seem to lose that innovative spirit when it comes to regulation. With the right incentives, it’s amazing how tech companies can rise to the challenge. They can certainly innovate to address regulatory demands; instead, they often send in lobbyists to pout, complain, or block laws. It would be better for companies to innovate for regulation rather than fight it. Policymakers might use some carrots rather than just sticks; positive incentives can help steer tech companies to address regulatory concerns.

Continue Reading

Cartoon: AI Apocalypse


Here’s a new cartoon on AI. On AI turning against us and killing us all, I have a prediction – and it’s both good and bad. The good: I don’t think AI will decide to kill us all. The bad: We will be the ones to decide. We’ll replace ourselves with machine parts and code until nothing human remains . . . that is, of course, if we don’t destroy our planet first.

Continue Reading

Webinar – GDPR Enforcement: A Conversation with Max Schrems

In case you missed my discussion with Max Schrems, you can watch the replay here. We discussed cross-border data transfers, litigation challenges and strategies, and potential reforms of the GDPR enforcement process.


Continue Reading

First Amendment Expansionism and California’s Age-Appropriate Design Code


The recent district court decision in NetChoice v. Bonta (N.D. Cal., Sept. 18, 2023), holding that the California Age-Appropriate Design Code (CAADC) likely violates the First Amendment, rests on a ridiculously expansive interpretation of the First Amendment, one that would annihilate most regulation if applied elsewhere. This decision is part of a new breed of opinions exhibiting what I will call “First Amendment expansionism,” which turns nearly everything in the universe into a free speech issue. The Fifth Circuit recently held that the government’s encouraging platforms to take down misinformation and harmful content was a First Amendment violation because somehow it was unduly coercive . . . as if these platforms, which are some of the most powerful organizations the world has ever seen, lack the courage to stand their ground whenever the government says “boo.” But I digress . . .

For example, according to the court, a data protection impact assessment (DPIA) implicates free speech because it “requires a business to express its ideas and analysis about likely harm.” The court argues:

It therefore appears to the Court that NetChoice is likely to succeed in its argument that the DPIA provisions, which require covered businesses to identify and disclose to the government potential risks to minors and to develop a timed plan to mitigate or eliminate the identified risks, regulate the distribution of speech and therefore trigger First Amendment scrutiny.

This reasoning could apply to any requirement that a business document its policies and procedures, conduct risk analysis, or have contracts with vendors. According to Judge Freeman, requirements to provide information about privacy practices are “requiring speech,” and requirements to estimate age “impede the ‘availability and use’ of information” and accordingly regulate speech. Under Judge Freeman’s reasoning, nearly anything the law might require can be recast as requiring speech or affecting speech. For example, data security requirements such as having policies or documenting processes would involve requiring speech. Doing a risk assessment would involve required speech. Under this reasoning, it’s hard to imagine what wouldn’t involve speech. Beyond privacy, much other regulation would implicate speech, such as required nutrition labels, product warnings, and mandatory disclosures. One could argue that requirements to cooperate with regulators for inspections and investigations would involve speech — after all, these require that someone at a company communicate with regulators.

Continue Reading

Yale Law School Discussion of Murky Consent Article


I’ll be speaking at Yale University on Tuesday, October 3, about my forthcoming article, Murky Consent: An Approach to the Fictions of Consent in Privacy Law. You can read the event description and add it to your calendar here.


Continue Reading

My Speech at EUROPOL on the Nothing to Hide Argument

On September 19, 2023, I am speaking at the European Union Agency for Law Enforcement Cooperation (EUROPOL) event, Whispers of Contrast (Madrid, Spain). My talk, “Nothing to Hide – Nothing to Fear?,” will be based on my book, Nothing to Hide: The False Tradeoff Between Privacy and Security. You can buy the book on Amazon, or download the complete electronic version for free on SSRN.

Continue Reading

Webinar – Facial Recognition and the Dubious Side of AI

In case you missed my interview with New York Times reporter Kashmir Hill, you can watch the replay here. We discussed her new book, Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It (Sept. 19, 2023).



Continue Reading