PRIVACY + SECURITY BLOG

News, Developments, and Insights


Webinar – Trust: What CEOs and Boards Must Know About Privacy and AI


In case you missed my recent webinar with Dominique Shelton Leipzig (Mayer Brown), you can watch the replay here. We had a great discussion about why privacy is an issue that the C-Suite and Board must address. Dominique is the author of a new book on this topic, Trust.: Responsible AI, Innovation, Privacy and Data Leadership.


Continue Reading

Cartoon: AI Bias


Here’s a new cartoon on AI bias and the magical thinking that AI is unbiased because technology is neutral. Bias comes from the data that algorithms use, and it often pollutes the output. I discuss the issue in some of my recent work, including:

There are many other terrific works that delve deeply into this issue. A few scholars from whose work I have learned greatly include Ifeoma Ajunwa, Jessica Eaglin, Sandra Mayson, Dan Burk, Safiya Noble, Solon Barocas, Andrew Selbst, Anupam Chander, Sonja Starr, Ngozi Okidegbe, Andrew Guthrie Ferguson, Talia Gillis, Elizabeth Joh, Pauline Kim, Margot Kaminski, Kate Crawford, Aziz Huq, Oscar Gandy, and Meg Leta Jones. There are many others. So much excellent work is being written. I hope policymakers look at this scholarship because it is both insightful and quite practical.

Continue Reading

Artificial Intelligence and Privacy


I’m delighted to post my new article draft, Artificial Intelligence and Privacy. The article aims to provide the conceptual and practical groundwork for understanding the relationship between AI and privacy, as well as a roadmap for how privacy law should regulate AI.

Here’s the abstract:

This Article aims to establish a foundational understanding of the intersection between artificial intelligence (AI) and privacy, outlining the current problems AI poses to privacy and suggesting potential directions for the law’s evolution in this area. Thus far, few commentators have explored the overall landscape of how AI and privacy interrelate. This Article seeks to map this territory.

Some commentators question whether privacy law is appropriate for addressing AI. In this Article, I contend that although existing privacy law falls far short of addressing the privacy problems with AI, privacy law properly conceptualized and constituted would go a long way toward addressing them.

Privacy problems emerge with AI’s inputs and outputs. These privacy problems are often not new; they are variations of longstanding privacy problems. But AI remixes existing privacy problems in complex and unique ways. Some problems are blended together in ways that challenge existing regulatory frameworks. In many instances, AI exacerbates existing problems, often threatening to take them to unprecedented levels.

Overall, AI is not an unexpected upheaval for privacy; it is, in many ways, the future that has long been predicted. But AI glaringly exposes the longstanding shortcomings, infirmities, and wrong approaches of existing privacy laws.

Ultimately, whether through patches to old laws or as part of new laws, many issues must be resolved to address the privacy problems that AI is creating. In this Article, I provide a roadmap to the key issues that the law must tackle and guidance about the approaches that can work and those that will fail.

You can download my article for free on SSRN here:

Continue Reading

Data Is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data


I’m delighted to share the final published version of my article, Data Is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data, 118 Nw. U. L. Rev. 1081 (2024).

This article was selected for the Future of Privacy Forum’s Privacy Papers for Policymakers Award. The Award aims to “recognize leading U.S. and international privacy scholarship that is relevant to policymakers in the U.S. Congress, federal agencies, and international data protection authorities.”

You can download my article for free here:

Here’s the abstract:

Heightened protection for sensitive data is becoming quite trendy in privacy laws around the world. Originating in European Union (EU) data protection law and included in the EU’s General Data Protection Regulation, sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories of sensitive data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, and biometric and genetic data.

Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use nonsensitive data as a proxy for certain types of sensitive data.

Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. With Big Data and powerful machine learning algorithms, most nonsensitive data give rise to inferences about sensitive data. In many privacy laws, data giving rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.

This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive as well as expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake—they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does.

To be effective, privacy law must focus on harm and risk rather than on the nature of personal data. The implications of this point extend far beyond sensitive data provisions. In many elements of privacy laws, protections should be proportionate to the harm and risk involved with the data collection, use, and transfer.


Continue Reading

Cartoon: Internet Wolves


My new cartoon — a play on the famous New Yorker cartoon: “On the Internet, nobody knows you’re a dog.”

* * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. 

Professor Solove’s Privacy Cartoon Collection

More than 100 cartoons!



Artificial Intelligence Training Course

A basic introduction to AI for the workforce, covering ethical principles and general guidance for complying with developing legal regulation.


Webinar – Privacy Law and the First Amendment

In case you missed my webinar on Privacy Law and the First Amendment, you can watch the replay here. I had a great discussion with Gautam Hans (Cornell Law) about several recent First Amendment cases that intersect with privacy law — the NetChoice cases.


Also, if you’re interested, I wrote a blog post about the CAADC case, NetChoice v. Bonta:

First Amendment Expansionism and California’s Age-Appropriate Design Code

Continue Reading

Kafka in the Age of AI and the Futility of Privacy as Control


I’m very pleased to post a draft of my forthcoming essay with Professor Woodrow Hartzog (BU Law), Kafka in the Age of AI and the Futility of Privacy as Control, 104 B.U. L. Rev. (forthcoming 2024). It’s a short, engaging read – just 20 pages! We argue that although Kafka shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence.

You can download the article for free on SSRN. We welcome feedback.


Scroll down for some excerpts from our PowerPoint presentation for this essay – images created by AI!

Continue Reading