News, Developments, and Insights


AI’s Fishy Branding


One can learn a lot about AI from fish. The 1990s were a terrible time for the toothfish. An ugly fish inhabiting the deep seas, the toothfish (pictured above) was long considered a “trash fish,” undesirable to eat, a worthless catch.

The toothfish was left alone until overfishing decimated the stocks of long-popular fish, and the fishing industry went looking for something more plentiful to put on people’s plates.

The toothfish was discovered. It was a perfect fit for the American palate – mild and bland, not fishy, with a smooth and buttery texture. But it needed some pizzazz to take off, and as with most things these days, the magic was in the branding. The toothfish was rebranded as “Chilean sea bass” – even though it isn’t a bass.  And the rest is history.  The fish became a popular menu item – a luxury one, a fish of distinction, a fish fit for sophisticated palates and big bucks. This happy story for the fishing industry is an unhappy one for the toothfish.  It’s now overfished, and its numbers are dwindling.

The rebranding story is similar for other fish.  The slimehead was rebranded as orange roughy.  The goosefish became monkfish. The whore’s egg was rebranded as sea urchin. And the hogfish became king mackerel.



AI as Branding

The story of the toothfish reminds me a lot of the story of AI.  Technologies that have long been around have been rebranded as “AI” even though they technically are not AI.  And the rebrand has led to wild success.  Now, nearly everything with an algorithm is called “AI.” Mention the word “AI” and ears perk up, reporters swarm like insects to a light, and money pours in.

Eric Siegel proclaims that “A.I. is a big fat lie.” AI is “a hyped-up buzzword that confuses and deceives. . . . AI is nothing but a brand.” AI is not intelligent or sentient; what is referred to as AI today primarily involves a technology called machine learning, which hearkens all the way back to the 1940s.

Avoiding AI Exceptionalism: AI Isn’t a Radical New Invention

As I’ve written in my article, AI and Privacy, it is of critical importance that we avoid falling into what I call “AI exceptionalism” – treating AI as if it were a big break from the past. Instead, AI is a continuation of the past, a set of old technologies that have evolved and finally attracted the spotlight.

Why does it matter whether we see AI as old or new?  In my field of privacy law, it matters because if policymakers see AI as totally new, they might neglect to revisit old privacy laws. They might view AI as so new and different that they’ll leave existing privacy laws behind.  As I wrote in the article:

Overall, AI is not an unexpected upheaval for privacy; it is, in many ways, the future that has long been predicted. But AI starkly exposes the longstanding shortcomings, infirmities, and wrong approaches of existing privacy laws.

AI’s rebrand brings both good and bad results. Fears of AI becoming sentient and killing us all might motivate policymakers to act, but these fears can distract from real problems occurring right now.

Turning back to fish rebranding, there are at least two important lessons to be learned for AI.

The Perils of Popularity

First, once something becomes hot, the result is a craze, and it is desired in undesirable ways. When a fish becomes famous, it is overfished and risks extinction. Fame isn’t a friend to fish.  Rarely do we embrace something in a balanced way – people either love it too much or too little.

AI is being embraced too hastily, in clunky and ill-fitting ways. People are rushing to sell AI for everything and to use AI for everything, even when these AI tools are not optimized for the task. AI these days is like a hammer that people are trying to use as a saw or a screwdriver. They call the resulting screwups “hallucinations,” but these errors really come from making AI do things it isn’t designed to do. Generative AI generates content based on popularity, not authority, so it will make up details and sources.

How We Perceive AI Affects Its Power

Second, the power and spread of a technology are not just about the technology itself. Some proponents of technology think that technology is itself the main engine of its proliferation and use, but we must not forget the importance of the framing and narratives around technology, which have a tremendous impact on how a technology is used, how it becomes popular, and how it is integrated into society.  Again, consider fish. As an article in the Washington Post observes, “Today’s seafood is often yesterday’s trash fish and monsters.” Lobster evolved from a food fit for the lower classes into an expensive luxury item.

Reframe something with a fancy name, and suddenly it goes from undesirable to indispensable. It is astounding the power that framing has on human perception, desire, and demand. As Alex Mayyasi writes:

[T]he line between bycatch and fancy seafood is not a great wall defended by the impregnability of taste, but a porous border susceptible to the effects of supply and demand, technology, and fickle trends. This is true of formerly low-class seafood like oysters and, most of all, the once humble lobster.

The story of technology is one told and shaped by people and institutions, who have incentives and intentions.  The power of AI emerges significantly from the way we perceive it.

* * * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 180 courses. 

Subscribe to Professor Solove’s Free Newsletter




EU AI Act Training




The Great Scrape: The Clash Between Scraping and Privacy


I’m posting a new article draft with Professor Woodrow Hartzog (BU Law), The Great Scrape: The Clash Between Scraping and Privacy. We argue that “scraping” – the automated extraction of large amounts of data from the internet – is in fundamental tension with privacy. Scraping is generally anathema to the core principles of privacy that form the backbone of most privacy laws, frameworks, and codes.

You can download the article for free on SSRN.


Here’s the abstract:

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.

Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law.  Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.

Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.

This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation.

Click the button to download the essay draft for free.



Kafka in the Age of AI and the Futility of Privacy as Control


I’m posting the final published version of my essay with Professor Woodrow Hartzog (BU Law), Kafka in the Age of AI and the Futility of Privacy as Control, 104 B.U. L. Rev. 1021 (2024). It’s a short engaging read – just 20 pages!  We argue that although Kafka shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence.

You can download the article for free on SSRN.


Scroll down for some excerpts from our PowerPoint presentation for this essay – images created by AI!


Against Privacy Essentialism – My Response to Angel and Calo’s Critique of My Theory of Privacy


What is “privacy”? The concept has proven elusive to define, but I developed a theory for understanding privacy about 20 years ago. Maria Angel and Ryan Calo recently published a formidable critique of my theory:

Maria P. Angel and Ryan Calo, Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy, 124 Colum. L. Rev. 507 (2024)

Their arguments are thoughtful and worth reckoning with, and I have just written a lengthy response, which I have entitled Against Privacy Essentialism. As I wrote in the response:

At the outset, I want to note that I welcome Angel and Calo’s critique. I am thrilled that they have so thoroughly engaged with my work, and I see their essay as an invitation to revisit the issue about how to conceptualize privacy. I thus write this essay with gratitude at having this opportunity to put my theory to the test, nearly twenty years after I started developing it, and seeing how it holds up today. Along the way, I will address other critiques of my pluralistic taxonomic approach by Professors Jeffrey Bellin, Eric Goldman, and David Pozen.

Here’s the abstract:

In this essay, Daniel Solove responds to Maria Angel and Ryan Calo’s critique of his theory of privacy. In their article, Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy, 124 Columbia Law Review 507 (2024), Angel and Calo note that although “Solove’s taxonomic approach appears to have exerted an extraordinary influence on the shape and scope of contemporary privacy law scholarship,” this approach “fails to provide a useful framework for determining what constitutes a privacy problem and, as a consequence, has begun to disserve the community.”

Solove argues that Angel and Calo wrongly view Solove’s conception of privacy as boundaryless and arbitrary, want the term “privacy” to do work it is not capable of, fail to show how the approach leads to bad consequences, and propose alternative approaches to conceptualizing privacy that fare worse on their own grounds of critique.

For the most part, Angel and Calo and several other critics of his theory view privacy in an essentialist manner. Their privacy essentialism involves their commitment to understanding privacy as having clear boundaries to demarcate it from other concepts and a definitive definition with a proper authoritative foundation.  

In this essay, Solove argues against privacy essentialism. This way of thinking unproductively narrows thought, creates silos, leads to the overly narrow or overly broad failed attempts at conceptualizing privacy, stunts the development of the field, and results in constricted and impoverished policymaking.  Solove argues that privacy essentialism leads to a dead end, and it merely provides the illusion of certainty and clarity.

For those of you interested in reading about my theory of privacy, it is developed in several works. The book is the most recent and complete version of the theory, and it incorporates, updates, and adds to the articles:


Murky Consent: An Approach to the Fictions of Consent in Privacy Law – FINAL VERSION


I’m delighted to share the newly published final version of my article:

Murky Consent: An Approach to the Fictions of Consent in Privacy Law
104 B.U. L. Rev. 593 (2024)

I’ve been pondering privacy consent for more than a decade, and I think I finally made a breakthrough with this article.  I welcome feedback and hope you enjoy the piece.

Mini Abstract:

In this Article I argue that most of the time, privacy consent is fictitious. Instead of futile efforts to try to turn privacy consent from fiction to fact, the better approach is to lean into the fictions. The law can’t stop privacy consent from being a fairy tale, but the law can ensure that the story ends well. I argue that privacy consent should confer less legitimacy and power and that it be backstopped by a set of duties on organizations that process personal data based on consent.

Also check out Prof. Stacy-Ann Elvy’s insightful response piece, Privacy Law’s Consent Conundrum. Additionally, here’s a video of me presenting the article.

Full Abstract:

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.


A Regulatory Roadmap to AI and Privacy

Over at IAPP News, I wrote a short essay called A Regulatory Roadmap to AI and Privacy.  It summarizes my longer article, Artificial Intelligence and Privacy. For those who want the short 2,000-word version of my thoughts on AI and privacy, read the short essay. For those who want more detail, read the full article.

I created an infographic to capture the issues, but I couldn’t include it in the IAPP piece, so I’ll include it here (see above).

For easier reading and printing, here’s a PDF version of the short essay with the infographic included. Or you can check out my essay at IAPP. The long article is here.

From the short essay:

Although new AI laws can help, AI is making it glaringly clear that a privacy law rethink is long overdue. . . .

Understanding the privacy challenges posed by AI is essential. A comprehensive overview is necessary to evaluate the effectiveness of current laws, identify their limitations and decide what modifications or new measures are required for adequate regulation.



Webinar – Another Privacy Bill on Capitol Hill: The American Privacy Rights Act Blog

In case you missed my recent webinar with Laura Riposo VanDruff and Jules Polonetsky, you can watch the replay here.   We discussed the strengths and weaknesses of the American Privacy Rights Act (APRA) and its likelihood of passing.



AI, Algorithms, and Awful Humans – Final Published Version


I am pleased to share the final published version of my short essay with Yuki Matsumi. It was written for a symposium in Fordham Law Review.

AI, Algorithms, and Awful Humans
92 Fordham L. Rev. 1923 (2024)

Mini Abstract:

This Essay critiques arguments that algorithmic decision-making is better than human decision-making. Two arguments are often advanced to justify the increasing use of algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. We argue that such contentions are far too optimistic and fail to appreciate the shortcomings of machine decisions and the difficulties in combining human and machine decision-making. Automated decisions often rely too much on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Human and machine decision-making often do not mix well. Humans often perform badly when reviewing algorithmic output.

Download the piece for free here.


* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy and data security training.

NEWSLETTER: Subscribe to Professor Solove’s free newsletter

Prof. Solove’s Privacy Training: 150+ Courses




HOT OFF THE PRESS!  Privacy Law Fundamentals, Seventh Edition (2024).  This is my short guide to privacy law with Professor Paul Schwartz (Berkeley Law).

Believe it or not, there have been some new developments in privacy law . . .

“This book is an indispensable guide for privacy and data protection practitioners, students, and scholars. You will find yourself consulting it regularly, as I do. It is a must for your bookshelf” – Danielle Citron, University of Virginia Law School

“Two giants of privacy scholarship succeed in distilling their legal expertise into an essential guide for a broad range of the legal community. Whether used to learn the basics or for quick reference, Privacy Law Fundamentals proves to be concise and authoritative.” – Jules Polonetsky, Future of Privacy Forum


If you’re interested in the digital edition, click here.


