PRIVACY + SECURITY BLOG

News, Developments, and Insights


Cartoon: AI Restaurant


My latest cartoon – about the AI craze these days.

Want More Cartoons?

Subscribe to Solove’s Free Newsletter


* * * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.

Prof. Solove’s Privacy Training: 150+ Courses


 

Bankruptcy Sale of DNA Data: From Toysmart to 23andMe


A recent article in The Atlantic discusses the risk of 23andMe selling its vast stockpile of DNA data on 15 million individuals:

23andMe is not doing well. Its stock is on the verge of being delisted. It shut down its in-house drug-development unit last month, only the latest in several rounds of layoffs. Last week, the entire board of directors quit, save for Anne Wojcicki, a co-founder and the company’s CEO. Amid this downward spiral, Wojcicki has said she’ll consider selling 23andMe—which means the DNA of 23andMe’s 15 million customers would be up for sale, too.

Can anything be done to protect this DNA data in the event of a sale?

More than two decades ago, the FTC intervened in a bankruptcy sale of personal data by Toysmart, an online toy merchant that had massive quantities of children’s data. The FTC permitted Toysmart to sell its data only to a buyer operating in a similar market that agreed to abide by the same privacy policies Toysmart had in place. But the Toysmart case was a “deception” case under the FTC Act, triggered by the fact that the company had stated in its privacy notice that it would not share its customers’ personal data with third parties.

The lesson companies learned from Toysmart is to state in their privacy notices that personal data may be sold as an asset in a potential bankruptcy. This makes a deception case difficult or impossible to bring. 23andMe has done this, writing the following in its privacy notice:

If we are involved in a bankruptcy, merger, acquisition, reorganization, or sale of assets, your Personal Information may be accessed, sold or transferred as part of that transaction and this Privacy Statement will apply to your Personal Information as transferred to the new entity.

The failure of the notice-and-choice approach is about as established as the law of gravity. Nobody reads privacy notices. Meaningful consent can’t be inferred from customer inaction. The existence of a notice alone provides no indicia of consumer consent whatsoever.

Continue Reading

My Forthcoming Book, ON PRIVACY AND TECHNOLOGY, Available for Pre-Order


I am excited to announce that my forthcoming book, ON PRIVACY AND TECHNOLOGY (Oxford University Press), is now available for pre-order. It will be in print in January 2025.

From the book jacket:

Succinct and eloquent, On Privacy and Technology is an essential primer on how to face the threats to privacy in today’s age of digital technologies and AI.

With the rapid rise of new digital technologies and artificial intelligence, is privacy dead? Can anything be done to save us from a dystopian world without privacy?

In this short and accessible book, internationally renowned privacy expert Daniel J. Solove draws from a range of fields, from law to philosophy to the humanities, to illustrate the profound changes technology is wreaking upon our privacy, why they matter, and what can be done about them. Solove provides incisive examinations of key concepts in the digital sphere, including control, manipulation, harm, automation, reputation, consent, prediction, inference, and many others.

Compelling and passionate, On Privacy and Technology teems with powerful insights that will transform the way you think about privacy and technology.

Click here to pre-order the book.

Book Details:

ON PRIVACY AND TECHNOLOGY
by Daniel J. Solove
Oxford University Press (Jan. 2025)
ISBN 978-0197771686 


Continue Reading

The Limits of the CDA Section 230: Accountability for Algorithmic Decisions


The U.S. Court of Appeals for the Third Circuit just handed down a very important decision on the Communications Decency Act (CDA) Section 230 and accountability for algorithmic decisions. In Anderson v. TikTok (3d Cir. Aug. 27, 2024), the Third Circuit held that there are limits to the broad immunity under the CDA Section 230. As I’ve long argued, going back to my book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (2007) (free download here), the statute has been interpreted by courts in an overzealous way, far beyond its original intent. Finally, a court is pushing back and holding social media companies accountable for the harms they create. In particular, the concurring opinion by Judge Matey is worth reading in full; it is powerful and persuasive.

The facts of the case are tragic. Videos on TikTok encouraged viewers to engage in the “Blackout Challenge” — to choke themselves with various things until they passed out. TikTok’s algorithm recommended a Blackout Challenge video to Nylah Anderson, a 10-year-old girl, on her “For You Page.” She attempted the challenge herself and died. Anderson’s estate sued TikTok for recommending the video to Nylah, and TikTok argued that the CDA Section 230 immunized it because the video was content from another user.

The CDA Section 230, at 47 U.S.C. § 230(c)(1), states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The CDA Section 230 was written to protect online platforms from being held responsible as a “publisher or speaker” for content posted by users. But starting with Zeran v. AOL, 129 F.3d 327 (4th Cir. 1997), courts expanded and twisted the CDA Section 230 into a much broader immunity for nearly anything that happens on online platforms. The original goal of the law was to shield online platforms from speaker-level liability when they were merely acting as passive bulletin boards. But these days, online platforms are not bulletin boards. They are active manipulators of content, where algorithms act as puppeteers pulling the strings behind the scenes to influence what users see and how they interact. Platforms don’t just passively serve as places where content is posted; instead, they actively shape what viewers see with algorithms programmed to spark engagement. Content is curated and promoted.
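To make concrete what it means for an algorithm to curate and promote content, here is a deliberately simplified, hypothetical sketch of engagement-based ranking – it is not any platform’s actual code, and the class names, weights, and topics are invented purely for illustration:

```python
# A deliberately simplified, hypothetical sketch of engagement-based
# feed ranking -- not any platform's actual algorithm, just an
# illustration of how a feed reflects the platform's own choices
# rather than passive hosting.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topic: str
    engagement_rate: float  # e.g., likes and shares per view, across all users

def rank_for_user(videos: list[Video], user_interests: set[str]) -> list[Video]:
    """Order videos by predicted engagement for one particular user."""
    def score(v: Video) -> float:
        # The platform chooses these weights -- an editorial judgment
        # about what each user sees first.
        interest_boost = 1.0 if v.topic in user_interests else 0.2
        return v.engagement_rate * interest_boost
    return sorted(videos, key=score, reverse=True)

# Example: a high-engagement "challenge" video outranks everything for a
# user whose watch history suggests an interest in challenges.
feed = rank_for_user(
    [Video("a1", "cooking", 0.03), Video("b2", "challenge", 0.09)],
    user_interests={"challenge"},
)
```

Nothing in that sketch is passive hosting; the sorting and weighting are decisions the platform itself makes, which is the kind of first-party conduct at issue in Anderson.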

The Third Circuit finally had enough of the overly expansive interpretations of the CDA 230 and held: “ICSs [interactive computer services] are immunized only if they are sued for someone else’s expressive activity or content (i.e., third-party speech), but they are not immunized if they are sued for their own expressive activity or content (i.e., first-party speech).”

The court reasoned that the U.S. Supreme Court’s decision in Moody v. NetChoice, 144 S. Ct. 2383 (2024), concluded that social media platforms are engaging in speech through their content moderation decisions. As the Third Circuit explained the Supreme Court’s holding: “The Court held that a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment.”

Indeed, in recent cases, NetChoice, an organization created by social media companies to aggressively litigate against attempts to regulate them, has argued that social media companies are speaking when they are engaging in content moderation and are entitled to First Amendment protection. But then, NetChoice turns around and argues in Section 230 cases that such companies are not speaking when they are engaging in content moderation.  NetChoice wants it both ways — for social media companies to be speakers when the law protects them but not when the law holds them accountable for their speech.

The Third Circuit further stated: “Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, it follows that doing so amounts to first-party speech under § 230, too.”  Applying this conclusion to Anderson’s argument, the court held: “Accordingly, TikTok’s algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok’s own ‘expressive activity,’ and thus its first-party speech. Such first-party speech is the basis for Anderson’s claims.”

In a concurring opinion, Judge Matey indicated he would go further to push back on the overly expansive interpretations of the CDA Section 230. His opinion is powerful and eloquent, and I will provide some key quotations from it below:

Ten-year-old Nylah Anderson died after attempting to recreate the “Blackout Challenge” she watched on TikTok. The Blackout Challenge—performed in videos widely circulated on TikTok—involved individuals “chok[ing] themselves with belts, purse strings, or anything similar until passing out.” App. 31. The videos “encourage[d]” viewers to record themselves doing the same and post their videos for other TikTok users to watch. App. 31. Nylah, still in the first year of her adolescence, likely had no idea what she was doing or that following along with the images on her screen would kill her. But TikTok knew that Nylah would watch because the company’s customized algorithm placed the videos on her “For You Page” after it “determined that the Blackout Challenge was ‘tailored’ and ‘likely to be of interest’ to Nylah.” App. 31.

No one claims the videos Nylah viewed were created by TikTok; all agree they were produced and posted by other TikTok subscribers. But by the time Nylah viewed these videos, TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages].” App. 32–33. Instead, TikTok continued to recommend these videos to children like Nylah. . . .

Today, § 230 rides in to rescue corporations from virtually any claim loosely related to content posted by a third party, no matter the cause of action and whatever the provider’s actions. See, e.g., Gonzalez v. Google LLC, 2 F.4th 871, 892–98 (9th Cir. 2021), vacated, 598 U.S. 617 (2023); Force, 934 F.3d at 65–71. The result is a § 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm.

But this conception of § 230 immunity departs from the best ordinary meaning of the text and ignores the context of congressional action. Section 230 was passed to address an old problem arising in a then-unique context, not to “create a lawless no-man’s-land” of legal liability. . . .

Properly read, § 230(c)(1) says nothing about a provider’s own conduct beyond mere hosting. A conclusion confirmed by § 230(c)(2), which enumerates acts that platforms can take without worrying about liability. . .

Judge Matey’s opinion also discusses the legislative history of Section 230 extensively, and he reaches the right conclusion about the proper interpretation and scope of Section 230.  The Third Circuit majority focuses on TikTok’s responsibility for its algorithmic decisions, and Judge Matey concurs with this holding but would go further to hold that Section 230 does not immunize TikTok from distributor liability:

§ 230(c)(1)’s preemption of traditional publisher liability precludes Anderson from holding TikTok liable for the Blackout Challenge videos’ mere presence on TikTok’s platform. A conclusion Anderson’s counsel all but concedes. But § 230(c)(1) does not preempt distributor liability, so Anderson’s claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children can proceed. So too for her claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful.

I am glad that online platforms are finally being held accountable for their actions. For too long, they’ve wanted to have it both ways — to use algorithms to determine which content users are exposed to (and to be protected as “speakers” under the First Amendment) but then to be held immune because they are not “speakers” under the CDA Section 230. For too long, they’ve escaped accountability. Platforms are not passive conduits; they are far from bulletin boards. Their algorithms curate, promote, and downgrade content. These algorithms affect what content users see, and they also affect the content users create, because users create content in response to what the algorithms encourage and promote. It’s high time for companies to be held responsible for what they are doing.

For more on the CDA Section 230 and its interpretation, see my blog post: Restoring the CDA Section 230 to What It Actually Says.  I will also note my appreciation to Judge Matey for citing this post in his opinion.

Also see Danielle Citron, How to Fix Section 230 and Mary Anne Franks, Reforming Section 230 and Platform Liability.

H/T Bob Sullivan and Zephyr Teachout

Daniel J. Solove is John Marshall Harlan Research Professor of Law at George Washington University Law School. He wrote about online social media, privacy, and free speech in his book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (Yale University Press 2007). The full book is now available as a free download on SSRN. Although the main players have changed, the book is still quite relevant today. The writing was on the wall 17 years ago.   


U.S. State Privacy Laws – A Lack of Imagination


The U.S. lacks a comprehensive federal privacy law, but the states have sprung into action by passing broadly applicable consumer privacy laws. Nearly 20 states have passed such laws – so about 40% of the states now have privacy laws.

Are these laws any good?

Short answer: No

But I am glad they exist.  Well, sort of. . .

In my view, most of the state laws are rather weak, relying primarily on a rights-based approach that unfortunately doesn’t work. I’ve written extensively about how rights, consent, and the overall reliance on individual privacy self-management don’t work and can’t work.

Ultimately, while I applaud the sentiment about the states passing privacy laws, I don’t think most really move the needle on privacy.  I wouldn’t tell anyone: “Yep, your state just passed this privacy law so you can now rest easy.  Your privacy is safeguarded!  No more worries. Problem solved!”  Instead, people will get a bunch of rights they won’t use, and the law will be papering over the problem. The problem won’t improve, and eventually, people will realize they’ve been fed a placebo.

So, I like the fact that states are passing privacy laws, just like I appreciated my grandma when she gave me crap she bought on some home TV shopping show.  She meant well. The stuff was crap, but it was the thought that counts.

Many of the state privacy laws are cut-and-paste jobs. The original law was the California Consumer Privacy Act (CCPA) of 2018, later strengthened by a referendum. Perhaps a bit delirious from their giddy excitement over the CCPA, some commentators even called the law the GDPR of the U.S. But once they sobered up, people realized that the law, while adopting some GDPR elements like its definition of personal data, sensitive data, data protection impact assessments, a right to delete, a right to data portability, and a few other things, is ultimately far narrower and weaker than the GDPR. Most U.S. laws lack the lawful basis approach; they are warmed-over versions of the notice-and-choice approach, which has long been discredited. Nobody can plausibly defend notice-and-choice, but it persists anyway. The CCPA is obsessed with data transfer and fails to do much to address data use by the original collectors of the data. It relies far too heavily on privacy self-management and gives people rights that are largely empty and impractical to use at scale. Unlike the GDPR, it exempts smaller businesses (which can be quite large), and it has exceptions for publicly available data. But it is at least a law spiced up with a little GDPR dust and terminology.

Continue Reading

AI’s Fishy Branding


One can learn a lot about AI from fish. The 1990s were a terrible time for the toothfish. An ugly fish inhabiting the deep seas, the toothfish was long considered a “trash fish,” undesirable to eat, a worthless catch.

The toothfish fared fine until overfishing decimated the stocks of long-popular fish, and the fishing industry looked for something more plentiful to make its way to people’s plates.

The toothfish was discovered. It was the perfect fit for the American palate – tasteless and bland, but not fishy, and with a smooth and buttery texture. But it needed something with pizazz to take off, and as with most things these days, the magic was in the branding. The toothfish was rebranded “Chilean sea bass” – even though it isn’t a bass. And the rest is history. The fish became a popular menu item – a luxury one, a fish of distinction, a fish fit for sophisticated palates and big bucks. This happy story for the fishing industry is an unhappy one for the toothfish. It’s now overfished, and its numbers are dwindling.

The rebranding story is similar for other fish. The slimehead was rebranded as orange roughy. The goosefish became monkfish. The whore’s egg was rebranded as sea urchin. And the hogfish became king mackerel.


AI as Branding

 The story of the toothfish reminds me a lot of the story of AI.  Technologies that have long been around have been rebranded as “AI” even though they technically are not AI.  And the rebrand has led to wild success.  Now, nearly everything with an algorithm is called “AI.” Mention the word “AI” and ears perk up, reporters swarm like insects to the light, and money pours in.

Eric Siegel proclaims that “A.I. is a big fat lie.” AI is “a hyped-up buzzword that confuses and deceives. . . . AI is nothing but a brand.” AI is not intelligent or sentient; what is referred to as AI today primarily involves a technology called machine learning, which hearkens all the way back to the 1940s.

Avoiding AI Exceptionalism:
AI Isn’t a Radical New Invention

As I’ve written in my article, AI and Privacy, it is of critical importance that we avoid falling into what I call “AI exceptionalism” – treating AI as if it were a big break from the past. Instead, AI is a continuation of the past, a set of old technologies that have evolved and finally attracted the spotlight.

Why does it matter whether we see AI as old or new? In my field of privacy law, it matters because if policymakers see AI as totally new, they might neglect revisiting old privacy laws. Policymakers might view AI as so new and different that they’ll leave existing privacy laws behind. As I wrote in the article:

Overall, AI is not an unexpected upheaval for privacy; it is, in many ways, the future that has long been predicted. But AI starkly exposes the longstanding shortcomings, infirmities, and wrong approaches of existing privacy laws.

AI’s rebrand brings both good and bad results. Fears of AI becoming sentient and killing us all might motivate policymakers to act, but these fears can distract from real problems occurring right now.

Turning back to fish rebranding, there are at least two important lessons to be learned for AI.

The Perils of Popularity

First, once something becomes hot, the result is a craze, and it is desired in undesirable ways. When a fish becomes famous, it is overfished and risks extinction. Fame isn’t a friend to fish.  Rarely do we embrace something in a balanced way – people either love it too much or too little.

AI is being embraced too hastily, in clunky and ill-fitting ways. People are rushing to sell AI to do anything and use AI to do anything, even when these AI tools are not optimized for those tasks. AI these days is like a hammer, and people are trying to use it as a saw or screwdriver. They call these screwups “hallucinations,” but these errors really come from trying to make AI do things it isn’t designed to do. Generative AI works best to generate content based on popularity, not authority, so it will make up details and sources.

How We Perceive AI Affects Its Power

Second, the power and spread of technology are not just about the technology itself. Some proponents of technology think that technology is itself the main engine for its proliferation and use, but we must not forget the importance of the framing and narratives around technology, which have a tremendous impact on how technology is used, how it becomes popular, and how it is integrated into society. Again, consider fish. As an article in the Washington Post observes, “Today’s seafood is often yesterday’s trash fish and monsters.” Lobster evolved from a food fit for lower classes into an expensive luxury item.

Reframe something with a fancy name, and suddenly it goes from undesirable to indispensable. The power that framing has over human perception, desire, and demand is astounding. As Alex Mayyasi writes:

[T]he line between bycatch and fancy seafood is not a great wall defended by the impregnability of taste, but a porous border susceptible to the effects of supply and demand, technology, and fickle trends. This is true of formerly low-class seafood like oysters and, most of all, the once humble lobster.

The story of technology is one told and shaped by people and institutions, who have incentives and intentions.  The power of AI emerges significantly from the way we perceive it.

* * * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 180 courses. 

Subscribe to Professor Solove’s Free Newsletter

 


 

EU AI Act Training


 

 

The Great Scrape: The Clash Between Scraping and Privacy


I’m posting a new article draft with Professor Woodrow Hartzog (BU Law), The Great Scrape: The Clash Between Scraping and Privacy. We argue that “scraping” – the automated extraction of large amounts of data from the internet – is in fundamental tension with privacy. Scraping is generally anathema to the core principles of privacy that form the backbone of most privacy laws, frameworks, and codes.

You can download the article for free on SSRN.


Here’s the abstract:

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.

Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law.  Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.

Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.

This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation.
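For readers who want a concrete picture of the practice, here is a minimal, hypothetical sketch of what scraping personal data can look like in code – the URL, page structure, and field names are invented for illustration and are not taken from any actual scraper:

```python
# A minimal, hypothetical sketch of scraping: fetching public pages in
# bulk and extracting personal data without notice to or consent from
# the people the data is about. The URL and HTML structure are invented.
import requests
from bs4 import BeautifulSoup

def scrape_profiles(base_url: str, num_pages: int) -> list[dict]:
    """Collect names and locations from a fictional public directory."""
    people = []
    for page in range(1, num_pages + 1):
        resp = requests.get(f"{base_url}/directory?page={page}", timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        for card in soup.select("div.profile-card"):
            people.append({
                "name": card.select_one("span.name").get_text(strip=True),
                "location": card.select_one("span.location").get_text(strip=True),
            })
    return people

# A single run can quietly sweep up data about thousands of people:
# records = scrape_profiles("https://directory.example.com", num_pages=500)
```

Nothing in that loop asks for consent, states a purpose, or minimizes what is collected – which is precisely the clash with privacy principles that the article describes.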

Click the button to download the essay draft for free.


Continue Reading

Kafka in the Age of AI and the Futility of Privacy as Control


I’m posting the final published version of my essay with Professor Woodrow Hartzog (BU Law), Kafka in the Age of AI and the Futility of Privacy as Control, 104 B.U. L. Rev. 1021 (2024). It’s a short engaging read – just 20 pages!  We argue that although Kafka shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence.

You can download the article for free on SSRN.


Scroll down for some excerpts from our PowerPoint presentation for this essay – images created by AI!

Continue Reading

Against Privacy Essentialism – My Response to Angel and Calo’s Critique of My Theory of Privacy


What is “privacy”? The concept has long eluded definition, but I developed a theory for understanding privacy about 20 years ago. Maria Angel and Ryan Calo recently published a formidable critique of my theory of privacy:

Maria P. Angel and Ryan Calo, Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy, 124 Colum. L. Rev. 507 (2024)

Their arguments are thoughtful and worth reckoning with, and I have just written a lengthy response, which I have entitled Against Privacy Essentialism. As I wrote in the response:

At the outset, I want to note that I welcome Angel and Calo’s critique. I am thrilled that they have so thoroughly engaged with my work, and I see their essay as an invitation to revisit the issue about how to conceptualize privacy. I thus write this essay with gratitude at having this opportunity to put my theory to the test, nearly twenty years after I started developing it, and seeing how it holds up today. Along the way, I will address other critiques of my pluralistic taxonomic approach by Professors Jeffrey Bellin, Eric Goldman, and David Pozen.

Here’s the abstract:

In this essay, Daniel Solove responds to Maria Angel and Ryan Calo’s critique of his theory of privacy. In their article, Distinguishing Privacy Law: A Critique of Privacy as Social Taxonomy, 124 Columbia Law Review 507 (2024), Angel and Calo note that although “Solove’s taxonomic approach appears to have exerted an extraordinary influence on the shape and scope of contemporary privacy law scholarship,” this approach “fails to provide a useful framework for determining what constitutes a privacy problem and, as a consequence, has begun to disserve the community.”

Solove argues that Angel and Calo wrongly view Solove’s conception of privacy as boundaryless and arbitrary, want the term “privacy” to do work it is not capable of, fail to show how the approach leads to bad consequences, and propose alternative approaches to conceptualizing privacy that fare worse on their own grounds of critique.

For the most part, Angel and Calo and several other critics of his theory view privacy in an essentialist manner. Their privacy essentialism involves their commitment to understanding privacy as having clear boundaries to demarcate it from other concepts and a definitive definition with a proper authoritative foundation.  

In this essay, Solove argues against privacy essentialism. This way of thinking unproductively narrows thought, creates silos, leads to the overly narrow or overly broad failed attempts at conceptualizing privacy, stunts the development of the field, and results in constricted and impoverished policymaking.  Solove argues that privacy essentialism leads to a dead end, and it merely provides the illusion of certainty and clarity.

For those of you interested in reading about my theory of privacy, it is developed in several works. The book is the most recent and complete version of the theory, and it incorporates, updates, and adds to the articles:

Continue Reading