
First Amendment Expansionism

The recent district court decision in NetChoice v. Bonta (N.D. Cal. Sept. 18, 2023), holding that the California Age-Appropriate Design Code (CAADC) likely violates the First Amendment, rests on a ridiculously expansive interpretation of the First Amendment, one that would annihilate most regulation if applied elsewhere. The decision is one of a new breed of opinions reflecting what I will call “First Amendment expansionism,” which turns nearly everything in the universe into a free speech issue. The Fifth Circuit, for example, recently held that the government’s encouraging platforms to take down misinformation and harmful content was a First Amendment violation because somehow it was unduly coercive . . . as if these platforms, which are some of the most powerful organizations the world has ever seen, would lack the courage to stand their ground whenever the government says “boo.” But I digress . . .

For example, according to the court, a DPIA (Data Protection Impact Assessment) implicates free speech because it “requires a business to express its ideas and analysis about likely harm.” The court argues:

It therefore appears to the Court that NetChoice is likely to succeed in its argument that the DPIA provisions, which require covered businesses to identify and disclose to the government potential risks to minors and to develop a timed plan to mitigate or eliminate the identified risks, regulate the distribution of speech and therefore trigger First Amendment scrutiny.

This reasoning could apply to any requirement that a business document its policies and procedures, conduct a risk analysis, or have contracts with vendors. According to Judge Freeman, requirements to provide information about privacy practices are “requiring speech.” Requirements to estimate age are said to “impede the ‘availability and use’ of information and accordingly to regulate speech.” On Judge Freeman’s reasoning, nearly everything a law might require can be recast as requiring speech or affecting speech. Data security requirements, such as having policies or documenting processes, would involve requiring speech. Doing a risk assessment would involve required speech. Under this reasoning, it’s hard to imagine what wouldn’t involve speech. Beyond privacy, much other regulation would implicate speech, such as required nutrition labels, product warnings, and mandatory disclosures. One could even argue that requirements to cooperate with regulators in inspections and investigations involve speech — after all, they require that someone at a company communicate with regulators.

The court repeatedly invokes the U.S. Supreme Court case Sorrell v. IMS Health Inc., 564 U.S. 552, 567 (2011). But that case merely faulted a law for specifically targeting speech about a particular product (pharmaceuticals), and the U.S. Supreme Court invoked HIPAA as a counterexample: a regulation that does not single out particular speakers or viewpoints and thus would not violate the First Amendment. HIPAA contains countless restrictions on the use and disclosure of information and requirements for risk analysis, documentation, policies, training, vendor agreements, privacy notices, and more. All of these things would implicate speech under Judge Freeman’s expansionist conception of the First Amendment.

On the CAADC’s age estimation provisions, I think there is more of a reasonable First Amendment argument, though it is quite speculative.  Judge Freeman concludes that “the steps a business would need to take to sufficiently estimate the age of child users would likely prevent both children and adults from accessing certain content.” Maybe. But maybe not. The issue depends upon how different technologies work and their availability, cost, and feasibility. It is an empirical question whether age estimation would likely prevent access.  But yes, age estimation could implicate the First Amendment.

The CAADC is not a paradigm example of a well-drafted law. The court faults it for a lack of clear standards, and the court is right that the law could be clearer. For example:

The last of the three prohibitions of CAADCA § 31(b)(7) concerns the use of dark patterns to “take any action that the business knows, or has reason to know, is materially detrimental” to a child’s well-being. The State here argues that dark patterns cause harm to children’s well-being, such as when a child recovering from an eating disorder “must both contend with dark patterns that make it difficult to unsubscribe from such content and attempt to reconfigure their data settings in the hope of preventing unsolicited content of the same nature.” The Court is troubled by the “has reason to know” language in the Act, given the lack of objective standard regarding what content is materially detrimental to a child’s well-being. And some content that might be considered harmful to one child may be neutral at worst to another. NetChoice has provided evidence that in the face of such uncertainties about the statute’s requirements, the statute may cause covered businesses to deny children access to their platforms or content. Given the other infirmities of the provision, the Court declines to wordsmith it and excise various clauses, and accordingly finds that NetChoice is likely to succeed in showing that the provision as a whole fails commercial speech scrutiny. (citations omitted)

Certainly, the law could be better drafted. But the court is viewing harm too narrowly — there is a harm in children being manipulated by dark patterns even if it doesn’t lead to dire outcomes. Being manipulated is in itself a harm. See my article with Danielle Citron, Privacy Harms, 102 B.U. L. Rev. 793 (2022).

The court’s First Amendment expansionism is troubling. The First Amendment is not an everything bagel. First and foremost, the First Amendment is about promoting freedom of thought, belief, and speech for individual flourishing and democracy. But increasingly, the First Amendment is being turned into a cudgel for corporations to fend off regulation, thwart accountability, evade consumer protection, and maximize profits.

* * * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.

NEWSLETTER: Subscribe to Professor Solove’s free newsletter
TWITTER: Follow Professor Solove on Twitter.
