I posted a draft of my new article, Murky Consent: An Approach to the Fictions of Consent in Privacy Law. It is just a draft, and I welcome feedback.
You can download it for free here:
Here’s the abstract:
Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. Regarding privacy, consent authorizes and legitimizes a wide range of data collection and processing.
There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates, where organizations post a notice of their privacy practices and then people are deemed to have consented if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.
Both approaches fail. Under the notice-and-choice approach, evidence of actual consent is non-existent. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach suffers from these problems as well – people are ill-equipped to make decisions about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent is also highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.
In this Article, I contend that in most circumstances, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.
Abandoning consent entirely in most privacy situations would mean the government making most decisions regarding personal data. But this approach would be problematic: it would require extensive government control and micromanaging, and it would curtail people’s autonomy. The law should allow space for people’s autonomy over their decisions, even when those decisions are deeply flawed. The law should thus strive to reach a middle ground, providing a sandbox for free play but with strong guardrails to protect against harms.
Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Instead of providing extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. This would allow for a degree of individual autonomy but with powerful guardrails to limit exploitative and harmful behavior by the organizations collecting and using personal data. In the Article, I propose some key guardrails to use with murky consent.
Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.