PRIVACY + SECURITY BLOG

News, Developments, and Insights

Professor Neil Richards has published a new book, Why Privacy Matters (Oxford University Press 2021), and it’s the perfect holiday gift for anyone interested in privacy.

Neil Richards is one of the world’s leading privacy experts. He holds the Koch Distinguished Chair in Law at Washington University in St. Louis, where he also directs the Cordell Institute. He has published widely across the full range of privacy issues, and he has served as an expert in a number of high-profile privacy cases, most notably for the Irish government in the case of Data Protection Commissioner v. Facebook, more commonly known as “Schrems II.”

Why Privacy Matters is a terrific book. It is clear and compelling. Here’s my blurb of the book: “Neil Richards argues powerfully and eloquently about the importance of privacy in our lives and society. Insightful and nuanced, but also very accessible and clear, Why Privacy Matters is essential reading for anyone concerned about individual identity and freedom in a world where digital technologies are spinning out of control.”

I had a chance to ask Neil some questions about his new book and about the future of privacy.

SOLOVE: Let’s start with the obvious question: Why does privacy matter?

RICHARDS: At the most basic level, privacy matters for a very simple reason. Privacy is about power, because information is power, and human information confers power over human beings. Privacy rules are essential because they constrain that power. We need good privacy rules to restrain the power of profiling, sorting, nudging, manipulation, and control over human behavior that human information confers. And we should craft our privacy rules to promote important human values like identity, freedom, and consumer protection. That’s why privacy matters.

SOLOVE: What currently are the biggest threats to privacy? What emerging threats do you see over the next few decades?

RICHARDS: There are a whole host of privacy threats right now, and they operate at a variety of levels, but they all share one very important thing in common – government and corporate entities seek to collect and use human information to shape our behavior in ways that serve their interests. Let me offer four.

First, there’s a huge privacy threat posed by facial recognition technologies like the one Clearview AI is creating by scraping human photos with attached names off websites and social media profiles. Why are they doing this? So that they can sell their facial recognition tools to law enforcement and ICE to identify people on the street, whether they want them for questioning, because they suspect they might be undocumented, or because they want to intimidate protestors.

Second, Facebook never tires of finding ways to collect human information to (as they’d put it) “monetize” it, often by selling advertisements. Their much-hyped “metaverse” represents a potential human information bonanza, because if we’re living our lives in a digital world created and completely mediated by them, all of our interactions, words, glances, and emotions will be right there for them to harvest, process, and use to sell ever more relevant ads.

Third, while we’ve noticed that algorithms and artificial intelligence systems are frequently biased, one proposed solution to this problem is that “we just need better data.” I’m skeptical that this clamor for more human information – something Shoshana Zuboff helpfully calls “the collection imperative” – will ever create unbiased systems, because AI systems are human creations and they reflect the biases of their creators. But that doesn’t stop the constant drumbeat for more and more data collection, even if the payoff of that collection is likely to be dubious at best.

Fourth, and finally, I worry about the integrity of our elections and other democratic processes as we enter the next phase of the information age. As data-hungry AI technologies like deepfakes, targeted political ads, data-driven gerrymandering, and personalized political messages become ever more effective, I worry that we run the risk of our political process becoming nothing more than a data science arms race rather than a debate about policy and ideas. We should treat political gerrymandering and the Cambridge Analytica scandal as nothing less than loud alarm calls for the future of democratic self-government as we have known it. And it’s the deployment of human information in a largely unconstrained way that is enabling these advances in political manipulation, just as ad-tech companies pioneered the strategies of consumer manipulation through surveillance-based ads with which we are all, sadly, all too familiar. It’s this last threat in particular that keeps me up at night.

SOLOVE: In the book, you debunk a few myths about privacy. What are some of the myths?

RICHARDS: There are so many myths about privacy, and I take on a bunch of them in the book! The biggest one is the foolish idea that privacy is dead or dying. This is just nonsense, and it’s actually the widespread belief in this myth that motivated me to write the book. There are certainly a bunch of privacy threats out there, but when we understand why companies and governments are so keen to acquire and use human information, we can see the bigger picture. That bigger picture is that human information confers power. The good news is that the law is actually pretty good at constraining power through rules. We have a choice to put privacy-protective rules in place or not, but either way we’re going to have privacy rules of some sort. From this perspective, privacy rules – rules constraining the extent to which human information gets collected and used – are inevitable. So privacy isn’t dead or dying, but it is up for grabs.

A second myth about privacy is that it is about “creepiness,” the felt violation of social expectations about privacy. Maybe it’s understandable to think about privacy in this way, but creepiness is actually a terrible test for us to use if we want to find out whether problematic uses of human information are going on. Creepiness is over-inclusive, because many information uses that may seem threatening at first (like using your credit card on a website) actually turn out to be pretty great. It’s also under-inclusive, because it’s often the information uses we never know about (like secret credit or employment scoring or the denial of opportunities by racist algorithms) that are the most problematic. Yet because these actions are secret, they never trigger our creepiness reaction. Finally, creepiness is a terrible test for privacy problems because it is malleable. Creepiness rests on the violation of a social norm, but those social norms are not only subject to change; technology companies and governments also work hard to lower privacy expectations all the time (think of social networks or airport security). Creepiness is exciting, but it distracts us from the real reason why privacy matters, which is that information is power. We need to stop talking about information uses as creepy and see them for what they really are.

I’ll offer one more, which may be quite provocative to a privacy professional audience. I think that “privacy as control” is a myth, and there are at least four reasons why this is so.

First, there are just too many services and too many privacy settings for meaningful control to work in practice. As they might express the point in Silicon Valley, control simply doesn’t scale.

Second, control is often an illusion – companies may give us a few choices, but never the ones that matter, like the ability to opt out of surveillance-based ads.

Third, control is a trap. When companies tell us to check our privacy settings and we don’t do it, we feel guilty for failing to do the privacy work, even though that work is both overwhelming and offers only an illusory benefit. Yet when we fail to do the impossible, we still feel like our lack of control over our information is somehow our fault.

Fourth, and finally, control is insufficient. Even if we spend our lives tweaking our limited privacy settings, we can still be manipulated by dark patterns, and choices that other people make often affect our privacy (like if our sibling uploads their genome to a public repository). Just as with public health, vaccines, and masking, we depend on other people’s choices too. And we also depend on the design of the infrastructure within which we make our decisions. So I think we should jettison control as the primary mechanism for privacy protection and replace it with something substantive like a duty of data loyalty.

SOLOVE: We’re seeing a significant regulatory response to protect privacy around the world. Are these new privacy laws, such as the GDPR and CCPA, effective? Is the law heading in the right direction?

RICHARDS: I have deeply mixed feelings about the GDPR and the CCPA. On the one hand, they are really well-meaning pieces of legislation that try to require ethical data processing throughout society, which is a good thing. This is particularly true for the GDPR, which rests on the twin fundamental rights of privacy and data protection secured by the European Union Charter. The CCPA doesn’t rest on those foundations, and it’s a much weaker law as a result. But both the GDPR and the CCPA rest to a meaningful degree on notions of control and informational self-determination, which are problematic for the reasons we’ve already talked about. The GDPR does have the additional requirement of a “lawful basis for processing,” but that can include a pretty broad category of so-called “legitimate interests of the controller,” and until we have more guarantees from the EU courts about how strict that requirement will be in practice, I worry that the GDPR (and particularly the CCPA and any US federal version of the GDPR) might just end up normalizing pervasive surveillance and manipulation.

Procedural approaches to data hygiene on the GDPR model are important, but I would prefer laws with more teeth, such as requiring data to be used for the benefit of “data subjects” in most cases, and preventing some of the self-dealing in valuable human information that seems to be rampant around the world. Again, I’m quite excited about the potential of a duty of loyalty, and about beefing up notions of substantively unfair, unreasonable, or abusive data practices. Ultimately, we need to ask ourselves whether our privacy and data protection rules are making life better for humans rather than for corporations. I think we can do better than we have been doing on that score, and this is particularly true in the United States, where Congress’s failure to pass meaningful baseline privacy regulation for the past fifty years should be seen for what it is—a complete regulatory failure that should be regarded as something of a national shame. Privacy matters a great deal to the kind of society that reasonable humans might want to inhabit, and we have to do much better on privacy, particularly in the United States.

SOLOVE: Thanks, Neil, for your thoughtful answers. His book is Why Privacy Matters (Oxford University Press 2021).

This Friday, December 3, 2021, there will be a symposium on Neil’s book at Washington University School of Law. I’ll be speaking at the event (virtually) along with Danielle Citron, Ryan Calo, Margot Kaminski, Sarah Igo, Julie Cohen, Ari Waldman, Bill McGeveran, Paul Ohm, and more!

* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy and data security training. He also posts on his blog at LinkedIn, which has more than 1 million followers.

Professor Solove is the organizer, along with Paul Schwartz, of the Privacy + Security Forum, an annual event designed for seasoned professionals.

NEWSLETTER: Subscribe to Professor Solove’s free newsletter
TWITTER: Follow Professor Solove on Twitter.
