
Ari Waldman Interview

I’m delighted to be interviewing Professor Ari Waldman (Northeastern Law), who has published Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power (Cambridge University Press 2021), a provocative new book about privacy law and privacy programs at corporations.

In his book, Ari delivers an eviscerating critique of privacy law and of the approach of protecting privacy through internal privacy programs at organizations. Although I diverge from Ari in that I believe that many privacy law provisions and privacy programs are generally a good thing, his critique makes many salient points that must be reckoned with. Privacy law and compliance have significant shortcomings that should be addressed.

Here’s my blurb on his book: “Ari Waldman peels back the curtain on internal privacy practices at the most powerful tech companies to reveal an alarming trend: Despite robust privacy programs, teams of employees devoted to protecting privacy, and significant laws and regulations requiring many internal measures to safeguard privacy, the reality on the ground is that these things are often failing. Waldman provocatively contends that corporate power turns compliance with even robust privacy laws into an often hollow exercise, even though many employees within companies really care about privacy. As legislatures rush to pass privacy laws, Industry Unbound is a wake-up call that these efforts will not end the nightmare. This eye-opening and unsettling book is also constructive, as it offers productive recommendations for a new direction in privacy law. Lively, alarming, and insightful, Industry Unbound deftly unites theory, practice, and law. It is essential reading for anyone who cares about the future of privacy.”

Below, I chat with Ari about some of the key points in his book.

SOLOVE: We’re living in a golden era of privacy law. In the past 5 years, we’ve seen the GDPR and countless laws around the world and in the states. Yet, you point out that this law is failing. Broadly, in what ways is the law coming up short?

WALDMAN: The GDPR is a process-oriented law, and tech companies are really good at neutering process into check-box compliance. In addition to guaranteeing certain individual rights, like rights to access our data, correct it, delete it, and so on, the GDPR enlists data collectors and processors in their own governance through a series of compliance obligations, like privacy impact assessments, privacy offices, and record keeping. But this kind of compliance happens within organizational structures that are designed to make profit in informational capitalism; that is, they are designed to make money while shunting privacy to the side. Inside tech companies, privacy impact assessments become little more than check-box forms.

I saw this firsthand. At two different companies I observed, PIAs were reduced to a small chart that engineers had on their desks. At one company, they were told to “just check ‘no’” in all the boxes to indicate that nothing they were doing posed any privacy risks. When I spoke to the privacy professionals and lawyers who wrote the document, they said they “had to make things easy for engineers who don’t understand privacy,” as if it’s the engineers’ fault. But even if they hadn’t turned PIAs into a meaningless paper-pushing exercise, every privacy professional talked about PIAs in terms of highlighting risks to the company (of investigation or litigation) rather than what PIAs were meant to do: surface privacy risks to consumers. A subtle shift, but one that made it a lot harder for any privacy concern to bubble up to the top.

At big companies, much of the GDPR compliance work is outsourced to vendors, who have created a kind of Mad Libs for privacy. They design boilerplate forms that are already more than half complete, and company compliance officers just fill in a few details. They automate record keeping while the company charges headfirst into designing ever more data-extractive products.

SOLOVE: You are very skeptical of governance mechanisms for privacy – privacy officers, audits, documentation, PIAs, data maps, etc. At some points in the book, you suggest that these measures collectively end up being a facade that legitimizes practices that are not protective of privacy. Yet many privacy professionals and people carrying out these measures really care about privacy and want to do something meaningful. Are privacy programs doomed to fail? Is the problem that these measures are inherently hollow, or is the problem that the measures aren’t being done in the right way or under the right set of conditions? Is there a way to strengthen privacy programs so that they don’t have the pitfalls you identify? Or is the very approach of using internal governance flawed?

WALDMAN: The very approach is flawed not because privacy professionals aren’t good people or because they don’t care about privacy. In real life, there aren’t many C. Montgomery Burns-type villains plotting how to undermine our privacy (although, those guys do exist). The problem is that all these programs and all these privacy professionals exist within corporate structures that co-opt them. But it gets worse. Tech companies don’t just co-opt their workforce; they also habituate them into performing privacy in ways that seem privacy protective but, in fact, are not, creating a sense of false consciousness.

Consider the example of one privacy team at a medium-sized tech company. I spent more than a week with them. When we sat down to talk about their work, they noted how busy they were. Their work product was staggering: lots of reports, policies, reviews, and impact assessments. When I read all of them, they all sounded pretty much the same. Everything this team did was about giving consumers more choices and more “control” over their privacy. They published a report on transparency and making privacy policies more readable. They wrote internal policies about giving users the opportunity to click “agree” to data collection. They completed what they called a “comprehensive” privacy review of the company’s products, but it focused only on the products’ privacy policies, consent buttons, and cookie consents.

What was really remarkable was that as they spoke to me about their work and their motivations for taking jobs in privacy, their answers presupposed definitions of privacy far broader and more robust than just choice. And yet, their work was almost exclusively about choice. When I asked them about that disconnect, I got a lot of incredulous looks and ex post justifications. “No, we’re enhancing trust by making privacy policies more readable” or “Transparency can’t be bad” or “We meet our customers’ expectations by giving them the tools to decide how and when their data will be used.” That’s either cognitive dissonance or willful blindness. Or maybe people just don’t like to be called out on their inconsistencies.


SOLOVE: How would you evaluate whether a privacy program is effective?  If you were a regulator and could determine the standards to evaluate the quality of a privacy program, what would you look for?  What sort of standards should the law impose?

WALDMAN: I honestly do not see any possible way an internal privacy program, run by corporate employees and leveraged as part of a public-private partnership in interpreting and implementing the law, could ever function capably as a form of real governance and accountability. If I were a regulator, I wouldn’t want to rely on any internal program. I would want sufficient powers and funds to monitor the company myself. True democratic governance in which tech companies serve the people requires regulatory access to internal documents, ongoing monitoring, and really tough penalties. Even that won’t be enough. The only way to know if a privacy program is effective is to look at the results, not at the processes. I don’t care how many pages of work product you spit out of your printer. Nor do I care if an engineer checked “no” for privacy impacts. I care whether the products your company is creating materially protect privacy.

SOLOVE: What directions should privacy law take to be more effective? 

WALDMAN: We’re on a dangerous path in privacy law, making the same mistakes over and over again. Almost all proposals for comprehensive privacy law in the US look the same: individual rights and procedural compliance. None of that is going to work, for reasons I discuss at length in the book (building on the work of many other scholars, including you, Julie Cohen, and Salome Viljoen).

Until we can achieve the kind of real structural change that ensures that technology serves the people rather than the people serving technology companies, we should consider several non-reformist reforms. Privacy law is not just about data flows. Privacy law is labor law, with strong protections for privacy workers so their jobs are not so precarious, as I discuss in the book. Privacy law is environmental law, so companies cannot destroy delicate ecosystems when they build infrastructure to transport data across continents. Privacy law is criminal law, with stiff criminal penalties for privacy invasions and misrepresentations by corporate management. Privacy law is antitrust law, with Big Tech broken into competitive businesses. Privacy law must also be committed to anti-subordination, protecting the rights of the marginalized, whose privacy is the most at risk. In the end, maybe it’s just about perspective. We need to stop thinking that part of privacy law’s job is to foster innovation. The job of privacy law is to counteract corporate power to commodify, manipulate, and subordinate the people. I don’t see us getting there any time soon, but hopefully this book gives policymakers a reason to stop giving tech companies the power to co-opt the law in their endless crusade for profit over people.

SOLOVE: Thanks, Ari, for your thoughts. Ari’s book is Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power (Cambridge University Press 2021). 

* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy and data security training. He also posts at his blog on LinkedIn, which has more than 1 million followers.

Professor Solove is the organizer, along with Paul Schwartz, of the Privacy + Security Forum, an annual event designed for seasoned professionals.

NEWSLETTER: Subscribe to Professor Solove’s free newsletter
TWITTER: Follow Professor Solove on Twitter.

Prof. Solove’s Privacy Training: 150+ Courses


Prof. Solove’s Privacy Law Whiteboard Library
