Heightened protection for sensitive data is becoming quite trendy in privacy laws around the world. Originating in European Union (EU) data protection law and included in the EU's General Data Protection Regulation (GDPR), the concept of sensitive data singles out certain categories of personal data for extra protection. Commonly recognized special categories of sensitive data include racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, sexual orientation and sex life, biometric data, and genetic data.
Although heightened protection for sensitive data appropriately recognizes that not all situations involving personal data should be protected uniformly, the sensitive data approach is a dead end. The sensitive data categories are arbitrary and lack any coherent theory for identifying them. The borderlines of many categories are so blurry that they are useless. Moreover, it is easy to use non-sensitive data as a proxy for certain types of sensitive data.
Personal data is akin to a grand tapestry, with different types of data interwoven to a degree that makes it impossible to separate out the strands. With Big Data and powerful machine learning algorithms, most non-sensitive data can give rise to inferences about sensitive data. In many privacy laws, data that can give rise to inferences about sensitive data is also protected as sensitive data. Arguably, then, nearly all personal data can be sensitive, and the sensitive data categories can swallow up everything. As a result, most organizations are currently processing a vast amount of data in violation of the laws.
This Article argues that the problems with the sensitive data approach make it unworkable and counterproductive, and that they expose a deeper flaw at the root of many privacy laws. These laws make a fundamental conceptual mistake: they embrace the idea that the nature of personal data is a sufficiently useful focal point for the law. But nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way that causes harm or a risk of harm.
To be effective, privacy law must focus on use, harm, and risk rather than on the nature of personal data. The implications of this point extend far beyond sensitive data provisions. In many elements of privacy laws, protections should be based on the use of personal data and proportionate to the harm and risk involved with those uses.
This webinar focused on themes from Danielle Citron's new book, The Fight for Privacy: online harassment and hate, Section 230, and how privacy invasions disproportionately target women. We discussed the implications of Dobbs, in which the U.S. Supreme Court eliminated the constitutional right to abortion. As Elizabeth Joh points out in a recent article, the world post-Dobbs is very different from the world pre-Roe. We are living in a surveillance society, and the government has unprecedented powers to monitor people's intimate lives. You can watch it here.
The battle against the US News Law School Rankings has finally begun. After decades of groaning and grumbling about how bad the rankings are, many top law schools have said they are withdrawing from the rankings, including many in the top 10. I applaud this move, but I fear that law schools might break out the champagne too early. The battle might be won, but the war might ultimately be lost unless law schools do more than just withdraw.
Law schools aren't really dropping out of the rankings; they are just pledging to refuse to submit certain data that US News wants. US News has issued a statement declaring that it will continue ranking law schools whether they cooperate or not. The dragon hasn't been slain; it just isn't going to get some of the food it wants.
I was recently on a terrific panel, Cybersecurity and Data Security: What Every Lawyer Should Know, held by Penn State Dickinson Law. The program focused on the latest developments in cybersecurity and data privacy. The panel was moderated by Professor Daryl Lim, H. Laddie Montague Jr. Chair in Law at Penn State Dickinson Law. Speakers included:
I recently gave a talk on Faculti about ideas in my recent book, BREACHED! WHY DATA SECURITY LAW FAILS AND HOW TO IMPROVE IT (Oxford University Press 2022), about how major security breaches could be prevented through new approaches to data security law. The Faculti platform provides a library of 8,000 video and audio insights from leaders in the academic and research community. You can check it out here or below: