Halloween is this week, so I thought I’d bring this older cartoon out of the archive. I updated it a bit. Enjoy!
How does China’s new Personal Information Protection Law (PIPL) compare to the European Union’s GDPR? In this post, I provide a quick PIPL vs. GDPR comparison, noting a few key similarities and differences. My comparison is not comprehensive.
Comparing PIPL and GDPR: Similarities
A few notable similarities between the PIPL and GDPR include:
- Both the PIPL and GDPR are extraterritorial.
- The PIPL and GDPR define personal data as information relating to identified or identifiable natural persons.
- The PIPL uses the GDPR’s lawful basis approach to data processing. Many other Asian privacy laws use a consent-based approach or one akin to the U.S. approach of notice-and-choice.
- Both the PIPL and GDPR have special protections for sensitive data, but they differ on the types of data they recognize as sensitive.
- Both the PIPL and GDPR have a data breach notification requirement.
- The PIPL and GDPR recognize many of the same rights.
- Both the PIPL and GDPR require workforce training.
- Under certain circumstances, both the PIPL and GDPR require data protection officers (DPOs).
- Both the PIPL and GDPR require data protection impact assessments (DPIAs) in certain situations.
Comparing PIPL and GDPR: Differences
A few notable differences between the PIPL and GDPR include:
I am pleased to announce that I created a new whiteboard and training course for China’s Personal Information Protection Law (PIPL).
The PIPL is China’s first comprehensive privacy law, and it has several notable similarities to the GDPR as well as some key differences. In an earlier post, I provided a comparison of the PIPL and the GDPR.
Some of the things we updated:
- We reordered the piece to discuss earlier on our theory of when harm should be required.
- We added a discussion of why recognizing privacy harm is important.
- We rethought the typology to add top-level categories and subcategories. We had received feedback from a number of people that the typology was unwieldy because we had too many categories and many seemed to overlap. Our new structure now has 7 top-level categories.
- We added short descriptions of each type of harm at the beginning of each section.
- We added commentary about the recent Supreme Court case on standing, TransUnion v. Ramirez.
- We added a diagram of the harms, which is above.
There are other changes, too, but the ones above are the most relevant ones. We’re still editing the piece, so we welcome additional feedback. The piece will be published in 2022.
You can read the latest draft here.
This cartoon is about profiling. A profile consists of a particular set of characteristics and behaviors that law enforcement deems suspicious. Profiles can be created by people or generated by algorithms that identify suspicious patterns in data about known criminals or terrorists.
I recently created a privacy law whiteboard library page where I’ve gathered all the whiteboards I’ve been creating. Thus far, I have created more than 40 privacy law whiteboards.
Each whiteboard is a 1-page visual summary of a privacy law. A few from the page are below. I’ve made a few available for free, but most are only available on this page.
Whiteboards can be licensed for use in conference presentations or other individual uses. There is also a way to license all the whiteboards as a package. For organizational uses or other uses, please reach out to us.
Back in 1993, Professor Oscar Gandy, Jr. wrote one of the most insightful and prescient books about privacy: The Panoptic Sort: A Political Economy of Personal Information.
Oscar Gandy is an emeritus professor with the Annenberg School for Communication at the University of Pennsylvania, having retired from active teaching in 2006. He has continued to publish in the areas of the political economy of communication and information, focusing most recently on the development and use of algorithmic technology.
I recently published a short essay with Professor Danielle Citron critiquing the recent Supreme Court decision, TransUnion v. Ramirez (U.S. June 25, 2021), in which the Court held that plaintiffs lacked standing to use FCRA’s private right of action to sue for being falsely labeled as potential terrorists in their credit reports.
The essay is here:
Daniel J. Solove & Danielle Keats Citron, Standing and Privacy Harms: A Critique of TransUnion v. Ramirez, 101 B.U. L. Rev. Online 62 (2021)
Here’s a short abstract:
Through the standing doctrine, the U.S. Supreme Court has taken a new step toward severely limiting the effective enforcement of privacy laws. The recent Supreme Court decision, TransUnion v. Ramirez (U.S. June 25, 2021), revisits the issue of standing and privacy harms under the Fair Credit Reporting Act (FCRA) that began with Spokeo v. Robins, 136 S. Ct. 1540 (2016). In TransUnion, a group of plaintiffs sued TransUnion under FCRA for falsely labeling them as potential terrorists in their credit reports. The Court concluded that only some plaintiffs had standing – those whose credit reports were disseminated. Plaintiffs whose credit reports weren’t disseminated lacked a “concrete” injury and accordingly lacked standing – even though Congress explicitly granted them a private right of action to sue for violations like this and even though a jury had found that TransUnion was at fault.
In this essay, Professors Daniel J. Solove and Danielle Keats Citron engage in an extensive critique of the TransUnion case. They contend that existing standing doctrine incorrectly requires concrete harm. For most of U.S. history, standing required only an infringement on rights. Moreover, when assessing harm, the Court has a crabbed and inadequate understanding of privacy harms. Additionally, allowing courts to nullify private rights of action in federal privacy laws is a usurpation of legislative power that upends the compromises and balances that Congress establishes in laws. Private rights of action are essential enforcement mechanisms.
Friday’s U.S. Supreme Court decision, TransUnion v. Ramirez (U.S. June 25, 2021), prompted me to release this cartoon about privacy harms that I created a while ago. In TransUnion, a group of plaintiffs sued TransUnion for falsely labeling them as potential terrorists in their credit reports. The Supreme Court held that only some plaintiffs had standing – those whose credit reports were disseminated. Plaintiffs whose credit reports weren’t disseminated lacked a “concrete” injury and accordingly lacked standing – even though Congress explicitly granted them a private right of action to sue for violations like this and even though a jury had found that TransUnion was at fault.
The TransUnion decision, authored by Justice Kavanaugh for a 5-4 majority, is wrong on so many levels. I wish the Supreme Court had read my recent article draft:
Danielle Keats Citron & Daniel J. Solove
forthcoming in B.U. L. Rev.
More background about the article is at my post here. I will write soon about the case.
For decades, I’ve been arguing that law schools must improve their programs for privacy law. A few years ago, I led a group of academics and practitioners in crafting a letter to law school deans about why law schools must offer more in privacy law: An Open Letter to Law School Deans about Privacy Law Education in Law Schools. Recently, the International Association of Privacy Professionals (IAPP) came out with its guide, Privacy and Data Protection in Academia, A Global Guide to Curricula.
The guide wisely avoids trying to rank programs, and it contains a great deal of useful information. But I think that law schools need criteria to evaluate the strength of their programs, so I developed the list below of the key components of what I would consider a strong program. I’ve written about this before, but I continue to hone my thinking. Here are my latest thoughts: