News, Developments, and Insights


New Edition of Information Privacy Law Casebook – 20th Anniversary!


I am delighted to announce that the new 8th edition of my casebook, INFORMATION PRIVACY LAW, co-authored with Professor Paul Schwartz, is out in print! This is a very special edition, as this year marks the 20th anniversary of the casebook.

More information about the book is at the Information Privacy Law casebook website. You can order a review copy of the book on Aspen’s site. Here’s the new table of contents. The ISBN is 9798886143355.

It has been a long journey since the first edition in 2003. I was fortunate to have Richard Mixter as my editor at Aspen. He believed in this project from the beginning. But it took quite a lot of work to make the case for Aspen to publish the casebook, as there were only a handful of privacy courses at the time. Aspen ultimately decided to print the book as a special softcover.

1st Edition of the Information Privacy Law Casebook

Fortunately, the book sold well. Privacy courses started to take off. And in its second edition, the book was finally deemed worthy of having a hardcover edition like other subjects.

The book has grown quite a lot. In the first edition, many issues lacked cases and laws, so I had to use law review articles or hypotheticals. But soon, there were too many cases and laws. The casebook grew and grew. Originally, the book was 795 pages. The 7th edition clocked in at 1312 pages. For the 8th edition, we edited more tightly to trim the book to 1147 pages. It was a hard task because we had so much material to add.

For those interested in history or nostalgia, here is the table of contents and preface from the 1st edition.

New material in the 8th edition includes more FTC and CJEU cases, reproductive freedom post-Dobbs, and a lot of material on AI and algorithmic decision-making in the chapters on law enforcement (Chapter 4), consumer data (Chapter 9), and employment (Chapter 12). And we updated for new developments in EU law, cross-border data transfers, standing, dark patterns, platform governance, scraping, state privacy laws, biometric privacy, and much more.

* * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. 

Sign up for Professor Solove’s Newsletter about his writings, whiteboards, cartoons, trainings, events, and more.

Video of My Talk for the Privacy Commissioner of Bermuda Event

Watch the video of my short lightning talk for a segment in a privacy conference put on by the Privacy Commissioner of Bermuda. The topic I was asked to speak on was “what data protection is not.”

Webinar – Breaking Into Privacy Law: Strategies for Entry-Level Lawyers

In case you weren’t able to make it to my recent webinar with Jared Coseglia (TRU Staffing Partners), you can watch the replay here.  We had a great discussion about strategies for entering the privacy field.



Cartoon – Halloween AI Algorithm Training


Here’s my latest cartoon – for Halloween. This cartoon is inspired by the many companies now starting to use their users’ data to train their AI algorithms. Recently, Elon Musk’s X (formerly Twitter) changed its privacy notice to indicate that it would start using user data for AI training. As the famous saying goes, “If you’re not paying for the product, you are the product.”

Some other Halloween cartoons:

Privacy Law Frankenstein


Big Data Halloween 



BU Law Review Symposium on Privacy Law

I will be speaking on November 3, 2023 at a Boston University Law Review symposium: Information Privacy Law at the Crossroads. From the symposium description:

This symposium aims to gather leading privacy scholars to examine the current state of privacy law and theory and explore its direction. With the introduction of the first bipartisan omnibus bill in Congress in a decade, President Biden calling for better privacy legislation, and states enacting a flurry of new privacy laws, it is an excellent time to revisit privacy law’s commitments and map its future in a world where people are exposed and exploited like never before.

The symposium contributions will ultimately appear in a published volume of the Boston University Law Review. I’m writing the introduction with Professor Woodrow Hartzog (BU Law School), and our essay is titled Kafka in the Age of AI and the Futility of Privacy as Control. Stay tuned, as we’ll be posting our draft on SSRN soon.

There is an amazing lineup of speakers at the symposium. They include:

  • Alexis Shore (Boston University)
  • Daniel Solove (George Washington University)
  • Neil Richards (Washington University in St. Louis)
  • Maria Angel (University of Washington, School of Law)
  • Salome Viljoen (University of Michigan Law School)
  • Christopher Robertson (Boston University School of Law)
  • Meg Jones (Georgetown University)
  • Ngozi Okidegbe (Boston University School of Law)
  • Paul Schwartz (University of California Berkeley)
  • Helen Nissenbaum (Cornell Tech)
  • Julie Dahlstrom (Boston University School of Law)
  • Anita Allen (University of Pennsylvania, Carey Law School)
  • Khiara M. Bridges (University of California Berkeley, School of Law)
  • Scott Skinner-Thompson (University of Colorado)
  • Ari Waldman (University of California Irvine, Law)
  • Claudia Haupt (Northeastern University, School of Law)
  • Danielle Citron (University of Virginia, School of Law)
  • Margot Kaminski (University of Colorado, School of Law)
  • Jasmine McNealy (University of Florida)
  • Zahra Takhshid (University of Denver, Sturm College of Law)
  • Rory Van Loo (Boston University School of Law)
  • Pauline Kim (Washington University in St. Louis, School of Law)
  • Paul Ohm (Georgetown University Law Center)
  • William McGeveran (University of Minnesota, Law School)
  • Chris Gilliard (Just Tech Fellow at the Social Science Research Council)

More information about the event is here.


Personal and Sensitive Data


NOTE: This post was originally part of my special newsletter on LinkedIn – Privacy+Tech Insights. This is a different newsletter from my weekly newsletter. My LinkedIn newsletters are less frequent and typically involve a more focused analysis of a particular issue.

A quiet revolution has been going on with personal and sensitive data. There have been many notable developments. In the past few years, we’ve witnessed the triumph of the EU approach to defining personal data and to designating special protections for sensitive data.

We’ve seen a growing recognition in the law that:

  • the overwhelming modern consensus in privacy law is to define personal data as identified or identifiable data
  • new laws (post-GDPR) are now overwhelmingly recognizing sensitive data, even in the U.S.
  • various pieces of non-personal data can, in combination, be identifiable
  • the ability to make inferences about data can’t be ignored
  • non-sensitive data that gives rise to inferences about sensitive data counts as sensitive data

These are significant developments, yet oddly, they haven’t made headline news.


AI, Algorithms, and Awful Humans


I’m very excited to post my new short draft essay with Hideyuki (“Yuki”) Matsumi (Vrije Universiteit Brussel). The essay, which is a quick read (just 19 pages), is entitled AI, Algorithms, and Awful Humans, forthcoming in 92 Fordham Law Review (2024). It will be part of a Fordham Law Review symposium, The New AI: The Legal and Ethical Implications of ChatGPT and Other Emerging Technologies (Nov. 3, 2023).

The essay argues that various arguments about human versus machine decision-making fail to account for several important considerations regarding how humans and machines decide. You can download the article for free on SSRN. We welcome feedback.


Here’s the abstract:

This Essay critiques a set of arguments often made to justify the use of AI and algorithmic decision-making technologies. These arguments all share a common premise – that human decision-making is so deeply flawed that augmenting it or replacing it with machines will be an improvement.

In this Essay, we argue that these arguments fail to account for the full complexity of human and machine decision-making when it comes to deciding about humans. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make.

It is wrong to view machines as deciding like humans do, but better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions involve a moral or value judgment or concern human lives and behavior. Some of the human dimensions to decision-making that cause great problems also have great virtues. Additionally, algorithms often rely too much on quantifiable data to the exclusion of qualitative data. Whereas certain matters, such as the weather, might be readily reducible to quantifiable data, human lives are far more complex. Having humans oversee machines is not a cure; humans often perform badly when reviewing algorithmic output.

We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.


* * *


Cartoon: Tech Companies, Innovation, and Regulation


Here’s my new cartoon about how many tech companies extol innovation yet seem to lose that innovative spirit when it comes to regulation. With the right incentives, it’s amazing how tech companies can rise to the challenge. They can certainly innovate to address regulatory demands; instead, they often send in lobbyists to pout and complain or to block laws. It would be better for companies to innovate for regulation rather than fight it. Policymakers might look to use some carrots rather than just sticks. Positive incentives can help steer tech companies to address regulatory concerns.
