Here’s another cartoon about AI. Enjoy!
Kafka in the Age of AI and the Futility of Privacy as Control
I’m very pleased to post a draft of my forthcoming essay with Professor Woodrow Hartzog (BU Law), Kafka in the Age of AI and the Futility of Privacy as Control, 104 B.U. L. Rev. (forthcoming 2024). It’s a short, engaging read – just 20 pages! We argue that although Kafka shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence.
You can download the article for free on SSRN. We welcome feedback.
Scroll down for some excerpts from our PowerPoint presentation for this essay – images created by AI!
Cartoon – AI Training
My new cartoon on AI training. Enjoy!
2023 Highlights: Cartoons, Webinars and Blog Posts
Here’s a roundup of my cartoons, webinars and blog posts for 2023.
CARTOONS
Tech Companies, Innovation, and Regulation
Halloween AI Algorithm Training
2023 Highlights: Training and Whiteboards
Here’s a roundup of my privacy training and whiteboards in 2023. I created short courses on AI and on technology and data ethics, both partially created with AI. In the AI course, the narrator and some of the images were generated by AI; in the Technology and Data Ethics course, both the narrator and the images were AI-generated. I used a claymation style, and I think the AI really did a great job (though it took countless prompting attempts to get things right).
I created regional whiteboards and courses (Latin America and Asia). These whiteboards and courses summarize general themes and trends in privacy laws in these world regions.
Plus, I created more courses in a series on privacy and data management topics: Secondary Use and Data Minimization. My goal with this series is to cover the basic concepts and practices in a privacy program. Previous courses include Data Mapping, Vendor Management, Data Protection Impact Assessments, Data Subject Rights, Data Retention, and other topics.
Artificial Intelligence and Data Ethics Training
Artificial Intelligence (AI) Training Course
Technology and Data Ethics Training Course
2023 Highlights: Scholarship
Here’s a roundup of my scholarship for 2023. With Professor Paul Schwartz, I published a new edition of my casebook, Information Privacy Law, as well as new editions of the topical paperbacks (they will be in print by the end of December). One article came out in print, and I have several paper drafts in various stages of the publication process. See below for details.
New Edition of Information Privacy Law Casebook
(Aspen 2024) (with Professor Paul Schwartz)
New Editions of Information Privacy Law
Topical Paperback Casebooks
(Aspen 2024) (with Professor Paul Schwartz)
Webinar – Privacy Law in the 21st Century: Past, Present, Future
In case you missed my webinar on Privacy Law in the 21st Century, you can watch the replay here. I had a great discussion with Salomé Viljoen (Michigan Law), Ari Waldman (U.C. Irvine Law), and Margot Kaminski (Colorado Law) about how privacy law has been evolving.
GW Law School Launches the GW Center for Law and Technology
I’m excited to share a press release from GW Law School announcing our new GW Center for Law and Technology. Through the Center, we’re building out our privacy and tech curriculum and activities.
Notable Privacy and Security Books 2023
Here are some notable books on privacy and security from 2023. To see a more comprehensive list of nonfiction works about privacy and security for all years, Professor Paul Schwartz and I maintain a resource page on Nonfiction Privacy + Security Books.
AI, Algorithms, and Awful Humans – Revised Version
Hideyuki (“Yuki”) Matsumi (Vrije Universiteit Brussel) and I have significantly revised our essay, AI, Algorithms, and Awful Humans, 92 Fordham L. Rev. (forthcoming 2024). It will be part of a Fordham Law Review symposium, The New AI: The Legal and Ethical Implications of ChatGPT and Other Emerging Technologies. In response to great feedback, we have made many refinements and changes to our arguments. The essay is short (just 18 pages), and it’s a quick, fun read.
The essay argues that various arguments about human versus machine decision-making fail to account for several important considerations regarding how humans and machines decide. You can download the article for free on SSRN. We welcome feedback.
Here’s the abstract:
A profound shift is occurring in the way many decisions are made, with machines taking greater roles in the decision-making process. Two arguments are often advanced to justify the increasing use of automation and algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. These arguments exert a powerful influence on law and policy.
In this Essay, we contend that in the context of making decisions about humans, these arguments are far too optimistic. We argue that machine and human decision-making are not readily compatible, making the integration of human and machine decision-making extremely complicated.
It is wrong to view machines as deciding like humans do, but better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions have a moral or value judgment or involve human lives and behavior. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make.
Automated decisions often rely too much on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Human and machine decision-making often don’t mix well. Humans often perform badly when reviewing algorithmic output.
We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.
* * * *