Hideyuki (“Yuki”) Matsumi (Vrije Universiteit Brussel) and I have significantly revised our essay, AI, Algorithms, and Awful Humans, forthcoming in 92 Fordham Law Review (2024). It will be part of a Fordham Law Review symposium, The New AI: The Legal and Ethical Implications of ChatGPT and Other Emerging Technologies. In response to great feedback, we have made many refinements and changes to our arguments. The essay is short (just 18 pages), and it’s a quick, fun read.
The essay contends that common arguments about human versus machine decision-making fail to account for several important considerations regarding how humans and machines actually decide. You can download the article for free on SSRN. We welcome feedback.
Here’s the abstract:
A profound shift is occurring in the way many decisions are made, with machines taking greater roles in the decision-making process. Two arguments are often advanced to justify the increasing use of automation and algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. These arguments exert a powerful influence on law and policy.
In this Essay, we contend that in the context of making decisions about humans, these arguments are far too optimistic. We argue that machine and human decision-making are not readily compatible, making the integration of human and machine decision-making extremely complicated.
It is wrong to view machines as deciding like humans do, but better because they are supposedly cleansed of bias. Machines decide fundamentally differently, and bias often persists. These differences are especially pronounced when decisions have a moral or value judgment or involve human lives and behavior. Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make – and might never be able to make.
Automated decisions often rely too much on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Human and machine decision-making often don’t mix well. Humans often perform badly when reviewing algorithmic output.
We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism. For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.
* * * *
Yuki and I also authored another article, still in draft form:
The Prediction Society: Algorithms and the Problems of Forecasting the Future
The paper is available for free on SSRN.
* * * *
Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.