PRIVACY + SECURITY BLOG

News, Developments, and Insights


The Prediction Society: Algorithms and the Problems of Forecasting the Future

I am excited to share my new paper draft with Hideyuki (“Yuki”) Matsumi, The Prediction Society: Algorithms and the Problems of Forecasting the Future. The paper is available for free on SSRN.

Download Article

Yuki is currently pursuing a PhD at Vrije Universiteit Brussel. He began his career as a technologist, then turned to law, where he has been exploring predictive technologies for more than a decade. The origins of this article trace back to 2011, when Yuki was my student and I supervised his thesis on predictive technologies. His work was far ahead of its time. I am beyond thrilled to join him now in exploring these issues. Writing this paper with Yuki has been a terrific experience, and I have learned a tremendous amount working with him.

We aim to make a unique and important contribution to the discourse about AI, algorithms, and inferences by focusing specifically on predictions about the future. We argue that the law should recognize algorithmic predictions about the future as distinct from inferences about the past and present. Predictions present a special set of problems that the law does not address, and the law’s existing tools and rights are ill-suited to them. We examine in depth the issues the law must consider when addressing these problems.

I’m really happy with how the paper turned out, and I want to note that I played but a supporting role. Yuki has been the driving force behind this paper. I joined because I find the issues fascinating and of the utmost importance, and I believe we have something important to add to the discussion. We welcome feedback.

Here’s the abstract:

Predictions about the future have been made since the earliest days of humankind, but we are now living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data. Increasingly, important decisions about people are being made based on these predictions.

Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences. Predictions raise several unique problems that current law is ill-suited to address. First, algorithmic predictions create a fossilization problem because they reinforce patterns in past data and can further solidify bias and inequality from the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events. Until these events happen, predictions remain unverifiable, leaving individuals unable to challenge them as false. Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem, where they actively shape the future they aim to forecast.

More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society where individuals’ ability to author their own future is diminished while the organizations developing and using predictive systems are gaining greater power to shape the future.

Privacy law fails to address algorithmic predictions. It lacks a temporal dimension and does not distinguish between predictions about the future and inferences about the past or present. Predictions about the future involve considerations that are not implicated by other types of inferences. Privacy law is also entrenched in dichotomies that do not work for predictions. For example, privacy law is framed around a truth-falsity dichotomy: the law provides correction rights and duties of accuracy that are insufficient to address problems arising from predictions, which exist in the twilight between truth and falsehood. Individual rights and anti-discrimination law are also unable to address the unique problems of algorithmic predictions.

We argue that the law must recognize the use of algorithmic predictions as a distinct issue warranting different treatment from other privacy issues and other types of inference. We then examine the issues the law must consider when addressing the problems of algorithmic predictions.

Download Article

* * * *

Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.

Subscribe to Solove’s Free Newsletter

Prof. Solove’s Privacy Training: 150+ Courses
