PRIVACY + SECURITY BLOG

News, Developments, and Insights


The Prediction Society: AI and the Problems of Forecasting the Future – FINAL VERSION

I’m thrilled to announce the release of the final version of my article.

The Prediction Society: AI and the Problems of Forecasting the Future

2025 Illinois Law Review 1 (2025)


Abstract:

Predictions about the future have been made since the earliest days of humankind, but today, we are living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data. This type of algorithm is commonly referred to as artificial intelligence (AI). Increasingly, important decisions about people are being made based on these algorithmic predictions.

Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences. Predictions raise several unique problems that current law is ill-suited to address. First, algorithmic predictions create a fossilization problem because they reinforce patterns in past data and can further solidify bias and inequality from the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events. Until these events happen, predictions remain unverifiable, resulting in an inability for individuals to challenge them as false. Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem where they actively shape the future they aim to forecast.

More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society where individuals’ ability to author their own future is diminished while the organizations developing and using predictive systems are gaining greater power to shape the future.

Privacy and data protection law do not adequately address algorithmic predictions. Many laws lack a temporal dimension and do not distinguish between predictions about the future and inferences about the past or present. Predictions about the future involve considerations that are not implicated by other types of inferences. Many laws provide correction rights and duties of accuracy that are insufficient to address problems arising from predictions, which exist in the twilight between truth and falsehood. Individual rights and anti-discrimination law also are unable to address the unique problems with algorithmic predictions.

We argue that the use of algorithmic predictions is a distinct issue warranting different treatment from other types of inference. We examine the issues laws must consider when addressing the problems of algorithmic predictions.


Continue Reading

Privacy Scholarship News


I have a few items of scholarship news to share.


SSRN Downloads: A Personal Milestone


I’m excited and grateful for this article discussing a milestone I recently reached: surpassing 500K SSRN downloads. The only other law professor with more than 500K downloads is Cass Sunstein.

Check out the article for more details.


Continue Reading

Webinar – AI and Privacy Collide: Why 2025 Will Rewrite the Rules


AI presents significant challenges to privacy. How should the clash be resolved? This webinar covers the key events of 2024 and why 2025 will be a pivotal year for rewriting the rules for AI and privacy. We discuss conflicting regulatory trends, such as the dramatic rise of AI bills in the U.S. states, the increasing regulation of AI in the EU and around the world, and the new administration’s deregulatory agenda.

Speakers include:

 


Continue Reading

Webinar – Privacy Litigation


Privacy litigation is on the rise. In this webinar, we discuss cases arising out of the use of tracking technologies and other technologies such as biometric identification. We cover the Illinois Biometric Information Privacy Act (BIPA), the Video Privacy Protection Act (VPPA), the California Invasion of Privacy Act (CIPA), the Telephone Consumer Protection Act (TCPA), and other laws.

Speakers include:

 


Continue Reading

Cartoon: AI Predictions


Here’s a cartoon on AI predictions.

With Hideyuki Matsumi, I have written quite critically about the use of algorithmic predictions of human behavior:

The Prediction Society: AI and the Problems of Forecasting the Future
2025 University of Illinois Law Review 1 (2025) (with Hideyuki Matsumi)

Continue Reading

New Proposed HIPAA Security Rule Changes


The Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS) has a HIPAA holiday present – new proposed HIPAA Security Rule changes. These are not minor changes but a major revision. The new proposed rule is due in part to the fact that the healthcare industry has been brutally attacked by ransomware hackers and others for years.

The proposed rulemaking is here.

Here are a few key changes, quoted from the HHS press release. Note that this is not the complete list from the press release, just some things I found notable:

Continue Reading

Cartoon: AI Regulation


Remember back in 2023, when AI company CEOs called for AI regulation? This cartoon is based on that call and what happened thereafter.

In 2023, OpenAI CEO Sam Altman testified before Congress about the need for AI regulation. A group of AI company leaders encouraged AI regulation in a closed-door Senate meeting. Elon Musk said: “The consequences of AI going wrong are severe, so we have to be proactive rather than reactive.”

In 2024, regulators sprang into action. Congress, of course, didn’t do anything, but AI regulation got off to a quick start in 2024, and it looks as though it will develop even faster in 2025. There were 700 AI and AI-related bills in the U.S. in 2024, and considerable regulatory activity for AI is occurring around the world.

I wonder how serious the AI company CEOs were when they called for regulation. I am skeptical any time a corporate CEO says “regulate us” because corporate CEOs like regulation about as much as a cat likes a bath.

For more on AI and privacy – including my roadmap for how these issues should be regulated, see:

Continue Reading

Privacy in Authoritarian Times


I just published an op-ed in the Boston Globe entitled “States can fight authoritarianism by shoring up privacy laws.” Boston Globe (Dec. 23, 2024). It’s paywalled, but I’m allowed to repost it, so here it is below. I’m working on a law review article on this topic, and I hope to have a draft in the next month or so. Please stay tuned.

Crackdowns on immigrants. Surveillance of abortion providers and abortion seekers. Harassment of critics. These maneuvers in the demagogue’s playbook would be harder to pull off with better digital privacy.

As the United States and much of the world turn back to the darkness of authoritarianism that blighted the previous century, we must remember that privacy is one of the bulwarks against the power of authoritarian governments. Unfortunately, we’re living in a more intensive surveillance society than ever before, and our privacy laws are ill-equipped to protect us against government access to our personal data.


In the years to come, the federal government and many state governments might engage in surveillance and data gathering as they round up immigrants, punish people for seeking, providing, or assisting abortions, and attack gender-affirming health care. The government might use personal data in its effort to retaliate against those who stand in its way. Such efforts might be assisted by mobs of vigilantes who will use personal data to dox, threaten, embarrass, and harm anyone they don’t like — much like the way many people eagerly assisted totalitarian regimes in finding “undesirables” and rooting out and punishing dissenters.

Our best hope for protection is that legislators in Massachusetts and other states who are concerned about these risks take steps now to upgrade their privacy laws.

Continue Reading