“It’s just a flesh wound.”
Over at Privacy & Security Source, Andrew Serwin, a leading privacy lawyer and author of an excellent treatise on privacy law, has a very thoughtful and informative post [link no longer available] about cases where courts found no harm to individuals by data security breaches. Serwin observes:
Virtually every case supports the view that most privacy breaches will not support civil liability because harm typically does not exist.
There are at least three general bases upon which plaintiffs argue they are injured by a data security breach:
1. The exposure of their data has caused them emotional distress.
2. The exposure of their data has subjected them to an increased risk of harm from identity theft, fraud, or other injury.
3. The exposure of their data has resulted in their having to expend time and money to prevent future fraud, such as signing up for credit monitoring, contacting credit reporting agencies and placing fraud alerts on their accounts, and so on.
Courts have rejected all three of these arguments. In many data security breach cases, courts are dismissing claims not because companies practiced reasonable security and weren’t negligent — indeed, in many cases, companies were grossly negligent, even reckless. I’m continually stunned by how often shoddy security practices recur, such as the all-too-common lost laptop holding millions of unencrypted records of consumer data. Instead, courts are dismissing cases even in the face of negligence (or worse) because they conclude that people aren’t really harmed by the exposure of their data.
Serwin’s post discusses the recently decided case In re Hannaford Bros. Data Security Breach Litigation (Maine Supreme Judicial Court, Sept. 21, 2010), in which the court examined the third argument above:
The plaintiffs here have suffered no physical harm, economic loss, or identity theft. As the federal district court recognized, actual injury or damage is an element of both negligence and breach of contract claims. . . .
Our case law, therefore, does not recognize the expenditure of time and effort alone as a harm. The plaintiffs contend that because their time and effort represented reasonable efforts to avoid reasonably foreseeable harm, it is compensable. However, we do not attach such significance to mitigation efforts. . . . Unless the plaintiffs’ loss of time reflects a corresponding loss of earnings or earning opportunities, it is not a cognizable injury under Maine law of negligence.
I find it troubling that courts won’t recognize harm in a data security breach. In an earlier post about the careless granting of credit, I wrote that one of the problems with courts failing to find any harm is that it makes it cost-effective for companies to fail to invest in adequate security practices:
The reason so much identity theft occurs is because it is cheaper to expose people to the risk of identity theft than to exercise more care in vetting credit applications. Courts and legislatures are also to blame, for they fail to adequately recognize the harm of identity theft (or data breaches) and will not make companies internalize the full costs. So the companies do their cost-benefit analysis and conclude that they can expose people to the risk of identity theft because many costs are external — and if people sue, courts won’t recognize them.
People do suffer real emotional distress because of a data security leak. In other contexts, courts readily recognize emotional distress alone as a cognizable injury. Suppose a company, instead of leaking a person’s data, leaked a person’s nude photo. In many cases under the public disclosure of private facts tort, courts have no trouble at all recognizing an injury when a person’s nude photo is disclosed, even when the disclosure causes the person no reputational or financial harm. The harm is merely embarrassment and emotional distress.
A data security breach also makes people worse off by subjecting them to future risk; they are made more vulnerable. Imagine I own two safe-deposit boxes that I want to rent out, and I have lost the key to Box 1 but not to Box 2. Is Box 1 really worth the same as Box 2? If I remove the locks from the doors of your house, and no burglar or intruder has yet appeared, is there no harm to you? I think there is — you’re clearly worse off. Or suppose a company harms people by completely destroying their immune systems, but those people haven’t yet gotten sick. Are they not harmed?
One concern is that setting the standard for harm too low will invite excessive litigation. One of the problems with data security these days is what I call the “multiplier problem.” Entities have data on so many people that when there’s a leak, millions could be affected, and even a small amount of damages for each person might add up to a penalty that threatens a company’s business. We live in a world where it is relatively easy for a company to affect millions of people this way.
But we generally make companies that cause wide-scale harm pay for it. When BP has an oil leak, we demand that it pay for the cleanup and compensate people for its effects. The sentiment behind this demand is that BP was profiting from its activities, and it shouldn’t make other people suffer for its failure to provide adequate safety. In other words, BP must own the harm it created.
But with a data leak, courts are saying that companies should be off the hook. They get to use data on millions of people without facing any consequences when they cause harm.
This is a problem. Danielle Citron’s thoughtful paper, Reservoirs of Danger, argued that those keeping data should be treated similarly to those engaging in hazardous activities. I agree. If you’re going to profit by using people’s data, you should at least be held responsible for compensating people when you fail to keep it secure.
But the multiplier problem is something we cannot ignore. Data is so easy and cheap to collect today that many companies can affect millions of people. We don’t want to put a company out of business over a breach that causes only minor harm to each person but affects so many people that the damages add up to an astronomical sum.
I don’t think the answer is to make it hard or impossible for people to prove harm. Our litigation system has, unfortunately, become too costly and unwieldy, but there must be a way to compensate people within reason without bankrupting a company. Perhaps the law should provide for limited liquidated damages per person, with an exception for incidents of extraordinary harm.
Another approach might be to find a way to adjudicate cases that is less cumbersome and expensive, where most of the money goes to compensate people.
Or maybe we should require companies that collect data to pay into a general fund, administered by the government, to compensate people (something like workers’ compensation). The payment would function like an insurance premium, higher or lower depending on whether a company followed industry standards, how much data it held, how sensitive that data was, and whether the company had suffered breaches in the past.
For more on this topic, Serwin’s article, Poised on the Precipice, is well worth reading.
Originally Posted at Concurring Opinions
* * * *
This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy training, data security training, HIPAA training, and many other forms of awareness training on privacy and security topics. Professor Solove also posts at his blog at LinkedIn. His blog has more than 1 million followers.