There have been quite a number of state HIPAA enforcement cases this year, and one expert points to a trend of increasing state enforcement of HIPAA.
An article in Data Breach Today surveys a number of state HIPAA enforcement cases. Here are some highlights:
Massachusetts — $75,000 settlement with McLean Hospital for a data breach involving 1,500 victims, stemming from an employee who routinely took home unencrypted backup tapes containing PHI. From the state press release:
The AG’s complaint alleges that McLean, a psychiatric hospital in Belmont, allowed an employee to regularly take home eight unencrypted back-up tapes containing clinical and demographic information from the Harvard Brain Tissue Resource Center that the hospital possessed. The tapes contained personal information such as names, social security numbers, diagnoses and family histories. When the employee was terminated from her position at McLean in May 2015, she only returned four of the tapes, and the hospital was unable to recover the others.
New Jersey — $100,000 settlement with EmblemHealth for a 2016 breach involving 81,000 victims. Details from the state’s press release:
The incident at issue took place on October 3, 2016 when EmblemHealth’s vendor sent a paper copy of EmblemHealth’s Medicare Part D Prescription Drug Plan’s Evidence of Coverage to 81,122 of its customers, including 6,443 who live in New Jersey.
The label affixed to the mailing improperly included each customer’s HICN, which incorporates the nine digits of the customer’s Social Security number, as well as an alphabetic or alphanumeric beneficiary identification code. (The number shown was identified as the “Package ID#” on the mailing label and did not include any separation between the digits.)
During its investigation, the Division found that following the departure of the EmblemHealth employee who typically prepared the Evidence of Coverage mailings, the task was assigned to a team manager of EmblemHealth’s Medicare Products Group, who received minimal training specific to the task and worked unsupervised. Before forwarding the data file to the print vendor, this team manager failed to remove the patient HICNs from the electronic data file.
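The failure here was mechanical: a nine-digit HICN was left in an electronic data file that was then merged onto mailing labels. A minimal sketch of the kind of scrubbing step that could have caught it before the file went to the print vendor (the field names and the HICN pattern below are assumptions for illustration, not EmblemHealth's actual schema):

```python
import csv
import io
import re

# Assumed HICN shape for this sketch: nine SSN digits followed by a
# one- or two-character beneficiary identification code, no separators.
HICN_PATTERN = re.compile(r"^\d{9}[A-Z]\d?$")

def redact_hicns(csv_text, sensitive_fields=("hicn",)):
    """Return a copy of a mailing data file with HICN-like values removed.

    Any value in an explicitly named sensitive field, or any value in
    another field that matches the HICN pattern, is replaced with a
    redaction marker before the file is forwarded to a print vendor.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for field, value in row.items():
            if field.lower() in sensitive_fields or HICN_PATTERN.match(value or ""):
                row[field] = "[REDACTED]"
        writer.writerow(row)
    return out.getvalue()

# Example: the HICN column is scrubbed, and a stray HICN-shaped value
# hiding in another column is caught by the pattern check as well.
sample = "name,hicn,package_id\nJane Doe,123456789A,987654321B\n"
print(redact_hicns(sample))
```

A check like this is no substitute for training and supervision, but it illustrates how a single automated pass over the outbound file could have flagged the very mistake the Division described.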
In what must be one of the most ridiculous data security incidents on record, a law firm employee sent a client file on an unencrypted thumb drive through the mail. The file contained Social Security information and other financial data.
The envelope arrived without the USB drive. The firm contacted the post office.
What happened next is truly bizarre. Here’s an excerpt from the law firm’s letter notifying the state attorney general:
Cybersecurity litigation is currently at a crossroads. Courts have struggled in these cases, coming out in wildly inconsistent ways about whether a data breach causes harm. Although the litigation landscape is uncertain, there are some near certainties about cybersecurity generally: There will be many data breaches, and they will be terrible and costly. We thus have seen the rise of cybersecurity insurance to address this emergent and troublesome risk vector.
I am delighted to be interviewing Kimberly Horn, who is the Global Focus Group Leader for Cyber Claims at Beazley. Kim has significant experience in data privacy and cyber security matters, including guiding insureds through immediate and comprehensive responses to data breaches and network intrusions. She also has extensive experience managing class action litigation, regulatory investigations, and PCI negotiations arising out of privacy breaches.
In the space of just a week, California passed a bold new privacy law — the California Consumer Privacy Act of 2018. The law was hurried through the legislative process to head off a proposed ballot initiative of the same name. The ballot initiative was the creation of Alastair Mactaggart, a real estate developer who spent millions to bring it to the ballot. Mactaggart indicated that he would withdraw the initiative if the legislature passed a similar law, and this is what prompted the rush to pass the new Act, as the deadline to withdraw the initiative was looming.
The text of the California Consumer Privacy Act is here. The law becomes effective on January 1, 2020.
There are others who summarize the law extensively, so I will avoid duplicating those efforts. Instead, I will highlight a few aspects of the law that I find to be notable:
(1) The Act creates greater transparency about the personal information businesses collect, use, and share.
(2) The Act provides consumers with a right to opt out of the sale of personal information to third parties, and it attempts to restrict penalizing people who exercise this right. Businesses can’t deny goods or services to those who opt out, charge them different prices (including through the use of discounts), or provide a “different level or quality of goods or services to the consumer.” However, businesses can do these things if the difference is “reasonably related to the value provided to the consumer by the consumer’s data.” This is a potentially large exception, depending upon how it is interpreted.
(3) The Act allows businesses to “offer financial incentives, including payments to consumers as compensation,” for collecting and selling their personal information. Financial incentive practices cannot be “unjust, unreasonable, coercive, or usurious in nature.” I wonder whether this provision will undercut the restriction on offering different pricing or levels of service in exchange for allowing the collection and sale of one’s information. Through some clever adjustments, businesses that were enticing consumers with different prices or discounts to allow the collection and sale of their personal data may simply restructure those practices as “financial incentives.”
I hope you enjoy my latest cartoon about data security — a twist on the angel on one shoulder and devil on the other. Humans are the weakest link for data security. Attempts to control people with surveillance or lots of technological restrictions often backfire. I believe that the most effective solution is to train people. It’s not perfect, but if training is done right, it can make a meaningful difference.
I hope you enjoy my latest cartoon about passwords on the Dark Web. These days, it seems, login credentials and other personal data routinely stock the shelves of the Dark Web. Last year, a hacker was peddling 117 million LinkedIn user email addresses and passwords. And, late last year, researchers found a file with 1.4 billion passwords for sale on the Dark Web. Hackers will enjoy happy shopping for a long time.
In In re Zappos.com, Inc., Customer Data Security Breach Litigation (9th Cir., Mar. 8, 2018), the U.S. Court of Appeals for the 9th Circuit issued a decision that represents a more expansive way to understand data security harm. The case arises out of a breach in which hackers stole personal data on more than 24 million individuals. Although some plaintiffs alleged they suffered identity theft as a result of the breach, other plaintiffs did not. The district court held that the plaintiffs who hadn’t yet suffered identity theft lacked standing.
Standing is a requirement in federal court that plaintiffs must allege that they have suffered an “injury in fact” — an injury that is concrete, particularized, and actual or imminent. If plaintiffs lack standing, their case is dismissed and can’t proceed. For a long time, most litigation arising out of data breaches was dismissed for lack of standing because courts held that plaintiffs whose data was compromised in a breach didn’t suffer any harm. Many courts relied on Clapper v. Amnesty International USA, 568 U.S. 398 (2013), in which the Supreme Court held that the plaintiffs couldn’t prove for certain that they were under surveillance and concluded that they were merely speculating about future possible harm.
Early on, most courts rejected standing in data breach cases. A few courts resisted this trend, including the 9th Circuit in Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010). There, the court held that an increased future risk of harm could be sufficient to establish standing.
Recently, South Dakota and Alabama passed data breach notification laws. These were the last two states to pass such laws, and now all 50 states have breach notification laws. There’s also a federal breach notification requirement under HIPAA (passed with the HITECH Act of 2009).
In 2003, California passed the first data breach notification law. The law didn’t get a lot of attention until the ChoicePoint data breach was announced in 2005. That breach attracted national media attention largely because people started receiving notification letters in the mail. Other states started to follow California’s lead, passing their own breach notification laws. Now, just 15 years later, a milestone has been reached with all 50 states having breach notification laws. Washington, DC also has a breach notification law.
There still is no omnibus federal breach notification statute — just the requirement for health data (protected health information) under HIPAA. Other countries have started to jump on the notification bandwagon. Canada will have a breach notification requirement starting on November 1, 2018. In the EU, the GDPR has a breach notification requirement.
I have mixed feelings about breach notification laws. On the pro side, they have shed a lot of light on data breaches, which used to remain hushed up. The bright light has shown us just how woeful the state of data security is. Individuals have learned a lot from the process as well, including how often their data is affected.
But on the con side, breach notification laws are costly to comply with, amounting to a de facto strict-liability fine on organizations that suffer a breach. The cost is the same whether a company was careful, negligent, or even reckless with regard to its data security. Most problematic of all, breach notification laws have concentrated so much attention on breach response while many other dimensions of data security are neglected. Many policymakers have treated breach notification as the primary policy response to the problem of data security, but notification alone is far from a solution.
Professor Woodrow Hartzog and I are currently working on a book that will explore these issues, so please stay tuned.
My new article was just published: Risk and Anxiety: A Theory of Data Breach Harms, 96 Texas Law Review 737 (2018). I co-authored the piece with Professor Danielle Keats Citron. We argue that the issue of harm needs a serious rethinking. Courts are too quick to conclude that data breaches don’t create harm. There are two key dimensions to data breach harm — risk and anxiety — both of which have been an area of struggle for courts.
Many courts find that anything involving risk is too difficult to measure and not concrete enough to constitute actual injury. Yet, outside of the world of the judiciary, other fields and industries have recognized risk as something concrete. Today, risk is readily quantified, addressed, and factored into countless decisions of great importance. As we note in the article: “Ironically, the very companies being sued for data breaches make high-stakes decisions about cyber security based upon an analysis of risk.” Despite the challenges of addressing risk, courts in other areas of law have done just that. These bodies of law are oddly ignored in data breach cases.
When it comes to anxiety — the emotional distress people might feel based upon a breach — courts often dismiss it out of hand, reasoning that emotional distress alone is too vague and too difficult to prove to be recognized as harm. Yet in other areas of law, emotional distress alone is sufficient to establish harm. In many cases, this point is so well settled that harm is rarely an issue in dispute.
We aim to provide greater coherence to this troubled body of law. We work our way through a series of examples — various types of data breaches — and discuss whether harm should be recognized. We don’t think harm should be recognized in all instances, but there are many situations where we would find harm where the majority of courts today would not.
The article can be downloaded for free on SSRN.
Here’s the abstract:
It’s time for another installment of the funniest hacker stock photos. Because I create information security awareness training (and HIPAA security training too), I’m always on the hunt for hacker photos.
For this round, the theme is the future of hacking, so I looked for hacker stock photos that depict the most state-of-the-art hacking techniques as well as a glimpse of what lies ahead.
If you’re interested in the previous posts in this series see:
The Funniest Hacker Stock Photos 3.0
The Funniest Hacker Stock Photos 2.0
The Funniest Hacker Stock Photos 1.0
Here are this year’s pictures. Enjoy!
Hacker Stock Photo #1
This guy might be one of the creepiest hackers I’ve ever seen.
And, he’s part of a new Las Vegas musical act called “Hacker Man Group.”
Hacker Stock Photo #2
I am quite confused about why this hacker needs a magnifying glass if he’s wearing a virtual reality headset. How does he even see the magnifying glass? I guess this is a twist on The Matrix, as he appears to have the powers to warp time and space.