News, Developments, and Insights



In an interesting and thoughtful critique of Danielle Citron’s Cyber Civil Rights, Michael Froomkin argues that Danielle’s proposal to require ISPs to maintain records of IP addresses will spell “the complete elimination of anonymity on the US portion of the Internet in order to root out hateful speech.” Anonymous speech should be strongly protected, as it is key to allowing people to express themselves candidly and openly, without fear of reprisal. It is especially important to promote dissenting views that are outside the mainstream of conventional thought. But the key issue with anonymity online is: How much do we want to protect it? Anonymous speech can lead to harmful defamation, invasion of privacy, intentional infliction of emotional distress, as well as criminal conduct, such as the spread of child porn. Is there a way to protect anonymity yet not let it get too out of hand?

In Chapter 6 of The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (available free online), I distinguished between anonymity and traceability. Anonymity, defined broadly to also encompass the use of pseudonyms, is the ability to speak without publicly linking one’s words to one’s real name. Traceability is something different — it is the ability to trace one’s words back to one’s identity. In my book, I propose that we have traceable anonymity — we should preserve people’s right to speak anonymously, but that doesn’t mean people should be able to be untraceable. Given a compelling reason, anonymous speakers should be unmasked. Practically, most people don’t know how to be untraceable. But what if it were easier? What if people could speak anonymously and never be tracked down, no matter how harmful or criminal their speech or dissemination of information online might be?

Citron’s argument, as I understand it, is for improving traceability, not for eliminating anonymity. She wants to make it easier to link people to their anonymous comments only if these comments cause tortious or criminal harm.

As I see it, the primary arguments against traceability are:

1. There is nothing that one can say online that should be legally restricted. “Sticks and stones can break my bones but words can never hurt me.” Traceability is therefore not necessary and is harmful because it is in the service of promoting tort or criminal limitations to speech online.

2. Courts too readily allow the unmasking of anonymous speakers. Such unmasking should occur only in the most compelling situations, where there is no alternative and where the speakers have clearly engaged in tortious or criminal conduct. Judicial standards for linking IP addresses to speakers are too varied, and many are far too underprotective of anonymity. Therefore, until the standards are sufficiently high and well-settled, traceability should not be promoted.

3. Even with a very strict standard for unmasking anonymous speakers, there’s a danger in ISPs retaining information for traceability purposes. There’s always a risk of function creep, and of the government seeking to obtain such records for nefarious purposes. To truly preserve anonymity, one must prevent traceability, since people can feel fully unfettered in expressing unpopular views only if there is no risk that the standard for unmasking them will later be lowered or that the government will seek their identities for persecution. The only way to fully guard against these risks is to allow people to be untraceable.

Variants of argument 1 are often made to support untraceable anonymity, but I find this argument rather weak. The law has, by and large, rejected argument 1 — at least in many contexts. But I find arguments 2 and 3 quite compelling, though I still favor traceable anonymity. I believe, however, that any system of traceable anonymity must respond to arguments 2 and 3. Is there a way to develop such a system? Is there a response to arguments 2 and 3? Are there other arguments I’m missing?

Originally Posted at Concurring Opinions

* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy training, data security training, HIPAA training, and many other forms of awareness training on privacy and security topics. Professor Solove also posts at his blog at LinkedIn. His blog has more than 1 million followers.

Professor Solove is the organizer, along with Paul Schwartz, of the Privacy + Security Forum and International Privacy + Security Forum, annual events designed for seasoned professionals.

If you are interested in privacy and data security issues, there are many great ways Professor Solove can help you stay informed:
LinkedIn Influencer blog
