When Donald Trump targeted the Communications Decency Act (CDA) Section 230, a debate about the law flared up. Numerous reforms were proposed, some even seeking to abolish the law. Unfortunately, the debate has been clouded with confusion and misinformation.
Although I disagree with many of the proposals to reform it or abolish Section 230, I have long believed that it has problems. A decade ago, I critiqued Section 230 extensively in my book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (2007) (free download here).
The CDA's Section 230, codified at 47 U.S.C. § 230(c)(1), provides:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The actual text of the law is fine, and I wouldn’t change it. My proposal for reform would be for Congress to reissue Section 230 with the same text and instruct courts to follow the actual text of the law. The problem with Section 230 is that, in a bout of free speech zeal, courts have interpreted the law far more expansively than its text provides or than it should be.
Why Section 230 Was Created
To understand Section 230, we must understand the context in which it was created. Section 230 was included in the Communications Decency Act (CDA) as a response to an early case of defamation on the Internet. Before getting to this case, it is important to understand how defamation law works. There are three categories of defendants:
Speaker – the original source who stated the defamatory statement
Publisher – one who repeats the defamatory statement
Distributor – one who distributes or sells defamatory material (examples: news vendors, libraries, and booksellers)
Speakers are liable for the defamatory statements they make. Publishers who repeat the statement are as liable as the original speaker. But distributors are liable only if they knew or should have known that the material they distributed was defamatory.
An issue that arose in early online defamation cases involved whether an Internet Service Provider was a speaker, publisher, or distributor. In Cubby, Inc. v. CompuServe (S.D.N.Y. 1991), a gossip column called Rumorville published defamatory comments about the plaintiff. The plaintiff sued CompuServe, the ISP that hosted Rumorville. The court concluded that CompuServe was a distributor. CompuServe merely hosted Rumorville on its server and had no more editorial control than a library, bookstore, or newsstand.
However, in Stratton Oakmont v. Prodigy (N.Y. Sup. Ct. 1995), a very early case involving online defamation, an anonymous person posted on an online bulletin board some defamatory statements about a securities investment firm (Stratton Oakmont) and its president (Daniel Porush). The bulletin board was hosted by Prodigy, an Internet Service Provider. The court concluded that Prodigy was a speaker or publisher. Prodigy held itself out as an ISP that exercised editorial control: it touted itself as “family oriented” and exercised editorial control similar to that of a newspaper.
When Congress was passing the CDA, it included Section 230 to further the goals of that statute – to promote decency online. The CDA was generally a rather foolish and ill-informed law, as most of it was clearly a violation of the First Amendment. Although the Internet of the 1990s was a cesspool of porn, the CDA’s approach of restricting offensive speech was unequivocally invalid under the First Amendment, leading to a 9-0 U.S. Supreme Court decision striking down much of the law. It is surprising that Congress wasted so much effort on this law when anyone who knew anything about the First Amendment could say with hardly any doubt that the law wouldn’t fly. Section 230, however, survived the wreckage after the Supreme Court’s decision.
Why was Section 230 included in a statute that was about promoting decency in Internet speech? The reason is that Congress wanted to encourage ISPs to police content and clean up the yucky stuff. Stratton Oakmont punished Prodigy for doing this; had Prodigy not edited content to be family friendly, it would have escaped speaker/publisher liability. But because Prodigy tried to keep things clean, it was worse off than CompuServe, which didn’t do anything.
With this background, the text of Section 230 should now be quite clear: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In essence, Section 230 is overruling Stratton Oakmont and is making it clear that the rule in Cubby should control for all ISPs, including those like Prodigy that exercise editorial control.
This rule is a good rule. ISPs and platforms (such as Facebook, Twitter, and other social media companies) should not be punished for exercising editorial control. It is hard for these companies to know about all of the content being added by so many different users. These platforms and ISPs are distributors, not speakers or publishers.
Zeran v. AOL: How Section 230 Was Transformed Into a Free Speech Cudgel
An early case involving Section 230 turned it into something quite different from what its text actually said. The case was Zeran v. AOL, 129 F.3d 327 (4th Cir. 1997). After the Oklahoma City bombing, an anonymous person posted a message on an AOL bulletin board advertising “Naughty Oklahoma T-Shirts” making jokes about the bombing. The message said interested people should call “Ken” at Kenneth Zeran’s home phone number. Zeran received tons of calls and death threats. He couldn’t change his home number because he used it for his business, which he ran out of his home. Zeran called AOL and asked for the posting to be removed. But AOL didn’t remove the posting quickly, and more postings were made in subsequent days. Zeran repeatedly called AOL to take down the material and shut off the anonymous poster. But AOL didn’t remove everything. Zeran began receiving threatening calls every few minutes, and he had to have police protection. Because the identity of the person making the postings wasn’t ascertainable, Zeran had no recourse against this person. Zeran sued AOL for its delay in removing the defamatory messages, its refusal to post retractions, and its failure to screen for similar postings.
AOL argued that Section 230 immunized it from the lawsuit. Zeran argued that AOL should be liable as a distributor and that Section 230 didn’t restrict distributor liability. The court, however, concluded that although Section 230 didn’t talk about distributors, it nevertheless immunized against distributor liability. The court justified its broad reading of Section 230 as essential to free speech online: “Congress recognized the threat that tort‑based lawsuits pose to freedom of speech in the new and burgeoning Internet medium. The imposition of tort liability on service providers for the communications of others represented, for Congress, simply another form of intrusive government regulation of speech. Section 230 was enacted, in part, to maintain the robust nature of Internet communication and, accordingly, to keep government interference in the medium to a minimum.”
Zeran’s bold and activist interpretation of Section 230 caught on. Courts lined up behind Zeran, transforming Section 230 into an almost nuclear-level immunity from lawsuits. The result is that platforms and ISPs are immune even if they know comments are defamatory or invasive of privacy.
But contrary to the Zeran court’s bold proclamations, Section 230 wasn’t actually enacted to be a great inducer of unfettered free speech. It was enacted to encourage more ISPs to be like Prodigy and to police speech on their networks. Ironically, Section 230 promotes a culture of irresponsibility when it comes to speech online. For example, back in 2007, a website called JuicyCampus enticed college students to post salacious gossip about other students. JuicyCampus facilitated gossipers by promising them anonymity, and the site flaunted its Section 230 immunity. Indeed, as interpreted by Zeran, Section 230 rewards sites like JuicyCampus that profit from encouraging outrageous and harmful content, allowing these sites to escape from dealing with the consequences of the harm this content causes. Eventually, JuicyCampus shut down when its advertisers realized that being associated with the cesspool tarnished their brands and pulled their advertising dollars. Its founder, Matt Ivester, later wrote a book expressing regret about creating the site.
Restoring Section 230 to Its Actual Text and Original Purpose
Section 230 wasn’t designed to be a radical promoter of unfettered free speech online. Instead, it was passed as part of a law that was designed to restrict speech. Section 230’s text, which states that platforms and ISPs should not be treated as speakers or publishers for the content provided by others, is sound. There is no problem with the actual text of Section 230. The problem is the zealous interpretation of Zeran and other courts.
If Section 230 were restored to what its text actually says, the main difference would be that distributor liability would be back. Distributor liability ensures that ISPs, platforms, and other powerful hosts of online forums operate responsibly. There should be accountability, and platforms shouldn’t be allowed to create online spaces where anything goes.
Suppose someone posts a nude photo of you in a comment to my blog post. You ask me to remove it. I say: “Nah, I don’t care. Get lost!” Section 230 immunizes me. But it shouldn’t. Although someone else posted the photo, I provided the forum that allowed this person to post it. I have control over the logs that have the poster’s IP address that can be used to identify the poster. I have control over what information gets posted to my website. I also have built an audience, am disseminating information to that audience, and am making information available online for anyone in the world to see. I am also receiving benefits from having my blog and am monetizing it with ads. With these powers and financial rewards should come some responsibilities to ensure that I’m not causing harm to others.
Contrast this case to intellectual property law. When a company claims its content is being infringed, the law and courts sing a very different tune. Suddenly, there are no free speech concerns. If you owned the copyright to that nude photo on my blog, you could force me to take it down. Section 230 doesn’t apply to copyright.
What would happen if Section 230 didn’t eliminate distributor liability? Some might fear that the Internet would come to an end. Free speech would dry up. Everyone would be forced to speak in dulcet tones. Expression would lose all spice and become bland and boring. Hecklers would barrage ISPs and social media platforms to take down anything they didn’t like. Fearing liability, the ISPs and platforms would remove everything.
Certainly, there is a risk of a heckler’s veto. But it is far from a certainty that this will happen on any scale large enough to be worrisome. The law could address this concern by providing a safe harbor to companies with adequate content moderation. But currently, companies have a long way to go in thinking through thoughtful ways to engage in content moderation. Section 230’s answer, which is to eschew responsibility, is one that experience has shown to be deeply flawed. The writing is on the wall – the pressure will be on platforms to be more responsible with the power they wield, and the days of “anything goes” are coming to an end. Content moderation is hard; it involves thoughtful, nuanced judgment. The first step is to force platforms and ISPs to recognize that they must invest a lot more time and resources into content moderation.
Restoring distributor liability also reduces Internet exceptionalism — the law’s treatment of the Internet with a different set of rules than every other form of media. Print, radio, and TV all play by a very different set of rules than platforms and ISPs; there is no Section 230 beyond the Internet, and free speech still survives offline. I am not proposing to get rid of Section 230, just to restore it to its actual text. Section 230 as actually written does a great job protecting online speech. The extra punch that courts added to Section 230 – the wiping away of distributor liability – is not needed and leads to a more toxic online world.
As I wrote more than a decade ago in The Future of Reputation:
Although existing law lacks nimble ways to resolve disputes about speech and privacy on the Internet, completely immunizing operators of websites works as a sledgehammer. It creates the wrong incentive, providing a broad immunity that can foster irresponsibility. Bloggers should have some responsibilities to others, and Section 230 is telling them that they do not. There are certainly problems with existing tort law. Lawsuits are costly to litigate, and being sued can saddle a blogger with massive expenses. Bloggers often don’t have deep pockets, and therefore it might be difficult for plaintiffs to find lawyers willing to take their cases. Lawsuits can take years to resolve. People seeking to protect their privacy must risk further publicity in bringing suit.
These are certainly serious problems, but the solution shouldn’t be to insulate bloggers from the law. Unfortunately, courts are interpreting Section 230 so broadly as to provide too much immunity, eliminating the incentive to foster a balance between speech and privacy. The way courts are using Section 230 exalts free speech to the detriment of privacy and reputation. As a result, a host of websites have arisen that encourage others to post gossip and rumors as well as to engage in online shaming. These websites thrive under Section 230’s broad immunity.
The solution is to create a system for ensuring that people speak responsibly without the law’s cumbersome costs. The task of devising such a solution is a difficult one, but giving up on the law is not the answer. Blogging has given amateurs an unprecedented amount of media power, and although we should encourage blogging, we shouldn’t scuttle our privacy and defamation laws in the process. . . .
Words can wound. They can destroy a person’s reputation, and in the process distort that person’s very identity. Nevertheless, we staunchly protect expression even when it can cause great damage because free speech is essential to our autonomy and to a democratic society. But protecting privacy and reputation is also necessary for autonomy and democracy. There is no easy solution to how to balance free speech with privacy and reputation. This balance isn’t like the typical balance of civil liberties against the need for order and social control. Instead, it is a balance with liberty on both sides of the scale—freedom to speak and express oneself pitted against freedom to ensure that our reputations aren’t destroyed or our privacy isn’t invaded. . . .
The law currently takes a broadly pro–free speech stance on online expression. As a result, it fails to create any incentive for operators of websites to exercise responsibility with regard to the comments of visitors.
Balancing free speech with privacy and reputation is a complicated and delicate task. Too much weight on either side of the scale will have detrimental consequences. The law still has a distance to go toward establishing such a balance.
Section 230 should not be abolished. But it should be restored to its original meaning and purpose – a much more limited scope than it has now. Recovering distributor liability will lead to a better balance between free speech and protection against harm. It will promote greater responsibility for platforms and ISPs.
For more background about the CDA Section 230, see Professor Jeffrey Kosseff’s book, The Twenty-Six Words that Created the Internet (2019). Professor Eric Goldman’s writings are also well worth reading. He strongly supports the broad interpretation of Section 230 in Zeran. I don’t agree with Eric’s positions, but he is thoughtful and knowledgeable.
* * * *
This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy and data security training. He also posts on his blog at LinkedIn, which has more than 1 million followers.
Professor Solove is the organizer, along with Paul Schwartz, of the Privacy + Security Forum, an annual event designed for seasoned professionals.