This past weekend Section 230 turned 30 years old. In those 30 years it has proven to be a marvelous yet often gravely misunderstood law, as too many people, including in Congress and the courts, mistakenly blame it for all the world’s ills, or at least those that happen in some connection with the Internet. In reality, Section 230 is not why bad things happen online; it is why good things can happen. And it’s why repealing it, or even “just” “reforming” it, will not stop the bad, but it will stop the good.

Unfortunately, even 30 years in, these ignorant efforts to diminish or even outright delete the law continue, despite the harm that would result if they succeeded. Which is why this anniversary seems like a good time to review many of the reasons why the hostility towards Section 230 is so misplaced. Here at Techdirt we’ve collectively spilled a lot of digital ink over the years about why Section 230’s critics are wrong to condemn it, and not just a little bit but completely and utterly, as well as counter-productively. But on this celebratory occasion I thought it would be fun to look back on what I personally have written about Section 230—at least since its 20th birthday celebration and the piece I wrote then—and collect some of these “greatest hits” in a post to help catch up anyone new to thinking about Section 230, who may be unsure why the push to repeal it is so misguided, on why Section 230 is not a law we should be messing with.

What Section 230 does. One reason that people get Section 230 wrong is that there are a lot of myths about it and what it does or does not do. A good place to start is with an overview of how it generally works, and if you like watching videos you can watch this presentation from a few years ago where I gave a crash course in its operation.

In short, though, Section 230 immunizes platform providers from liability in two key ways: from liability for what their users use their services for, and from liability that could result from how they moderate their users’ use of their services. Section 230 aligns platform providers with Congress and makes it possible for them to work towards what Congress wants—the most good material online, and the least bad—by making it legally possible for the providers to do the best they can on both fronts. If it is legally safe for them to allow user expression, because they won’t have to fear being liable for it, they will allow the most good expression, and if it is legally safe for them to remove user expression, because they won’t have to fear being liable for their moderation, then, as this post explains, they will be able to remove the most that is bad.

But Section 230 is not some sort of special favor for Big Tech, as some have suggested. It’s not even one for startups, as others have alleged. In fact, it applies to regular people as much as it applies to anyone. Rather than being any sort of subsidy, it operates more like a rule of civil procedure, ensuring that platforms cannot be drained of resources having to defend themselves against whatever wrong a user’s conduct is accused of. Which is also why “reforming” Section 230 effectively means repealing it: nearly all the proposed reforms would make the statutory protection more conditional, and if platforms are unsure whether they are protected, and in jeopardy of having to litigate the question, then for all intents and purposes they are unprotected, and they will act accordingly, defensively denying more beneficial content, leaving up too much that is harmful, or both.

When Section 230 applies. One of the common myths about Section 230 is that it prevents anyone from ever being held responsible for how the Internet has been used. Not so; Section 230 does nothing to prevent anyone from being accountable for their own behavior. What it does not allow, however, is someone else being held accountable, namely the provider of the platform service they used, because, as discussed above, if the platform could have to answer for how any of their users used their services, they would never be able to offer their services, and if they couldn’t offer their services then there would be no Internet for anyone to use for any of the good, useful, or important things we use it for.

Section 230 also doesn’t immunize platforms for their own actions, only those of their users. The issue sometimes is in telling the two apart, but as this post argues, it’s not actually as hard to figure out as some people would insist. First, the idea that there is some publisher/platform distinction is a fiction; the only thing that matters is whether the provider is providing an interactive computer service of some sort and someone else has provided the content, or whether the platform has provided the content itself. And in the event we get confused about who the content provider is, we can look to see who imbued the offending expression with its allegedly wrongful quality, which more often than not is the user and not the platform. As we’ve understood since the Roommates.com case, that a platform has simply welcomed the expression isn’t enough to put the platform on the hook for it.

Furthermore, the types of content a platform might be immune for intermediating are myriad, including online advertising, which is expression provided by others and then intermediated by a platform (despite what certain state governments think), online dating sites, or online marketplaces—although there have been some issues getting the courts to consistently recognize how Section 230 should apply in that last context, even though the statutory history supports it. Sometimes, though, they still do.

Why Section 230 is important. Regulators can be tempted to take swings at Section 230 because it can be tempting to try to control what can be said on the Internet, and Section 230 gets in the way of those efforts. While the First Amendment also protects platforms’ ability to choose what user expression to facilitate, Section 230 makes that protection meaningful by making those choices practically possible. When they cannot be freely made, then the user expression they facilitate takes a hit.

Which is why efforts to change Section 230 are a problem, because of all the collateral damage they will cause to online expression. But for some regulators, that censorship is the goal and why they have Section 230 in their sights. They want to prevent online expression, because too often it is online expression they don’t like. And, indeed, sometimes the speech is unfortunate, potentially even actionable.

But eliminating Section 230 is no solution at all. If we take away platforms’ ability to be platforms, then we take away everyone’s ability to use them to speak, no matter how important what they have to say is. It’s why we need to defend Section 230, even when it’s hard. There are always things that need to be said online, especially when we need to speak truth to power. Section 230 means we can. And we’d miss it if we couldn’t.
