What’s the Deal with Digital Censorship?
Bringing an outdated legal framework into the 21st Century
The year is 1995. Setting your coffee down, you start up your eggshell-white Toshiba PC, equipped with the latest Windows 95 tech and a 32-bit processor. You double-check no one in the house is using the telephone, lest they interfere with your *high-speed* dial-up connection. With about 20 minutes to kill while your computer ‘warms up’, you are suddenly struck by the true potential of the digital world. The young internet—abuzz with chatrooms, public forums, and text-based blogs—could someday become the primary medium through which all public debate and discussion occurs. Then, another sobering thought: If the internet is to become the proverbial ‘street corner’ or ‘market café’, how do we preserve the variety and openness of discussion that occurs there? Will our current, real-life free speech protections be enough to prevent subversion of the free market of ideas in the digital world?! Beginning to panic, you envisage a ‘Brave New Digital World’ in which the only views espoused on the World Wide Web are those deemed ‘acceptable’ by the dominant social class; all alternative perspectives are suppressed. But who will be responsible for this censorship and suppression? Just before hyperventilation sets in, your trance is broken by the distinctive cyber-sound of your dial-up modem. A-ha! The identity of the villains in this dystopian future dawns on you. The Service Providers. You see it all: by the year 2020, fat-cats like Netcom and UUNET (major providers at the time) will use their digital data dominance to control public conversation and oppress dissenters! Something must be done!
Nearly 30 years later, these fears are coming to fruition, though it is social media platforms and algorithmic search engines, rather than internet service providers, that threaten free speech in the 21st century. But how did we get here? The answer is that the legal framework governing censorship of digital content was designed in the 1990s, when the digital world looked much different. Dramatic changes to the size and makeup of the internet necessitate an important discussion about how to reform that framework to better protect free expression and First Amendment principles.
The first type of legal entity involved in this discussion is the Internet Service Provider (ISP). ISPs, such as Verizon and AT&T, supply the public with access to the internet through cable, wireless, and fiber-optic connections. We will refer to the second type of entity as the Internet Content Provider (ICP). ICPs are the entities that own and govern websites and search engines, such as Meta, X, Google, and YouTube. ICPs provide internet users access to the ICPs’ servers, where users can view and interact with content published by the ICPs themselves, as well as by third-party contributors.
Internet Service Providers (ISPs) and Net Neutrality Laws
Though the roles played by ISPs and ICPs differ, each is a kind of internet ‘gatekeeper’. Given this responsibility, an important legal discussion emerged in the 1990s around the ability of ISPs and ICPs to control and censor information. Legislators foresaw the internet becoming the dominant medium for news and discussion, but had little idea what the internet’s eventual structure and landscape would look like. A major concern was that ISPs—a small number of large, private companies—could control public discourse by hindering or directing the flow of certain information, thus subverting the laissez-faire dissemination of ideas. Although this has not proven to be the case, the fear was not totally unfounded. One can certainly imagine a world where an ISP, such as AT&T, intentionally blocks access to Breitbart.com, covertly reducing Breitbart’s viewership and suppressing the news and viewpoints published there.
Consequently, ISPs are governed by a body of law designed to ensure the “best content and applications emerge as a result of user preference rather than provider favoritism.”[1] Rooted in the Telecommunications Act of 1996, these Net Neutrality rules prohibit censorship by ISPs in order to encourage the free and open spread of information, preventing ISPs from blocking, slowing down, or charging different rates based on content.[2][3] Though other concerns are implicated in this discussion, the primary aim of Net Neutrality was to ensure ISPs remain pure intermediaries, providing access to internet content free from “intermediate error checking or filtering”.[4]
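To make the AT&T hypothetical above concrete, here is a minimal, purely illustrative Python sketch of the kind of content-based traffic policy that Net Neutrality rules forbid: an ISP that blocks some sites outright, throttles others, and fast-lanes partners who pay. The domain names, speeds, and data structures are all invented for illustration; no real provider's systems are being described.

```python
# Illustrative only: a hypothetical ISP traffic policy of the kind
# Net Neutrality rules prohibit. Domains and speeds are made up.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # "block", "throttle", or "allow"
    bandwidth_mbps: int  # effective speed the subscriber actually gets

BLOCKED_SITES = {"disfavored-news.example"}        # outright blocking
THROTTLED_SITES = {"rival-streaming.example": 3}   # deliberate slowdown (Mbps)
PAID_FAST_LANE = {"partner-video.example": 500}    # paid prioritization (Mbps)
DEFAULT_MBPS = 100

def route_request(domain: str) -> Decision:
    """Decide how to treat a subscriber's request based on *what* they ask for."""
    if domain in BLOCKED_SITES:
        return Decision("block", 0)
    if domain in THROTTLED_SITES:
        return Decision("throttle", THROTTLED_SITES[domain])
    if domain in PAID_FAST_LANE:
        return Decision("allow", PAID_FAST_LANE[domain])
    return Decision("allow", DEFAULT_MBPS)

if __name__ == "__main__":
    for site in ["disfavored-news.example", "rival-streaming.example",
                 "partner-video.example", "ordinary-blog.example"]:
        print(site, route_request(site))
```

A neutral ISP, by contrast, would return the same decision for every domain; the whole point of the rules quoted above is to make the routing step content-blind.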
Internet Content Providers (ICPs) and Section 230
ICPs, on the other hand, were the subject of a slightly different conversation in the early internet era. Back then, the internet was an agglomeration of private blogs, text-based chatrooms, and numerous search engines competing for dominance (anyone remember Ask Jeeves?). The priority was again to produce a digital realm of free thought and open discussion, but also to allow websites and search engines to protect users (mainly children) from harassment, obscenity, and spam. Both concerns are reflected in Section 230 of the Communications Decency Act of 1996. Unlike Net Neutrality laws, which govern ISPs’ control over internet access, Section 230 governs ICPs’ control over content that is published on the ICPs’ own servers by third parties.
The first concern is reflected in 230(c)(1), which protects ICPs from ‘publisher’ liability for content published on their sites by third parties. For example, under this section, Facebook could never be held liable for defamatory remarks posted to Facebook.com by a third-party user. Like Net Neutrality laws for ISPs, this was intended to preserve the internet’s “vibrant and competitive free market,” allowing ICPs to facilitate open debate among users without fear that users’ speech would expose the ICPs themselves to liability.[5]
The second concern—protecting users from obscenity and harassment—is addressed by 230(c)(2), which allows ICPs to censor “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content, regardless of whether such content is constitutionally protected.[6] The idea was to give ICPs power to preserve a ‘clean’ digital environment, safe for children as well as adults. This censorship power has been bolstered by many courts’ interpretation of the statute as allowing ICPs to define and censor “objectionable” material, as long as the ICP does so in “good faith”.[7] This allows ICPs to censor any material they object to, resulting in “subjective and unpredictable censorship of literature, art, and political discussion”.[8]
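To illustrate how the catch-all swallows the rule, consider this hypothetical moderation check written in Python. The enumerated categories are taken directly from the statute, but the “otherwise objectionable” branch is whatever the platform says it is, so in practice any post can be removed. The labeling scheme and function names are assumptions standing in for whatever human or automated review a real ICP actually uses.

```python
# Illustrative only: how Section 230(c)(2)'s wording maps onto a removal decision.
# The statute's enumerated categories are finite; the catch-all is not.
ENUMERATED = {"obscene", "lewd", "lascivious", "filthy",
              "excessively violent", "harassing"}

def may_remove(post_labels: set[str], platform_objects_to: set[str]) -> bool:
    """Return True if a post can be taken down under the prevailing reading
    of Section 230(c)(2)."""
    # Removal for an enumerated category: the outcome Congress clearly intended.
    if post_labels & ENUMERATED:
        return True
    # "Otherwise objectionable": the platform defines this category itself,
    # so the branch can justify removing essentially anything.
    return bool(post_labels & platform_objects_to)

# Example: a post labeled only as political commentary.
post = {"political commentary"}
print(may_remove(post, platform_objects_to=set()))                     # False
print(may_remove(post, platform_objects_to={"political commentary"}))  # True
```

The reform discussed in the final section amounts to deleting that second branch.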
In short, because lawmakers in the 1990s were focused on the censorship threat posed by ISPs rather than ICPs, Section 230 has allowed major ICPs like Facebook and YouTube to engage in censorship without transparency or consistency.
The Internet is a Different Animal Now
As well-intentioned as Section 230 may have been, the internet has since undergone rapid growth and change, becoming a much different animal from the one it was in the 1990s. For starters, in 1996 about 20% of U.S. adults were online, and far fewer relied on the internet for news.[9] Today, the internet is the major news source for Americans[10], meaning digital platforms have much greater influence on the public discourse and knowledge base than when Section 230 was passed.
Even more significantly, the way the public actually uses the internet has changed dramatically. Net Neutrality laws conceived of the World Wide Web in its entirety as the broad ‘marketplace of ideas’. While this remains true to an extent, the digital space where debate and discussion actually occur has become much more constrained. Gone are the atomized blogs and chatrooms of 1996—today, a small handful of social media sites are the primary forums for public debate and discussion (think Facebook, X, TikTok).[11] Gone too is the multitude of search engines vying for supremacy—today, essentially all Americans use Google for their internet searches.[12] And, as President Trump has discovered with Truth Social, it is nearly impossible to change the digital landscape by introducing alternative platforms, even with massive capital and name recognition.[13] The new digital landscape has been defined and is not likely to change anytime soon.
In short, Net Neutrality laws were intended to prevent a small group of private actors—ISPs—from influencing the flow of information and affecting, rather than merely facilitating, public discussion. The dominance of the internet by a handful of ICPs, and the censorship freedom allowed by Section 230, is now producing the very concerns that gave rise to Net Neutrality laws when the internet was young. Because nearly all public discussion occurs on a handful of platforms, these ICPs have become the ‘new marketplace’. Section 230 allows ICPs to dictate the rules of this new marketplace, engaging in subjective and biased moderation and censorship. It turns out the ICP is the censorious boogeyman, not the ISP, as was originally predicted.
But Like, Is This Reeeeally Happening?
Unfortunately, these fears are not simply theoretical. A recent example is The Global Alliance for Responsible Media (GARM), a now-defunct NGO comprising major media agencies, media platforms, and advertisers, designed to “safeguard the potential of digital media by reducing the availability and monetization of harmful content online.”[14] Critics, including conservative lawmakers, argue GARM used advertiser boycotts to demonetize publications posting content it disfavored, disproportionately targeting conservative outlets, and pressured social media platforms to censor such content.[15] This would not be possible without Section 230’s allowance of discretionary content moderation.
Additionally, Section 230 presents governmental actors with a rare opportunity for a First Amendment workaround. Section 230 permits ICPs, as private actors, to censor even speech protected by the First Amendment, something government actors cannot do. So, if a government agency wishes to prevent public discussion or knowledge of a topic, it need only exert pressure on the major ICPs to censor that topic. And ICPs do cave to this pressure, perhaps fearing regulatory blowback if they refuse. Indeed, Mark Zuckerberg, CEO of Meta (which owns Facebook and Instagram), recently admitted that, prior to the 2020 Presidential Election, he gave in to FBI pressure to censor stories relating to Hunter Biden’s laptop[16], which contained information implicating Joe Biden in multiple corruption scandals.[17] Facebook labeled the laptop ‘Russian disinformation’ and used the platform’s algorithm to suppress the story. Twitter, under then-CEO Jack Dorsey, went a step further, completely blocking users’ ability to share or discuss any such article.[18] The real kicker? The laptop was real. The entire story was true (no Russian involvement whatsoever), highlighting the censorship power of ICPs and the sinister repercussions of that power. The First Amendment? Be damned.
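We do not know how Facebook’s ranking system actually works, but the basic mechanics of ‘algorithmic suppression’ are easy to sketch: attach a label to a story and multiply its ranking score by a demotion factor so it rarely surfaces in anyone’s feed. Everything below, including the field names and the 0.05 demotion factor, is a hypothetical illustration, not Meta’s implementation.

```python
# Hypothetical sketch of label-based demotion in a feed-ranking step.
# Not Meta's code; scores, labels, and factors are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    engagement_score: float       # whatever base relevance signal the ranker uses
    labels: set = field(default_factory=set)

DEMOTION_FACTORS = {"disputed_by_fact_checkers": 0.05}  # assumed multiplier

def ranked_feed(stories: list) -> list:
    """Order stories by score after applying label-based demotions."""
    def final_score(s: Story) -> float:
        score = s.engagement_score
        for label in s.labels:
            score *= DEMOTION_FACTORS.get(label, 1.0)
        return score
    return sorted(stories, key=final_score, reverse=True)

feed = ranked_feed([
    Story("Local weather update", 1.0),
    Story("Laptop story", 9.0, {"disputed_by_fact_checkers"}),
])
print([s.title for s in feed])  # the heavily demoted story sinks to the bottom
```

Nothing is deleted outright, which is what makes this kind of suppression hard to detect: users simply stop seeing the story.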
Unfortunately, the laptop story is not a one-off. Zuckerberg has also admitted to censoring COVID ‘disinformation’, much of which has since been proven true.[19] Further, the Biden-Harris Administration was subject to lawsuits in 2022 and 2023 for pressuring social media companies to suppress content and ads from conservative outlets.[20][21] More recently, former Secretary of State John Kerry lamented the “dangers” of social media sites that prioritize freedom of speech, accusing them of inciting “anguish” and threatening democracy.[22]
Perhaps this explains the tremendous anger directed by leftist politicians and the establishment media at Elon Musk following his purchase of the platform X (formerly Twitter). Since the takeover, both foreign and American officials have condemned Musk’s actions and threatened Musk personally, in attempts to undo his transformation of the social media site.[23][24] What has Musk done to warrant this severe reaction? He just kinda stopped censoring everyone.[25] Since then, formerly banned users like President Trump and Alex Jones have returned to the platform, and free speech now abounds. Which means no more First Amendment workaround, where X is concerned. The anger is starting to make sense now….
Potential Solutions
The question is: how can the dual goals of Section 230 be preserved while still precluding inconsistent and politically motivated censorship by ICPs? The first step is to set limits on what content can be censored in the first place, and to reject the subjective standard we have been applying. To do this, we must accept that the internet is now the dominant discussion forum, where essentially all of the country’s discussion and debate occurs. We cannot afford to allow subjective censorship here, any more than we can allow the government to censor speech on the public street corner. We must thus remove the catch-all term “otherwise objectionable” so the only types of censorable speech are the categories explicitly listed by the statute (“obscene, lewd, lascivious, filthy, excessively violent, harassing”). This would preserve both original intentions of Section 230—allowing free conversation on important issues and keeping the internet child-friendly—while limiting ICPs’ ability to censor content simply by deeming it “objectionable.”
Next, we must ensure ICPs apply the standard in a neutral manner. For example, it would still be an issue if Facebook began censoring violent speech uttered by Hamas supporters but gave far-right hate groups free rein to harass and threaten users. Section 230 must thus define the ‘good faith’ standard to allow censorship only when it is applied in an ideologically even-handed manner and accompanied by a reasonable explanation. Requiring even-handed, apolitical bases for censorship and a “reasonable explanation” would force ICPs to embrace greater transparency and accountability to users, rather than remaining free to hide behind the blanket immunity Section 230 currently provides.
The consequence of continued noncompliance and breaches of good faith would be loss of immunity under Section 230 and re-classification as a “publisher”, exposing noncompliant ICPs to publisher liability for third-party content until compliance is restored and sustained. This remedy is a logical recourse because, by overstepping or selectively applying the censorship standard, ICPs would be acting not as neutral “platforms” or conduits, but more like publishers, filtering content for a specific political or ideological purpose. These changes would cure abuses of Section 230 while preserving its twin aims, and they would hold ICPs and ISPs to consistent standards, which is fitting given their comparable levels of informational control.
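To tie the proposal together, here is a hedged sketch of how the amended standard might operate if encoded as a compliance check: every removal must cite an enumerated category and carry a written explanation, and a platform whose removals skew overwhelmingly against one viewpoint loses its immunity flag. The disparity threshold, the viewpoint labels, and the data model are inventions for illustration, not anything found in the statute or in the proposal above.

```python
# Hypothetical compliance check for the amended Section 230 described above.
# Categories come from the statute; the disparity threshold is an assumption.
from dataclasses import dataclass

CENSORABLE = {"obscene", "lewd", "lascivious", "filthy",
              "excessively violent", "harassing"}   # no "otherwise objectionable"
MAX_VIEWPOINT_DISPARITY = 0.30                      # assumed even-handedness bound

@dataclass
class Removal:
    category: str        # must be one of CENSORABLE
    explanation: str     # the "reasonable explanation" requirement
    viewpoint: str       # e.g. "left", "right", "none" (however it is measured)

def retains_immunity(removals: list) -> bool:
    """Return True if the platform's removals satisfy the amended standard."""
    for r in removals:
        if r.category not in CENSORABLE or not r.explanation.strip():
            return False   # removal outside the enumerated list, or unexplained
    # Crude even-handedness check: no viewpoint absorbs too large a share of removals.
    politicized = [r for r in removals if r.viewpoint != "none"]
    if politicized:
        counts = {}
        for r in politicized:
            counts[r.viewpoint] = counts.get(r.viewpoint, 0) + 1
        top_share = max(counts.values()) / len(politicized)
        if top_share > 0.5 + MAX_VIEWPOINT_DISPARITY:
            return False   # moderation skews too far against one side
    return True

removals = [
    Removal("harassing", "threats directed at a named user", "right"),
    Removal("excessively violent", "graphic calls for violence", "left"),
    Removal("obscene", "explicit imagery", "none"),
]
print(retains_immunity(removals))   # True: enumerated, explained, even-handed
print(retains_immunity([Removal("otherwise objectionable", "", "right")]))  # False
```

A platform failing such a check would, under this proposal, be treated as a publisher for the period of noncompliance.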