These days, social media platforms constitute a large portion of the public sphere. By some measures, Facebook is the single largest news source for Americans, and Twitter is a central channel of communication between political leaders and their constituents. Unlike the physical town square of old, however, today's digital public sphere is controlled by large corporations. Unsurprisingly, questions have emerged over how the First Amendment applies in this new public sphere. Most online tech giants are shielded from civil liability under Section 230 of the 1996 Communications Decency Act, which essentially separates the platform from the poster of content. Nevertheless, major platforms now engage in extensive "content moderation," which the same law also protects, removing a wide array of content such as hate speech and mis- and disinformation (i.e., "fake news").
Actors across the political spectrum worry that tech giants now possess vast and largely unchecked discretionary power over speech, out of step with democratic norms and ideals. Some have called for online platforms to be classified as state actors performing a public function, which would subject them to First Amendment jurisprudence, although such an outcome seems unlikely. Others say they should be regulated like public broadcasters. Finally, some have suggested that a "toggle" feature set by users might best reconcile content moderation with First Amendment concerns. Much like the "adult" and "family" settings already in wide use, a toggle would allow users to choose at will between moderated and unmoderated content from web platforms.
Meanwhile, the European Union has shown the most willingness to tax, fine, and regulate the (almost exclusively American) tech companies over the last few years, and its regulatory bodies appear likely to continue taking the lead in shaping many of the rules that govern the conduct of the digital public sphere's masters in Silicon Valley.