Document Type

Article

Publication Date

2021

Status

Accepted

Abstract

The year 2020 was without a doubt a remarkable and unprecedented one, on many accounts and for many reasons. Among other things, it was a year in which the major social media platforms extensively experimented with the adoption of a variety of new tools and practices to address grave problems resulting from harmful speech on their platforms — notably, the vast amounts of misinformation associated with the COVID-19 pandemic and with the 2020 presidential election and its aftermath. By and large — consistent with First Amendment values of combatting bad speech with good speech — the platforms sought to respond to harmful online speech by resorting to different types of flagging, fact-checking, labeling, and other forms of counterspeech. Only when confronting the most egregiously harmful types of speech did the major platforms implement policies of censorship or removal — or the most extreme response of deplatforming speakers entirely. In this Article, I examine the major social media platforms’ experimentation with a variety of approaches to address the problems of political and election-related misinformation on their platforms — and the extent to which these approaches are consistent with First Amendment values. In particular, I examine what the major social media platforms have done and are doing to facilitate, develop, and enhance counterspeech mechanisms on their platforms in the context of major elections, how closely these efforts align with First Amendment values, and measures that the platforms are taking, and should be taking, to combat the problems posed by filter bubbles in the context of the microtargeting of political advertisements.

This Article begins with an overview of the marketplace of ideas theory of First Amendment jurisprudence and the crucial role played by counterspeech within that theory. I then analyze the variety of types of counterspeech on social media platforms — by users and by the platforms themselves — with a special focus on the platforms’ counterspeech policies on elections, political speech, and misinformation in political/campaign speech specifically. I examine in particular the platforms’ prioritization of labeling, fact-checking, and referring users to authoritative sources over the use of censorship, removal, and deplatforming (which the platforms tend to reserve for the most harmful speech in the political sphere and which they ultimately wielded in the extraordinary context of the speech surrounding the January 2021 insurrection). I also examine the efforts that certain platforms have taken to address issues surrounding the microtargeting of political advertising, issues that are exacerbated by the filter bubbles made possible by the segmentation and fractionation of audiences on social media platforms.

GW Paper Series

2021-30

Included in

Law Commons
