GW Law Faculty Publications & Other Works

Document Type

Article

Publication Date

2020

Status

Accepted

Abstract

Social media platforms are playing an ever-expanding role in shaping the contours of today’s information ecosystem. The events of recent months have driven home this development, as the platforms have shouldered the burden and attempted to rise to the challenge of ensuring that the public is informed – and not misinformed – about matters affecting our democratic institutions in the context of our elections, as well as about matters affecting our very health and lives in the context of the pandemic. This Article examines the extensive role recently assumed by social media platforms in the online marketplace of ideas, with an emphasis on their efforts to combat medical misinformation during the COVID-19 pandemic and false political speech in the 2020 election cycle. In the context of the pandemic, this Article analyzes the extensive measures undertaken by the major social media platforms to combat medical misinformation. In the political sphere, this Article examines the distinctive problems brought about in recent years by the microtargeting of political speech and by false political ads on social media, and the measures undertaken by major social media companies to address those problems. In both contexts, this Article examines the extent to which such measures are compatible with First Amendment substantive and procedural values.

Social media platforms are attempting to address today’s serious problems essentially alone, in the absence of federal or state regulation or guidance in the United States. Despite the major problems caused by Russian interference in our 2016 elections, the U.S. has failed to enact regulations prohibiting false or misleading political advertising on social media – whether originating from foreign sources or domestic ones – because of First Amendment, legislative, and political impediments to such regulation. And the federal government has failed miserably in its efforts to combat COVID-19 or the medical misinformation that has contributed to the spread of the virus in the U.S. All of this leaves us (in the United States, at least) solely in the hands, and at the mercy, of the platforms themselves, to regulate our information ecosystem (or not) as they see fit.

The dire problems brought about by medical and political misinformation online in recent months and years have ushered in a sea change in the platforms’ attitudes and approaches toward regulating online content. In recent months, for example, Twitter has evolved from the non-interventionist “free speech wing of the free speech party” into designing and operating an immense operation for regulating speech on its platform – epitomized by its recent removal and labeling of misleading tweets by President Donald Trump and Donald Trump, Jr. Facebook, for its part, has evolved from a notorious haven for fake news in the 2016 election cycle to standing up an extensive global network of independent fact-checkers to remove and label millions of posts on its platform – including removing a post from President Trump’s campaign account and labeling 90 million posts containing false or misleading medical information about the pandemic in March and April 2020. Google, for its part, has abandoned its hands-off approach to its search algorithm results and has committed to removing false political content in the context of the 2020 election and to serving up prominent information from trusted health authorities in response to COVID-19-related searches on its platforms.

These approaches undertaken by the major social media platforms are generally consistent with First Amendment values, both its substantive values, in terms of what constitutes protected and unprotected speech, and its procedural values, in terms of the process accorded to users whose speech is restricted or otherwise subject to action by the platforms. The platforms have removed speech that is likely to lead to imminent harm and have generally been more aggressive in responding to medical misinformation than to political misinformation. This approach tracks First Amendment substantive values, which accord lesser protection to false and misleading medical claims than to false and misleading political claims. The platforms’ approaches generally adhere to First Amendment procedural values as well: they specify precise and narrow categories of prohibited speech, provide clear notice to speakers who violate their rules, apply those rules consistently, and accord affected speakers an opportunity to appeal adverse decisions regarding their content.

While the major social media platforms’ intervention in the online marketplace of ideas is not without its problems or its critics, this Article contends that this trend is by and large a salutary development – one welcomed by the vast majority of Americans and one that has brought about measurable improvements in the online information ecosystem. Recent surveys and studies show that such efforts are moderately effective in reducing the spread of misinformation and in improving the accuracy of the public’s beliefs. In the absence of effective regulatory measures in the United States to combat medical and political misinformation online, social media companies should be encouraged to continue experimenting with developing and deploying even more effective measures to combat such misinformation, consistent with our First Amendment substantive and procedural values.

GW Paper Series

2020-48

Included in

Law Commons
