The right to free speech is often justified by the idea that an undisturbed marketplace of ideas is an essential ingredient for a healthy democracy. While in many cases we may believe the views espoused by that speech are incorrect, ignorant, or even harmful, those reasons do not justify silencing those views. In 2024, there is a clear social divide over social media platforms’ content-moderation practices. On one side, anti-moderation advocates contend that social media platforms exhibit a distinct and pervasive bias, moderating user content and viewpoints indiscriminately. On the other side, many advocates contend that social media platforms are not engaging in sufficient content-moderation efforts, contributing to the proliferation of hate speech, misinformation, and hazardous speech that undermine democracy. As such, there are two drastically different positions: groups promoting more content moderation and suppression of speech online, in contrast to others advocating that social media platforms should uphold free speech values by taking a more restrained approach to content moderation. “In the end, both sides blame large social media [platforms], but offer little in terms of bipartisan consensus on how to move forward.” Whether the right solution is to allow social media platforms to moderate content as they please, or to force them to protect the free flow of all speech by restricting their moderation activities, is debatable. This article does not take a stance on which position is correct but instead analyzes recent governmental attempts to regulate social media platforms’ content-moderation practices. During the 2023-2024 term, the Supreme Court will hear two cases with the capacity to transform how social media platforms moderate content and the structure of the internet as we know it.
As this article will demonstrate, the First Amendment generally prohibits the government from passing laws that compel social media platforms to moderate content in certain ways or restrict them from doing so. Yet, in certain scenarios, the government may be able to legitimately impose ‘must-carry’ obligations on social media platforms to host the content of political candidates or journalistic enterprises. To that end, the government is also generally prohibited from coercing or threatening social media platforms to moderate content in certain ways, or on certain topics. While these topics are complex (and this article seeks only to scratch the surface), this article provides a baseline on when, and how, the government can influence social media platforms’ content-moderation practices.
