Conservatives often say that Big Tech companies, especially Facebook and Twitter, discriminate against conservative viewpoints. However, conservatives make two assumptions that, on closer inspection, are faulty: (1) that Big Tech’s censorship is “systematic” and (2) that this is a violation of free speech rights. Thus, they claim, government must act to vindicate the rights of conservative Americans.
These claims of bias are difficult to substantiate, partly because the algorithms the companies use to moderate cannot distinguish the speech’s viewpoint and partly because we do not have access to the data due to privacy laws. Often, removed content that conservatives say proves bias is a case of the algorithm taking a claim too literally.
The solution conservatives want—greater government intervention on behalf of free speech—is complicated by the fact that the First Amendment applies only to government action. Social media platforms would be held to the First Amendment only if the government coerced these companies.
Moreover, social media companies have some First Amendment rights to control their platforms. The conservative case needs a recognized governmental interest that might legitimately limit the platforms’ First Amendment rights.
In Buckley v. Valeo, the Supreme Court recognized that the government could limit First Amendment rights (in that case, the right to contribute to electoral campaigns) to prevent “the appearance of corruption.” The Court found that an “appearance of corruption” arising from large contributions to candidates could cause Americans to lose faith in representative democracy. Many Americans, not just Republicans and conservatives, doubt social media companies treat users’ speech fairly. Such beliefs could plausibly undermine trust in representative government.
Conservatives should propose policies that offer the least restrictive means to prevent the appearance of corruption. Many policies considered by conservatives and others impose significant costs to the First Amendment rights of social media companies and to society in general. More modest efforts focusing on transparency or other ways to foster trust in content moderation should be on the public agenda. Finally, social media itself is trying to build trust in content moderation through institutions like Meta’s Oversight Board, which show promise of protecting speech without government intervention.
Social media platforms are private companies that give users a venue for speech, and the biggest platforms provide the largest audiences for that speech. They also sell advertisements and are beholden to shareholders. These platforms “moderate content,” meaning they have policies they use to suppress some speech.1 They have a legal right to do this; as private companies, they are protected by the First Amendment and not legally subject to it. And all users agree to this practice: Before signing up for an account, users must agree to the terms of use, which include the rules regarding content and its moderation.
The two giants of social media, Facebook and Twitter, host a large share of the American political debate. According to a 2021 Pew poll, 23 percent of Americans use Twitter; 69 percent use Facebook.2 The fact that these sites host a vast share of the public debate makes their content-moderation practices enormously consequential. The debate between conservatives and libertarians (not to mention between left and right) regarding platform moderation and its possible regulation is ongoing.
In 2019, I wrote what amounted to an orthodox libertarian account of the regulation of social media content moderation.3 In it, I argued that judicial doctrine and social norms supported a presumption against government regulation of private content moderation. I concluded that critics of social media had not overcome that presumption against regulation. In the tumultuous years since, however, social media platforms have aggressively used their right to moderate content, and it’s worth revisiting my previous conclusions in light of recent events.
In April 2020, I was one of five Americans named to the 20-member Oversight Board. This Oversight Board hears appeals of content-moderation decisions on Facebook and Instagram (both of which are now part of Meta). The board also offers Meta policy advice about governing the platforms. Our appeals decisions are binding on the company; the policy advice demands a reply but not consent from Meta.
Social media companies retain a right to suppress speech on their platforms. The legitimacy of such content moderation, however, has taken a beating since the 2016 election. A recent survey indicated that a substantial majority of Americans (58 percent) thought the First Amendment should govern content moderation. Most on the left disagreed.4
It’s well-known that few conservatives believe these sites moderate content in a politically neutral way. As early as August 2020, 90 percent of Republicans had concluded that the platforms suppressed disfavored views.5 A more recent survey found that “three-fourths of Americans (75%) say they don’t trust social media companies to make fair decisions about what information is allowed to be posted on their platforms.” This group included substantial numbers of “liberals” and “strong liberals,” but “conservatives” and “strong conservatives” were most likely to distrust social media.6 That finding notwithstanding, liberals and conservatives deeply disagree about whether social media should do less or more content moderation.7
Some may find conservatives’ complaints about viewpoint discrimination to be irrational; others may find them compelling enough but not a matter of public concern. Conservatives have reason to doubt the legitimacy of content moderation, though not for the reasons they commonly offer. I begin by considering the conservative complaint that social media elites are violating their right to free speech. I then turn to a revised, more defensible complaint for the right.
In August 2017, Donald Trump, the leader of both the Republican Party and American conservatism, accused “Big Tech”—meaning Twitter, Facebook, and Google—of censoring conservative speech online.8
Sen. Josh Hawley (R-MO) recently wrote a book extending this criticism.9 The feeling is not limited to die-hard Trump supporters, however; many serious and thoughtful conservatives have become convinced that social media moderators are discriminating against conservative content.
It’s easy to see why. In 2020, amid violent unrest coinciding with a chaotic election year, Twitter limited the reach of President Trump’s tweet warning that “when the looting starts, the shooting starts”;10 Facebook flagged the post but ultimately left it on the platform. Two weeks before the election, Facebook and Twitter prevented linking to a New York Post story highly unfavorable to Joe Biden’s son Hunter. The platforms’ content moderation probably did not determine the outcome of the election, but an impartial observer might wonder if more determined and reckless platforms could have.
After the election, Facebook suspended Trump’s account for two years, and Twitter banned him indefinitely for his comments on the afternoon of January 6, 2021, as his supporters rioted in the Capitol. In the aftermath of January 6, the new conservative social media site Parler was taken down by Amazon Web Services (AWS) for violating its rules. (This coordinated removal of an app was particularly interesting, as it went far beyond social media content moderation.)
Many conservative leaders have complained about the major platforms’ suppression of their posts or the posts of others, saying such removals are baffling and must reflect malign intent on the part of moderators. The conservative complaint depends on the accuracy of two assumptions: first, that social media platforms engage in systematic viewpoint discrimination against conservatives; second, that this discrimination violates conservatives’ free speech rights. Implicitly, government should have the power to vindicate those rights by regulating social media content moderation. Both assumptions have problems.
It is difficult to make a case for conservative viewpoint discrimination. Are such claims about platform rules or their enforcement? Platforms reveal data about enforcement of their rules.11 But we know nothing about content moderation by viewpoint, because Facebook almost certainly does not, and probably cannot, categorize users by viewpoint. After all, Facebook has billions of users, and its algorithms cannot distinguish between the word “Hitler” used to attack Trump and the same word used to call for a second Holocaust.12 And, of course, to assess viewpoint discrimination properly, we would need either all cases of enforcement or a valid sample of them. None of this is available, and it is not likely to become available, in part because of privacy laws. Systematic viewpoint discrimination by social media platforms is almost impossible to prove.
The conservative complaint is also weakened by selection bias. Big Tech offers a rich target and mobilizes followers, so conservative political leaders talk a lot about platform bias. Those who look to such leaders to understand a complex world will hear about examples of conservative-speech suppression, but they may not hear or read about cases in which a leftist post comes down (for good or bad reasons).13 They may not appreciate how many conservatives have huge followings on Facebook and other social media.14 Eventually, for this audience, the evidence that platforms are biased against conservatives will seem overwhelming. A similar selection effect has arguably shaped opinions about climate change, hurricanes, and elected officials on the left. Mobilizing voters for political purposes is not necessarily good or bad, but the selection bias that may result can distort reality testing. Conservatives may be seeing only part of the picture. (The same is true of the left, too, of course; they just see a different part of the whole.)
It is also hard to appreciate that the examples we do see are often errors rather than intentional suppression of speech. Facebook used to police its platform by relying on user flagging combined with manual, rapid review by content moderators. Since the onset of the pandemic, however, the company has relied more and more on algorithms to detect violations of community standards.15 Now more than 90 percent of takedowns are done by machines in almost all categories of infractions.16 Machine learning is impressive and essential to a platform with two billion daily users, but it can be strikingly literal in its interpretations of language and symbols and usually fails to appreciate satire or hyperbole.
Algorithmic moderation poses complicated issues. Facebook might wrongly identify and take down some speech in applying its rules. For example, Facebook has a rule against posting images of National Socialist leaders. Many users, however, post such images to criticize current political leaders, which should surely be protected speech. Facebook may be willing to tolerate the costs of such errors to make sure all genuine National Socialist imagery comes down.
On the other hand, a former Facebook employee told me that the company seeks a 95 percent probability of identifying “hate speech,” as defined in the Facebook user agreement. This high standard means the algorithms make fewer errors removing acceptable posts at the cost of leaving up some speech that violates Facebook’s rules. In other words, the company is willing to tolerate some hate speech on the platform to avoid suppressing speech by mistake. Such choices represent inevitable trade-offs in filtering large datasets. They are not in themselves, however, choices for or against the political left or right.17
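To make this trade-off concrete, consider a minimal sketch in Python. It is purely illustrative: the confidence scores, the labels, and the thresholds are assumptions for the example, not Facebook’s actual system. The point is only that raising the confidence required before removal leaves more rule-breaking speech up while wrongly removing fewer acceptable posts.

```python
# Illustrative sketch of the false-positive/false-negative trade-off in
# algorithmic moderation. All scores and labels are invented for the example.

posts = [
    # (model's confidence that the post violates a rule, does it actually violate?)
    (0.99, True),
    (0.97, True),
    (0.93, True),   # violating post the model is only 93 percent sure about
    (0.92, False),  # acceptable post the model wrongly scores as likely violating
    (0.40, False),
    (0.10, False),
]

def moderate(posts, threshold):
    """Remove every post whose violation score meets or exceeds the threshold."""
    left_up = sum(1 for score, violates in posts if violates and score < threshold)
    wrongly_removed = sum(1 for score, violates in posts
                          if not violates and score >= threshold)
    return left_up, wrongly_removed

for threshold in (0.90, 0.95):
    left_up, wrongly_removed = moderate(posts, threshold)
    print(f"threshold {threshold:.2f}: {left_up} violating post(s) left up, "
          f"{wrongly_removed} acceptable post(s) removed")

# threshold 0.90: 0 violating post(s) left up, 1 acceptable post(s) removed
# threshold 0.95: 1 violating post(s) left up, 0 acceptable post(s) removed
```

Neither threshold is neutral between left and right; each simply allocates the inevitable errors differently.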
Now we turn to the question of free speech violations. Only the government can violate the First Amendment; Facebook is a privately owned and managed business. It may manage speech in ways that the United States government may not. Platform owners’ right to suppress online speech depends on such actions being truly private, so one potential means of addressing the conservative complaint could rest on showing that these platforms are actually state actors. If public officials coerce platforms’ suppression of speech, content moderation begins to look like state action and thus becomes a violation of the First Amendment.
Private actions may become public if the two are “excessively entangled” in a constitutional sense. Such entanglement occurs, according to Nadine Strossen’s succinct description,
when there is sufficient cooperation or interrelationship somehow between the government and the private sector entity, either they are conspiring together, or the government is pressuring, in effect, coercing, even if not literally coercing as a practical matter, putting so much pressure on the private sector entity that it is, in fact, in effect, carrying out government orders.18
The case of “disinformation” about vaccinations serves to illustrate the possibility of such “excessive entanglement.” About one-third of citizens seem willing to forgo the benefits of a vaccination against COVID-19.19 Some people post reasons to avoid vaccinations on social media platforms; public officials and others consider these reasons to be “disinformation” and a threat to others or to public health generally. Public officials say the platforms’ unwillingness or inability to remove “disinformation” implicates them in needless deaths; the officials “suggest” the platforms aggressively suppress such “disinformation.” Meanwhile, the same administration suggesting aggressive suppression of such content is actively pursuing antitrust actions against the platforms, while members of Congress introduce legislation to deprive platforms of protections against tort liability, partly in response to alleged failures to deal with “disinformation.”20
Did the government act through social media in this case? President Biden will almost certainly not order Facebook to take down content, but Facebook’s managers may conclude that taking down content the president wishes to censor makes sense given Congress’s antitrust efforts. Is Facebook being compelled by government to suppress speech? Making the case that public and private are “excessively entangled” would not be easy and would almost certainly not resolve the general conservative complaint. The courts responsible for deciding whether government coerced platforms would have to deal with particular cases, not the kind of systematic concerns that conservatives have. Thus, for instance, even if the threats regarding COVID-19 “disinformation” were deemed government coercion of speech, entanglement between the companies and elected officials (or other government employees) could still affect many other issues.
The platforms might be state actors in another, more intuitive way: The largest platforms seem to be a new public square where the marketplace of ideas can be found. Turning this intuition into law and policy, however, faces several challenges, beginning with constitutional law. The platforms might become governmental actors (and thus limited by the First Amendment) if they perform a function that has “traditionally and exclusively been performed by the government.”21 Just as governments provided a public forum through parks, sidewalks, and streets, the platforms create the modern public forum online. The courts have not been friendly to this argument, as government has obviously not performed this function “exclusively.” Many private entities, some involving millions of users or viewers, have served as public forums.22 Serving a public function does not make a platform a state actor and therefore subject to the First Amendment.
In sum, the usual conservative complaint against social media comes up short on evidence and on law. We likely lack conclusive proof that social media companies practice systematic viewpoint discrimination. In any case, such discrimination may be within their rights as a private business. Yet conservatives are onto something, and that something points toward an improved complaint against social media.
A better complaint poses and answers three questions. Can social media platforms practice viewpoint discrimination in their content moderation? Could a reasonable person believe a platform does in fact practice such discrimination? Finally, and most important, if the first two questions are answered in the affirmative, why does such discrimination matter for the public (as opposed to only the targets of moderation)?
Social media companies have several ways to discriminate against conservatives. They might enact community standards that are far more likely to be used against conservatives. For example, a platform might have a rule that bans posting claims that cutting taxes raises government revenues. The rule would appear to apply generally but in practice would lead only to the suppression or removal of conservative (or perhaps, nonprogressive) posts. A more likely example might be Twitter’s policy prohibiting “deadnaming of transgender individuals”—that is, referring to a transgender person by the name they used before transitioning.23 In extreme cases—following three “strikes,” or violations of the rules—a user’s account may be suspended for a period (as on Facebook) or permanently (as on Twitter). Discriminating by rulemaking is legal and public; conservatives will not like such discrimination, but they can easily determine whether it exists simply by reading the rules.
The platforms might restrict conservative voices in more covert ways.24 They can limit a post’s spread by refusing to allow other users to link to it. Short of that, they can introduce “friction” for a post in various ways that limit its audience.25 They can also explicitly restrict a post’s audience. Platforms likely have other ways of limiting viewers that are unknown to outsiders. But assuming a post would ordinarily be distributed broadly, such measures, though short of outright removal, should be acknowledged as restricting speech. (Other interventions do not count as restrictions. Labeling a post while permitting access to it, or pointing users to other information about a topic, may well limit the influence or spread of some speech, but these are not clearly restrictions since the original speech still appears on the platform.) All such moderation, though short of complete suppression, may also discriminate by political viewpoint.
So social media companies do have many ways to affect content, few of which are transparent. But why might a reasonable person believe such power would be used to discriminate against conservatives?
The larger problem for conservatives is in the social media workforce. The biggest American social media companies are based in a part of the country where three in four voters went for Biden in 2020.26 In the 2018 midterm and 2020 presidential elections, employees at the major tech companies donated overwhelmingly to Democratic campaigns.27 The events surrounding George Floyd’s death indicated that employees can effectively protest if not always change company content decisions.28 It is hard to believe that such “many hands, one mind” among employees does not affect content moderation.29
Influences outside the companies also lean left. Stakeholder groups such as Color of Change and Public Knowledge are well-known to the staff at social media companies; conservative groups are noticeable in their absence. Peter Thiel’s view that “Silicon Valley is a one-party state” with only one side of national politics represented seems plausible.30 Bay Area universities, Stanford above all, host few conservative voices and have outsize influence over policymaking at social media platforms.
Some qualifications are in order. The companies themselves act much more like traditional donors and give to both parties.31 The companies have become effective and organized in Washington, DC. Joel Kaplan, Facebook’s top advocate in Washington and an undoubted conservative, has influenced both what’s on the platform and the regulatory and political response in DC.32 Hence, we find asymmetrical mobilization in tech: the left is on the inside of the companies, the right on the outside. A political analogy seems relevant. Does Congress (the outsider) control most federal agencies? Or do agency employees and organized interests dominate policy outcomes, Congress notwithstanding?
In the past, the leftward lean of platform employees might not have mattered much, since liberals would protect “speech that they hate.”33 But the current generation of progressives, the generation that works in social media, is no longer as committed to free speech.34 They may see free speech as a legal requirement limiting state action but view it as an essentially conservative position in other contexts. If so, they might believe no private individual is required to respect or even tolerate political views they believe are false or cause “real-world harm,” which for many in this cohort now includes mental or emotional discomfort. Politicians may exaggerate the risks of intolerance for private gain, but in truth the risks are not trivial.
An ideological monoculture with no commitment to free speech as a value would not seem as threatening if it were clear what content moderators were doing on a given platform, but such transparency is not likely for several reasons. Strong transparency would demand the public know who makes the trade-offs implicit in algorithmic moderation and why they choose, say, a 95 percent probability of detecting a violation. Keep in mind also that the platforms constantly match speakers and audience to enhance user engagement; they are in the business of giving (and denying) speakers an audience and providing audiences with content. Facebook employees also have the ability to restrain the reach or “virality” of specific content.35 It is not unreasonable to wonder what content moderators are doing when no one is watching. But disclosing all this information might complicate or jeopardize the business goals of a company. Conservatives should note also that such disclosures are likely to set off a political struggle in which the left is highly organized and the right is not, at least outside of DC.
All of this might seem fanciful, even paranoid, to some people, but it’s a reasonable enough concern. Consider a thought experiment: Imagine the staff of the Heritage Foundation assumed control of Facebook’s content moderation, and the new governors of Facebook were conservatives who believed strongly in freedom of speech, though always aware of their employer’s obligations to shareholders. The Heritage content moderators would govern the platform as it is governed now: Not everything they did would be known to the public, which itself would remain as divided and acrimonious as ever. If the Heritage alumni wanted to influence political outcomes, a reasonable person might think they could. What would people on the left—and indeed many people in the middle—believe about Facebook’s new content moderation?36 What would most people think about content moderators with views diametrically opposed to their own?
There are also powerful systematic factors at work endangering free speech. Advertisers may prefer calm (not heated) discussions and conventional (not extreme) speech to be near their pitch to a user. Or they may simply respond to the times, and the times may demand suppression of “harmful” (i.e., conservative) speech.37 While conservatives may support capitalism, advertisers—that spawn of capitalist striving—may demand the suppression of some conservative speech online. Indeed, if they demanded the suppression of any viewpoint, it would be the one that complicates their job of effectively targeting ads to users.38
Also relevant to conservatives is Facebook’s commitment to making international human-rights norms a pillar of its content moderation. Politics will almost certainly resolve the ambiguities of those norms. Here again, the left has long had numerous nongovernmental organizations devoted to advocating left-leaning interpretations of human-rights law, allowing political judgments couched in the language of human rights to become irreproachable. Conservatives have seemingly little interest in the topic, at least as framed in these terms, and have proportionally few groups dedicated to shaping human-rights norms as such. And free speech protections will certainly not originate from abroad; European nations (especially Germany) and regional institutions assign far less importance to free speech than Americans do.39
While it seems as if conservative speech is being suppressed, it also seems as if conservatives have ceded the field in Silicon Valley. Attacking from 3,000 miles away in Washington, and with only a few conservative organizations engaged in any social media research or advocacy at all, conservatives are not making a strong case. It is easy to see why they are upset: Conservatives look around and see institutions that were supposed to be fair sources of expertise and authority—universities, prestige journalism, the federal bureaucracy, and even big business—turning into bastions of ideology increasingly closed to nonprogressive views. They fear that the addition of partisan social media will eventually lead to the complete political marginalization and failure of conservative viewpoints.
But what might be done based on such beliefs, however plausible? The companies have First Amendment rights. How might mere beliefs about speech suppression limit such rights?
A more effective complaint to lodge against the platforms is that they appear to corrupt liberal democracy. In a liberal democracy, Francis Fukuyama and coauthors note, “We expect democratic debate and politics to be pluralistic and to protect freedom of speech.”40 Electoral outcomes should reflect the choices not of the governors but of the governed—what they say and what they believe. Liberal democracy thus depends on decentralizing power over speech and opinion; centralizing control over speech, and thereby opinion, opens a possibility of corrupting liberal democracy.
Just a decade ago, social media platforms seemed to be bulwarks of liberal democracy. There were many options, so speakers unwelcome at one site could find another or start their own blog. After a decade of centralization, however, there are far fewer venues for speaking online and being heard. In the larger national and international context, the tech giants have become centers of power and influence separate from government. How separate they are or will remain is an open question that may never be resolved. But the trend toward centralization of social media is clear, and centralization matters. Facebook has 190 million users in the United States.41 Suppressing speech at Facebook, therefore, matters in a way that excluding a speaker from a conference at the Cato Institute does not.
Centralization offers clear advantages to users and shareholders. But it also means that platform leaders and employees who moderate postings have potential veto power over what is said on a site used by almost two-thirds of Americans. That veto may be fine; the leaders and content moderators may not have strong political views, or they may simply have a strong commitment to the American version of freedom of speech.42 In that case, their job is more to referee the political fight than to determine its winner.43 Or that veto may be problematic, as outlined above.
The conservative complaint may seem irrational to some; others may find it compelling, though not a public problem. It is up to conservatives to show that their fears of marginalization and suppression on social media should matter broadly and that something can be done about it.
All platforms have First Amendment editorial rights against government regulation.44 Yet the apparent political uses of the platforms do present a recognized public problem, one that government may act on (but perhaps should not).
In the early 1970s, Congress enacted comprehensive campaign-finance regulations, including limits on campaign contributions. Congress argued that such limits served several legitimate state interests, including preventing corruption and the “appearance of corruption.” In Buckley v. Valeo, the Court agreed:
Of almost equal concern as the danger of actual quid pro quo arrangements is the impact of the appearance of corruption stemming from public awareness of the opportunities for abuse inherent in a regime of large individual financial contributions. . . . Congress could legitimately conclude that the avoidance of the appearance of improper influence “is also critical . . . if confidence in the system of representative Government is not to be eroded to a disastrous extent.”45
In Buckley, the Court held that such interests and such rules justified restricting the First Amendment rights of individuals and groups. The idea of an “appearance of corruption” merits attention. The Court argued that Congress could limit contributions so that citizens would not conclude the political process was corrupt and thereafter lose faith in American institutions.
Note that the “appearance of corruption” was not corruption or bribery. Actual cases of corruption could be prosecuted under existing law.46 The “appearance” problem lay elsewhere. Americans held certain beliefs about money and politics that might be thrown into doubt by unlimited contributions, especially if they seemed to buy policy outcomes. The Court ruled Congress could preempt those doubts and bolster public confidence in government by limiting contributions. Unlimited contributions, the Court said, posed a problem of legitimacy for American government. Government could act in limited ways to shore up that legitimacy.
Americans have a right to expect their government will not distort elections, policymaking, and the formation of public opinion by censoring speech. But Buckley also found that the exercise of First Amendment rights by private individuals could threaten the legitimacy of democracy and that government could limit such exercises of rights to sustain such legitimacy. In that regard, the curation of social media platforms seems similar to campaign contributions. Both may affect elections or policymaking, and both may undermine confidence in American political institutions by potentially determining electoral outcomes. Speech suppression by small, private organizations does not threaten such legitimacy. But Facebook and Twitter host a significant share of the political debate, and a person could plausibly believe that content moderation at such a scale could affect elections, policymaking, and public opinion. Social media content moderation thus might pose an “appearance of corruption” that threatens to undermine conservative confidence in American elections and political debate.47
Congress thus has the power to regulate private exercise of First Amendment rights if such private actions threaten to undermine support for American democracy. That power extended to campaign contributions in 1974 and extends to social media curation now. But note that this power does not include a power to prohibit the relevant private activity. And the means chosen by government should relate closely to the “appearance of corruption” problem. The appearances problem is not a justification for open season on Silicon Valley elites.
One other caveat applies here.48 The Buckley court gave greater First Amendment protection to candidate campaign expenditures than to direct contributions to candidates. Campaign expenditures by a candidate enjoyed full constitutional protection, a status that invalidated spending limits in the 1974 campaign finance law. Candidate contributions received only partial protection and thus could be limited in defense of the legitimacy of democratic institutions.49
Social media content moderation enjoys some First Amendment protection. But is it more like a campaign expenditure by a candidate or one’s contribution to a candidate? If content moderation enjoys full protection, the “appearance of corruption” argument most likely goes nowhere. A court might, however, see content moderation as enjoying lesser protection for two reasons.
First, content moderation is directly analogous to Buckley’s view of contributions. The Court there gave a lesser status to contributions because they involve “speech by someone other than the contributor.”50 The same might be said of content moderation that involves speech by users. Content moderation might share the constitutional status of campaign contributions.
Second, a court might see content moderation more as a business activity than as political speech. Content moderation is certainly essential to the social media platform’s own business activity; absent such curation, the value of a platform would not be maximized for shareholders. (And for some decades, business activity has had few constitutional protections from government regulation.51) But content moderation is more than a self-focused business decision; it also implicates political viewpoints, not just those of the company but also those of its users. A court might conclude that the business interest is primary in content moderation, while political expression is only an indirect and secondary concern.
This mixture of business and politics could mean that content moderation deserves some but not full First Amendment protection. If a court so decides, the government might act in some limited ways to regulate content moderation to preserve public confidence in representative government—that is, to combat the appearance of corruption.
Of course, this lower status for content moderation would trouble First Amendment advocates. But the “appearance of corruption” standard exists and seems most apt for conservative concerns about content moderation. Still, some caveats for conservatives are in order.
Ultimately, if regulation is justified on the ground that private activity threatens public confidence in government, then the regulation should actually increase such trust. For example, if the “appearance of corruption” argument were correct, campaign-finance laws should improve confidence in government. Yet careful scholars have concluded “there simply is no meaningful relationship between trust in state government and state campaign finance laws” in recent decades.52 Indeed, states with broad limits on contributions tend to have “higher levels of perceived corruption.”53 Government performance, not campaign-finance regulations, appears to affect trust in government the most.54 Other studies have found similar results.55
Perhaps a conservative “appearance of corruption” argument would hold up better. The problem is ultimately a widespread distrust of companies that have the power—and may or may not have the intent—to affect elections and policymaking. And that distrust in turn may breed distrust in American democracy itself. After all, if elections and policy debates are ultimately decided in Menlo Park, then Election Day and congressional debate don’t mean much. After the Hunter Biden affair,56 it may seem obvious that social media undermines presidential elections; generations of campaign-finance reformers assumed the same about contributions. But empirically, appearances may deceive, a truth that should not be forgotten in the rush to constrain social media elites.
Many experts have proposed policies to deal with the putative problems of social media. On the right, many of those proposals assume social media violates Americans’ free speech rights, an assumption open to the objections noted earlier. I have proposed a different foundation for a public response to content moderation: countering the “appearance of corruption.”
I examine some current proposals by that standard: Does a proposal offer an effective response to the appearance of corruption? If so, at what cost? No doubt public action might offer public benefits by precluding a loss of confidence in elections and policy debates. But given our experience with the “appearances” standard, those benefits are likely uncertain, and the costs to the First Amendment rights of social media companies should be taken seriously.
What should the government do about content moderation? Some people say “do nothing.” Others counsel nationalization of social media as “public utilities.” In this section, I examine policies along a continuum from “do nothing” to extensive interventions.
I do not consider nationalization. One glance at public trust in the federal government from 1965 to the present suggests making social media a part of government would be unlikely to foster public confidence in content moderation, elections, or public debate.
Do Nothing. I begin with a self-critique. From a pure libertarian standpoint, the ideal answer to the question of what to do about the lack of trust in content-moderation practices is “nothing.” The platforms are private property owned by shareholders who appoint agents charged with maximizing shareholder value.57 Those agents—the managers of the firm—pursue that mission by persuading users to share content on the platform, alongside which advertising can be sold. The users agree to follow the platform’s rules and in turn receive access to a network of other users through the platform’s software.
The managers suppress some speech on the platform to maximize shareholder value. The suppression or restriction of speech comports with the rules agreed to by the user upon entering. Platforms would be free to manipulate policy debate and elections as much as they deemed profitable. Indeed, laissez-faire would allow content moderators to suppress speech for political or partisan advantage even at a cost to shareholders. The shareholders can sell their shares. The company’s board can replace the managers more concerned about politics than profit. Users can go elsewhere for a less politicized experience.
Of course, it appears extremely unlikely that Silicon Valley content moderators would manipulate elections or policy debates to realize free-market policies. A laissez-faire policy regarding content moderation might become a long suicide note for free-market economics. But one suspects that outcome hardly matters to libertarians.
If a user does not wish to follow the rules set down by the company or does not like their content-moderation policies, they may exit and seek another platform through which to express their views. The alternative they choose may not be online, and they may not have access to as significant an audience, marginalizing them from public debates. But while a user may have a right to freedom from government censorship, they have no right to an audience, especially at the expense of someone else’s business.58 These companies are free to do what they want regarding content moderation. Such an absolute laissez-faire approach does not account for the possibility of an “appearance of corruption” resulting from content moderation and does not concern itself with the continued legitimacy of representative government.
This “exit argument” may or may not address the “appearance of corruption” problem. In the first place, even unhappy users may be unlikely to exit. The platforms are large and influential and offer large consumer surpluses; many users may stay on the platform despite distrusting their content moderation because the alternatives offer small audiences and little influence. In that case, the possibility of exit has minimal effects on the “appearance of corruption” at issue here.
Users who distrust a platform could leave in search of a platform whose content moderation they trust. A platform might lose enough users to elicit changes in content moderation to build more confidence in its oversight. The appearance problem would be solved if such changes fostered enough public confidence in the platforms. But absent changes in moderation, users leaving a platform would not increase public confidence in that platform. The people who leave would presumably retain their doubts about their former platform, while the beliefs of those who stay need not change. The users leaving would presumably be more confident about content moderation on their new platform, but if their old one remains dominant, the “appearance of corruption” problem remains.
The exit argument also implies there’s somewhere to go, another website or blogging provider. Indeed, there are other places for the digitally dispossessed. But consider the partisan tenor of the campaign contributions by employees at all those alternative platforms.59 Could a conservative social media emigrant find a platform with trustworthy content moderation?
The events after January 6, 2021, also raised questions about the future of alternative platforms. As mentioned above, AWS cut off the relatively new conservative app Parler from access to its users and the web more generally.60 A few days earlier, in the wake of the riot at the Capitol, both Twitter and Facebook took away the accounts of then-President Trump, the former permanently.61 It is important not to exaggerate the dangers of this seemingly coordinated suppression of conservative views. But the events of early 2021 did show a potential capacity by social media and internet-infrastructure providers to broadly suppress political dissent.
In some ways, Elon Musk’s acquisition of Twitter would have strengthened the laissez-faire case. Musk intended to liberalize Twitter’s content moderation while unleashing its economic potential. If he had accomplished that, those who exited other platforms would have had a place to go, a place where the boss supported their speech, employees notwithstanding. But Musk’s aborted acquisition also weakened the laissez-faire “appearances” case in one way. The left greeted Musk’s takeover and support for free speech with outrage. Such outrage suggests that conservatives were right about Twitter’s commitment to advancing left-wing politics through content moderation. Musk’s aborted takeover thus reinforced conservative fears for elections and policy debates.
Transparency. The government might require the platforms to reveal their processes and standards of content moderation. Transparency is a popular idea for several reasons, not least that it resembles laissez-faire: The platforms are not required to do anything substantively different, only to reveal what they are doing. Individual users can then presumably make informed choices about whether to stay, complain, or leave. Many also believe transparency will build trust in the content-moderation process, thereby mitigating the “appearance of corruption” problem.
In each of the past two years, Congress has considered the Platform Accountability and Consumer Transparency Act.62 This bill requires social media companies to establish and reveal standards and processes for content moderation. It also gives users a right to a public appeal against suppression and requires an annual Transparency Report. If no such standards or processes existed, this bill might mitigate the “appearance of corruption” through transparency.
This assumption that transparency builds trust lacks empirical support. Consider again the campaign-finance case. Disclosure of campaign contributions does not seem to have yielded more trust in American government or the campaign-finance system itself.63 Casual observation over two decades suggests to me that disclosed information about contributions is used primarily to suggest one’s opponents are corrupt. Given that, it would not be surprising if contribution disclosure actually led to increasing distrust in the policy system. Voters are much more likely to pay attention to acerbic claims about putative corruption than to the details of disclosed financial data. Transparency about content moderation might likewise prompt more distrust of social media.
Furthermore, Facebook already has public community standards and internal review that includes an Oversight Board with binding power on the company. That was not always true. The Facebook system, which includes an appeals process, has grown gradually over the past decade, and it has become much more public in the past five years.64 Users may find comprehensive data on the enforcement of platform policies broken down by type of violation.65 Despite these efforts, trust in Facebook and its content moderation has declined during that time.66 The Platform Accountability and Consumer Transparency Act is unlikely to build more trust in social media.
It’s worth considering what sort of disclosures might actually address the belief that social media companies systematically commit viewpoint discrimination in their content moderation. We might think the question of viewpoint discrimination could be settled by revealing the ideological distribution of moderation decisions; evidence of bias would show up as a big hump on one side of the political continuum. But such a distribution would not settle the matter. One group or another might simply infringe the rules more often during a particular period, so the evidence would need to be specific: cases in which liberals and conservatives infringe the same rule in the same way and conservatives alone are punished. And such data are meaningful only if there are many such instances. Recall that such data are not likely to exist: A platform would have to affix ideological labels to both suppressed and unregulated platform content.
The platforms do possess valuable information related to speech, however, and Congress could mandate its disclosure. The machines that do most content moderation and the humans who consider appeals from the algorithms make two familiar errors. They leave speech that violates the rules (false negatives), and they take down content that does not violate the rules (false positives). Such errors are implicit in the tasks faced by moderators, both machine and human.
Facebook has two billion daily users. At that scale, someone has to decide which errors at the margins should be tolerated. The same is true of the human moderators reviewing the machines’ work: There are many appeals and a limited number of moderators.
Congress could mandate that platforms reveal their preferences between false positives and false negatives in applying community standards. This information might tell us something quite general about viewpoint discrimination, but it would not address directly the “appearance of corruption” noted by conservatives. And it is hardly free-market fundamentalism to think this trade-off belongs properly to managers responsible to shareholders. Or at least, the responsibility sits better there than in Congress or an executive agency, both of which respond to organized interests.
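As a rough sketch of what such a mandated disclosure might look like, the following Python fragment summarizes, for each policy category, how an audited sample of moderation decisions splits between wrongful removals and violations left up. The categories, counts, and report format are hypothetical assumptions for illustration; nothing here reflects actual platform data or any existing legal requirement.

```python
# Hypothetical summary of moderation-error preferences, of the kind a
# transparency mandate might require. All figures are invented.

audited_sample = {
    # category: (wrongful removals, violations left up, posts audited)
    "hate speech": (120, 480, 10_000),
    "spam":        (450, 60, 10_000),
}

def error_report(sample):
    for category, (removed_in_error, left_up_in_error, total) in sample.items():
        leaning = ("leaving violations up"
                   if left_up_in_error > removed_in_error else "over-removal")
        print(f"{category}: false-positive rate {removed_in_error / total:.1%}, "
              f"false-negative rate {left_up_in_error / total:.1%} "
              f"(errors lean toward {leaning})")

error_report(audited_sample)
```

A report of this kind would reveal how a platform weighs the two kinds of error without requiring the ideological labeling of individual posts.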
Antitrust. In the minds of many, the content moderation practiced at the largest platforms represents centralized control over online speech. More decentralized options could render the “appearance of corruption” problem moot. The government could decentralize social media and their content moderation by force through antitrust action, breaking up the platforms and offering users more choices. Many hands would curate many platforms, fostering political rather than economic benefits.
Antitrust policy has for some time been about economics, not politics. Over the past half century, conservatives have approached antitrust—including current worries about Big Tech—without much concern for size as such. Judge Robert Bork rightly argued antitrust authorities should focus on the possible harms done to consumers by economic concentration: higher prices, less innovation, or other welfare losses. The growth of the platforms, however, has not raised the prices users pay to access social media services; they are free to use.67
Platform users have enjoyed continuous innovation and ever larger networks. Major platforms have served consumer welfare, a claim bolstered by their rising share prices and numbers of users. The platforms do not have the consequences of a monopoly, whatever the political rhetoric. This remains accurate even if we assume the platforms have powerful network effects.68 The traditional economic case against them will seem weak, at least to the conservatives who still believe that success in business indicates doing a job better than rivals.69
But there is an important mismatch here. The evidence of economic harm will seem weak, but modern conservatives are not concerned much about the economic harms caused by social media. The platforms might well increase consumer welfare and, at the same time, appear to corrupt elections and public debate. A clean bill of health on consumer welfare does not imply the platforms are not a potential threat to public debate, elections, and public faith in democracy.
Antitrust actions might replace a single platform with multiple options. Some of the new platforms would presumably have rules friendly (or at least not hostile) to conservative users. This change might also weaken or end the conservative perception of corruption of public debate.70 But it may not. The “appearance of corruption” may be fostered by a platform’s desire to maximize shareholder value, and the new platforms will presumably face the same market discipline as the old ones and thus remove speech that runs counter to maximizing value. And the new entrants may draw primarily from the same labor pool that dominates current social media. It is hard to say what breaking up the platforms might do for public attitudes about speech and elections. It is certain that a breakup would impose significant costs on successful businesses.
Consider also the costs of this revised antitrust policy. Facing the possibility of being broken up, successful online companies that host speech and public debate will be willing to compromise to avoid that fate—leading to a public-choice trap: If social media companies think they can escape the threat of dissolution by doing what elected officials want, politicians may seek to “persuade” platforms to moderate some speech to their own advantage. The antitrust solution thus generates a risk of “excessive entanglement” of business and government. As noted earlier, such entanglements are constitutionally prohibited to protect freedom of speech from politicians who have little reason to tolerate criticism.
Short of breaking up the social media companies, there are basically three ways to address the “appearance of corruption” problem: a regulatory approach, an individualistic approach, and a self-regulatory approach.
Common Carriage. The courts are unlikely to say social media has violated the First Amendment. But the legislature might have other ways to regulate content moderation to keep speech that might otherwise be excluded on the platform. Such regulation might define the platforms as “common carriers” or “public accommodations.” Both regulatory approaches have been used before—the former concerning transportation and the latter racial segregation—and both represent limits on business owners’ right to control their property. Both hold promise of producing First Amendment outcomes online without judicial intervention.
Would such policies deal with the “appearance of corruption” problem? The logic here seems compelling. If companies cannot take down speech, they cannot engage in viewpoint discrimination. If they cannot discriminate, no one may assume the companies are manipulating elections or public opinion. Given that, no one should lose confidence in elections or public opinion.
Some analysts have turned to the long history of microeconomic regulation for a way to prevent social media from suppressing speech. When a business is thought to be “affected with the public interest” and consumers have no alternative to it, regulations require such companies to do business with all customers and “to charge fair, reasonable, and nondiscriminatory rates.”71 Some argue that the dominance of the platforms means they are “common carriers” for speech and thus should be required to offer their services to all, including users who break otherwise-valid rules created in pursuit of profit, not to mention rules presumably seeking political advantage.
Consider the costs to the company of this policy. Apart from illegal speech, the platforms would be required to be neutral about expression on their platform. That means they would be required to carry spam and pornography, both of which would considerably reduce the value of the companies to their shareholders. One might also wonder about how a company’s brand might be degraded if it were required to carry legal but repulsive speech. The specific examples of prohibited speech in, say, Facebook’s Community Standards—under the heading “Do Not Post”—are not what anyone would want to sell their products next to.72
Note that common carriage presumably would go well beyond the earlier effort to deal with the “appearance of corruption.” Congress limited but did not outlaw the individual right to contribute to campaigns, attempting to strike a balance between that right and public confidence in government. A strong version of common carriage would substitute policy for editorial judgment by the platforms; a similarly strong response by a post-Watergate Congress would have eliminated private contributions to campaigns entirely. The “appearance of corruption” rationale used in the case of campaign finance was limited in another way: It applied to one type of contribution, not to all spending on campaigns. Common carriage in the case of social media seems to trump all content moderation and associated constitutional rights. It seems odd to suggest that common carriage lacks the sense of proportionality shown by congressional regulation of campaign finance, but it does.
Public Accommodation. Another regulatory approach treats the platforms as public accommodations. Title II of the Civil Rights Act of 1964 bans discrimination in places of public accommodation, including, as it turned out, the Heart of Atlanta Motel and Ollie’s Barbecue in Birmingham, Alabama. The title sets out many examples of public accommodation: motels, inns, and restaurants, among others.73 All are required to provide goods and services “without discrimination or segregation on the ground of race, color, religion, or national origin.”74
Of course, the act does not apply to platforms, which are not restaurants or hotels, and in any case, the law provides no protection against viewpoint discrimination. But that is not the claim. Rather, Randy Barnett argues:
Just as restaurants and hotels are public accommodations reached via government-owned highways, social media platforms can be considered public accommodations that are accessed via the internet. . . . No one is compelled to create a public forum for the expression of speech. It is to their credit that privately owned companies like Facebook and Twitter have successfully created a communications platform that, because it is so user-friendly, has come to be as essential a means of exercising the fundamental privileges of freedom of speech as privately owned restaurants and hotels are to the privilege of traveling. By so doing, they have become public accommodations akin to restaurants and hotels. They are . . . nongovernmental public institutions. And such institutions are typically regulated by the states.75
States would need only to prohibit viewpoint discrimination online, as they have racial discrimination, and matters would be simplified. Spam would not be protected under such a regime; hate-speech rules could theoretically be enforced to protect all groups. Moreover, the leading free speech scholar Eugene Volokh indicates that imposing such limits on companies might well be constitutional.76
There would be costs to banning viewpoint discrimination, however. Take, for instance, Facebook’s Community Standards and the platform’s policies about dangerous individuals and organizations (DIO).77 Facebook proscribes individuals or organizations that “proclaim a violent mission” or “entities that engage in serious offline harms—including . . . advocating for violence against civilians.”78 It also removes “praise, substantive support, and representation of [terrorist, hate, and criminal organizations] as well as their leaders, founders, or prominent members.” Also banned is content that “praises, substantively supports or represents ideologies that promote hate, such as nazism and white supremacy.”79 Note that a connection to violence is not essential here: Advocacy of such ideologies is banned whether or not violence follows, and violence or a history of violence is not necessary for prohibiting speech. Individuals and organizations may, however, “report on, condemn, or neutrally discuss [dangerous organizations and individuals] or their activities.”80 Users must take care, then, to make clear which viewpoint they are expressing. Absent a clear statement of intent, any comment about a dangerous individual or organization will be removed.
As understood by US law, much of Facebook’s DIO policy constitutes classic viewpoint discrimination. That does not mean the policy was designed to discriminate against any political faction. Facebook did not create the DIO Community Standard to suppress American conservative speech. These rules began as a way to deal with terrorists using the platform for propaganda and planning purposes. Absent these rules, terrorist groups throughout the world would be able to advocate and praise violent acts.81 For example, terrorist groups now banned from Facebook would be able to advocate the murder of Israelis and the destruction of the Jewish state. The costs of prohibiting this kind of viewpoint discrimination would be significant. It is true that a rigorous ban on viewpoint discrimination would constrain content moderators inclined to lump Richard Spencer, Charles Murray, and Nikki Haley into the proscribed category of “white supremacy.” But would that benefit really outweigh the cost?
A ban on viewpoint discrimination would also eliminate Facebook’s rules against hate speech. Conservatives and free speech advocates rightly think “hate speech” is a vague term ripe for abuse and thus a danger to speech that should be protected. The platform makes a game effort to ban all hate speech, defined as “a direct attack against people” on the basis of a list of protected characteristics. Concepts and institutions are not protected. Presumably a user could direct “violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation”82 toward integralism but not toward Adrian Vermeule. (Religious affiliation is a protected characteristic for people.)
Nonetheless, the hate-speech rule could not escape a ban on viewpoint discrimination for two reasons. First, the proscribed speech directed toward individuals expresses viewpoints. Second, the Facebook rule permits posting “hate speech” if it is used “self-referentially or in an empowering way.”83 In other words, for Facebook the difference between hate speech and “hate speech” arises from the user’s viewpoint. The Community Standard against hate speech would be illegal if Facebook could not discriminate online among viewpoints.
Eliminating Facebook’s rules might have benefits for conservatives. If one assumes (I do not) that conservative speech ipso facto constitutes “direct attacks” on people with protected characteristics, the current rule discriminates against conservatives, and its end would put a stop to that bias. The interpretation of the rule may pose more of a risk than its current wording; content moderators so inclined might see most conservative thought as a “direct attack” on people with protected characteristics. A ban on viewpoint discrimination might well preclude that possibility.
Eliminating the hate-speech rule would also have costs. Virulent invective does impose costs on its targets. In the United States, those costs are deemed worth the benefits of free speech, given the likelihood that hate-speech rules would be abused for political gain, and on the assumption that hate speech absent incitement is unlikely to cause violence. In some countries, however, such speech could be a prelude to genocide; it is hardly hyperbolic to suggest that the costs of such speech made far from Menlo Park might be measured in deaths sooner or later. Facebook is not everywhere in the world, but the platform has spread to many societies with civil conflicts arising from ethnic or religious rivalry. In such places, government officials and their allies may use Facebook to foster ethnic and religious violence. Finally, a ban on hate-speech regulation might be expected to reduce the net worth of Facebook’s shareholders.
The emphasis on regulating social media platforms may be misplaced. The infrastructure of the internet itself may pose a more comprehensive threat to speech; to use tech jargon, the “stack” and not the “edge” may be the problem. Recall that it was not Twitter or Facebook that took Parler down in 2021 but AWS, a web-hosting service. AWS refused Parler service, and because no backup host had been arranged, the site went down. AWS has a significant market share in web hosting, but it is nowhere near a monopoly. Still, the “stack” depends on a small number of providers that, acting simultaneously or in coordination, could limit who can start and sustain a new social media site. These services in the stack do not curate an experience for users; they look more like the traditional infrastructure providers that have been regulated as common carriers.
Middleware. A new type of software may offer a promising, individualized alternative to the pure laissez-faire approach. Some analysts have proposed giving users control over their social media experience through “middleware,” a term with several definitions. Fukuyama’s team at Stanford’s Cyber Policy Center, the foremost academic group proposing middleware as a response to the “appearance of corruption,” offers this definition:
Middleware is software, provided by a third party and integrated into the dominant platforms, that would curate and order the content that users see. Users would choose among competing middleware algorithms, selecting providers that reflect their interests and have earned their trust, and thereby would dilute the platforms’ editorial control over political communication.84
The group contends that middleware would dilute “the enormous control that dominant platforms have in organizing the news and opinion that consumers see.” Decisions over whether to “institute fact-checking, remove hate speech, filter misinformation, and monitor political interference” would no longer be made by the platforms’ content moderators.85 In sum, middleware would give users control over their social media communication.
This option would satisfy those looking for an individualistic solution, as it allows users to determine what is seen and heard online based on their own preferences. The individual control on offer is closer to liberty understood as noninterference and thus to the classical liberalism of a more libertarian approach. Middleware would also constrain the content moderation performed by the platforms: They would no longer moderate beyond removing illegal speech, and most choices would be made by individual users. Pursuing a political agenda (left, right, or other) through content moderation would become harder if middleware worked as Fukuyama and his associates hope.
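To make the division of labor concrete, here is a minimal sketch in Python of how a middleware layer might sit between a platform’s raw feed and the user. Every name in it (Post, chronological_provider, cautious_provider, render_feed) is hypothetical; this is not a specification from Fukuyama’s team, only an illustration of the idea that the user, not the platform, chooses the curation rule.

```python
# A minimal sketch of the middleware idea: the platform supplies raw posts,
# and a user-chosen third party decides how that feed is filtered and ordered.
# All names are hypothetical illustrations, not any proposal's actual API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    author: str
    text: str
    timestamp: float
    flags: List[str]  # labels the platform or fact-checkers attach, e.g. "disputed"


# A middleware provider is simply a function from a raw feed to a curated feed.
MiddlewareProvider = Callable[[List[Post]], List[Post]]


def chronological_provider(posts: List[Post]) -> List[Post]:
    """Curation that shows everything, newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def cautious_provider(posts: List[Post]) -> List[Post]:
    """Curation that hides posts carrying a 'disputed' flag."""
    return [p for p in posts if "disputed" not in p.flags]


def render_feed(raw_posts: List[Post], provider: MiddlewareProvider) -> List[Post]:
    """The platform's role shrinks to supplying posts and honoring the user's choice."""
    return provider(raw_posts)


if __name__ == "__main__":
    feed = [
        Post("alice", "Election results are in.", 2.0, flags=[]),
        Post("bob", "A contested claim about the count.", 3.0, flags=["disputed"]),
    ]
    # Two users, two providers, two different feeds from the same raw content.
    print([p.author for p in render_feed(feed, chronological_provider)])  # ['bob', 'alice']
    print([p.author for p in render_feed(feed, cautious_provider)])       # ['alice']
```

The design point the sketch makes is simply that the ranking and filtering logic lives outside the platform, so editorial judgment migrates from a single moderator to whichever provider each user selects.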
But how do we get from here to there? Libertarians often ask a pointed question about products that do not exist: Why has no one created them? The platforms have not built such tools. They may believe middleware would reduce their revenue. Perhaps too few social media users would want middleware to justify creating and selling it. Perhaps middleware would require the platforms to behave in ways that contravene their core responsibility to their shareholders. None of this suggests that markets have failed by not producing middleware.
Fukuyama and his associates take another tack. They argue the danger posed by concentrated power over speech and elections is great enough that government should create and sustain middleware companies: “We expect that Congress would have to pass new legislation that authorizes an existing agency, or establishes a new specialized agency, to exercise the regulatory functions to foster a middleware market.”86
To their credit, Fukuyama and his associates address the considerable challenges facing middleware producers and a middleware market. For example, middleware firms would need enough revenue to flourish; one can imagine increased revenues being shared between the platforms and the new middleware firms. However, if the two could not reach an agreement on such sharing, terms “might have to be established by regulators.”87 The new agency might also mandate “the availability of platform APIs [application programming interfaces] to middleware providers, platform compliance with other conditions necessary to allow middleware providers to offer their products.” Fukuyama and his associates also discuss the significant technological challenges middleware companies would face. They conclude, “Administrators . . . will need to work with industry leaders to chart out the assorted responsibilities and prerogatives for both middleware providers and the platforms and to design the technical framework that will allow middleware offerings to thrive.”88
Many people, maybe most, sympathize with Fukuyama’s desire to protect speech from private actors. However, public agencies engaged in regulation that is at once highly technical and political have not been especially attentive to freedom of speech in the past.89 Fukuyama and his associates propose that the government create, if need be, the conditions and revenue necessary for this new middleware sector. It seems likely that some middleware would reflect the values on display at MSNBC and other offerings those of Fox News or other points of view; indeed, that variety is a major advantage of the proposal. Yet, given the money at stake and the obscure, highly technical processes involved, the new agency would attract attention from Congress, the platforms, and organized interests. We should not assume an agency with such power would be better for free speech than the platforms; both are concentrations of power over speech, and the political struggle within an agency is not obviously better for speech than the mixture of economic and political motives that drives content moderation.
In particular, neither this proposal nor the new agency should be expected to appeal to conservatives more than the status quo does. Both the platforms and the agency would be staffed by technocrats, a group not known for its tolerance of conservative speech. On the other hand, these doubts notwithstanding, a more-or-less overt political struggle over the platforms might be better than the other options.
For classical liberals, middleware has significant problems, as noted. But it should not be dismissed out of hand given its considerable attractions; it responds to the “appearance of corruption” in an individualist way, at least in theory. Conservatives might look closely at the possibility of a market for middleware; some minimal regulatory changes regarding privacy might be needed to enable one. The middleware proposal deserves sustained, if skeptical, attention. There may be no market solution to the “appearance of corruption,” but we do not know that yet.
Thus, it appears the individualistic approaches to the “appearance of corruption” are inadequate to the task. Laissez-faire essentially denies that the appearance of platform corruption should matter. Middleware and transparency both seek to serve individual ends, but the former is too complicated and assumes our institutions have a capacity for public action evident nowhere else, and transparency is unlikely to make a difference.
It might be thought that the platforms, if left alone, would do nothing to limit their own power, and any check would have to come from outside. But Facebook has attempted to limit its own power with regard to content moderation. It has set up an “Oversight Board” charged with deciding the propriety of specific moderation decisions by Facebook and with recommending policy changes related to those decisions.
On its face, this would not appear to limit Facebook’s control over public discourse. If Facebook appoints and pays the members and staff of this institution, the board might be more an agent of, than a check on, Facebook’s content policies. Anticipating this concern, Facebook set up an irrevocable trust that pays the board members and administers its operations, including hiring and firing. Facebook also set aside six years of operational funding for the board.
The board has a charter and bylaws that disclose its purposes and powers.90 These foundational documents emphasize the priority of speech among Facebook’s values and the independence of the board members passing judgment on the company’s content moderation. Such independence has its own risks. Facebook might be tempted to pass along difficult decisions to the board, thereby escaping responsibility for obviously necessary but unpopular decisions. Such an escape from responsibility would have other implications: Having a small number of people on an Oversight Board limiting speech or swaying elections might not seem much more democratic than Facebook doing it in-house. After all, Facebook’s three billion users do not elect the board members.
It may be helpful to think of this board as a court. Courts have long served as a remedy to one of the problems of democracy: how to control the power of governors and of the people. Courts in democratic countries are generally appointed rather than elected and are expected to be independent in their judgments, according to the laws agreed on by the people through their elected representatives. Likewise, Facebook managers have created abstract rules informed by specific examples, and users consent to the rules (though such agreement seems unlike the ratification of, say, the United States Constitution). Perhaps the job of Facebook’s Oversight Board is to begin creating a “common law” for social media. If that common law protects speech and elections, the board might be a reasonable response to the democracy problem.
Many people have doubts about the Oversight Board. Conservatives and others with a firm commitment to the First Amendment may have reasons for worry beyond their general distrust of people in Silicon Valley. Because the large social media platforms are truly global, they must find a foundation for content moderation that goes beyond national laws and norms and yet elicits the support of users living in many places. To that end, the Oversight Board is composed of members from 16 countries, and its charter refers to international human-rights norms as one basis for judging appeals to Facebook’s content moderation. Facebook has agreed to “respect” such norms.91
A commonly recognized standard for those norms is the United Nations’ International Covenant on Civil and Political Rights (ICCPR), which the US government ratified in 1992. For those who favor an American level of free speech, the ICCPR bodes both good and ill. The good may be found in its Article 19, which reads a lot like the US First Amendment: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”92 The UN’s Special Rapporteur has interpreted Article 19 to require something like American “strict scrutiny” of restrictions on speech by Facebook.93 With speech restrictions in the US, “strict in theory” does indeed mean “fatal in fact.”
Yet Article 19 also has some dark clouds. It states that speech may be restricted to protect the rights of others. Of course, that’s always true; John Stuart Mill himself thought liberty might be properly restricted to prevent harm to the vital interests of others.94 Unfortunately, over the years, the United Nations has created a plethora of rights, any one of which could presumably outweigh the right to free speech. The ICCPR also includes the notorious Article 20(2), which requires governments to outlaw “hate speech.”95 The US government reserved the right to ignore Article 20(2) as incompatible with the First Amendment. Facebook and other social media companies have made no such reservation.
Conservatives may worry that the Facebook board’s members will not have a strong enough commitment to freedom of speech. Americans made up only a quarter of the initial appointments (five of the 20), two of whom may be identified as some combination of classically liberal and conservative (including myself). More to the point, the United States has the most liberal free speech protections in the world, with extensive protections for political and extreme speech including abstract calls to violence and what is called “hate speech.” Other nations do not offer such protections; their citizens may also, as I was once told by a foreign national, “think Americans are crazy” to offer such protections.
And yet, Americans are not the only people who believe in free speech. Many people elsewhere have experienced more concrete censorship and tyranny, which often translates into support for free speech and other political rights. Facebook set out to find board members who favored freedom of speech.
In the end, the doubts of conservatives and others will be confirmed or refuted by actual decisions. The Oversight Board’s best-known decision did not please some—maybe most—conservatives. The board upheld Facebook’s initial revocation of President Trump’s account in response to his posts on the afternoon of January 6, 2021. It is easy to miss, however, that the board also struck down Facebook’s indefinite suspension of Trump’s account because the company had imposed a penalty that did not exist before January 6. The ruling forced Facebook to determine a finite length for Trump’s suspension. Clearly, Facebook would have preferred that the board resolve the case by declaring Trump reinstated or banned for a definite period. The board thus both supported and contravened Facebook in the Trump decision. Of the two, forcing accountability on the company seems by far the more important upshot.
In 2021, the Oversight Board published 20 decisions that either supported or rejected a Facebook decision. (The Trump decision was an outlier in both supporting and rejecting Facebook’s actions.) Of those 20, the board supported Facebook’s removal of content five times (including the Trump decision) and rejected Facebook’s removal of content 14 times. One further explication is necessary: In the remaining case, upholding Facebook’s decision meant protecting speech. The board upheld Facebook’s choice to keep up a controversial statement by a Brazilian medical group about pandemic lockdowns.96 In that case, the only one of its kind so far, upholding Facebook’s action implied “more speech.” In sum, 15 of the Oversight Board’s 20 decisions so far (75 percent) have favored speech over suppression by the platform.
Let’s look briefly at the exceptional cases in which the Oversight Board upheld Facebook’s suppression of speech. The Trump case involved the president praising protesters in the Capitol while they were rioting during the constitutionally mandated process for electing the president. In a case from the Netherlands, the board upheld Facebook’s enforcement of its “express prohibitions on posting caricature of Black people in the form of blackface.”97 A third case involved an ethnic slur posted in a war zone. A fourth involved speech about the actions of a group fighting a civil war; the speech may have been warning people in another region or spreading rumors that could lead to harm against members of another ethnic group. I suspect the US government could not censor any of this speech because of the First Amendment. Yet none of these actions by Facebook or its board seems much of a threat to liberal democracy.
This overview of decisions prompts some qualifications. There have been only 20 cases so far, which marks a good start for the Oversight Board, but after more decisions, analysts may decide the good start was misleading. Even now, the 20 cases decided are drawn from perhaps a half-million appeals. It will be difficult for the board to affect Facebook’s overall content moderation without the platform’s cooperation; it may even prove difficult to say whether Facebook is cooperating by applying binding decisions more generally. However, the Oversight Board, like a court, has set out reasoned responses to specific instances of Facebook’s content moderation. Those decisions, along with Facebook’s actions recounted in each one, are open to public comment and criticism. None of the other platforms has gone even that far.
The largest social media platforms are not the only place in the United States to talk about and debate politics. They are, however, an important place for such debates and may become essential in the future. What is permitted on those platforms matters now and may matter a great deal soon, and many conservatives do not believe the managers of those platforms will protect conservative speech.
This conservative complaint against social media is rooted in distrust of the evident progressivism of Silicon Valley elites, as indicated by the political activity of some of them. Evidence that such people act systematically on their presumed animus toward American conservatives seems unpersuasive, in part because the data that would settle the question may not exist. Indeed, the evidence offered for such bias may be more a product of our culture wars than of a considered effort to test our assumptions. For some, the lack of ironclad evidence of bias settles the question.
Yet the importance of social media warrants concerns about current and future content moderation. Managers of the most significant social media platforms are indeed overwhelmingly on the left side of the political spectrum. Conservatives worry that the left’s long march through institutions will continue through the gates of Facebook and Twitter. Is that concern so absurd? Which of us, conservative or not, would trust our opponents with the potential power to push policymaking or swing elections?
Congress had the authority to limit political donation amounts to counter the appearance that large donations corrupted elections and policymaking; those limits remain despite little convincing evidence that large contributions corrupted either. Similarly, we may doubt that social media companies have used content moderation to sway elections or policy debates. But the circumstances of moderation suggest a reasonable person might believe elections and policy debates could be corrupted by moderation. Certainly most people believe such moderation is politically biased. For some, this belief raises doubts now about the legitimacy of elections. In time, those doubts may grow. As with campaign finance, government has the power to counter such appearances of corruption, including even the restriction of a First Amendment right.
That said, we should be careful and seek evidence that this “appearance of corruption” does foster doubts about democracy. Just because the government has the power to limit the rights of social media companies to combat the “appearance of corruption” does not mean such power should be exercised. We ought to keep in mind that, while contribution limits were onerous, they left some room for expression intact. Completely substituting government mandates for editorial judgment by the platforms goes well beyond earlier efforts to combat the “appearance of corruption” in our polity. Laissez-faire may turn out to be the best of a bad set of options if self-regulation efforts like Meta’s Oversight Board work reasonably well. If anything is done, it should be with a light hand and with regard to the likelihood of shoring up public confidence in our elections and public debates.
We are only beginning to struggle toward a social modus vivendi on these matters. We should remain skeptical about government solutions that offer more costs than benefits, and we must keep in mind the liberal and democratic values at issue in any proposal. The platforms themselves have begun to build institutions that offer partial solutions to the “appearance of corruption” problem. Those institutions may fail, but their outputs may be read and judged by everyone on the internet. They offer the transparent, public justification for their decisions that content moderation so often lacks. We may well conclude in a few years that, the events of 2020 notwithstanding, social media has turned out to be a good thing for conservatives and everyone else.
John Samples is a vice president of the Cato Institute and a member of the Oversight Board.
Evidence of Public-Private Collusion Complicates Online Censorship Debate
By Bret Swanson and John Samples