Free Speech Social Media Doesn’t Exist

On the day Meta’s new app, Threads, launched, CEO Mark Zuckerberg explained that it would be “an open and friendly public space for conversation.” In a not-so-subtle dig at Twitter, he argued that keeping the platform “friendly” as it expands would be crucial to its success. Within days, however, Media Matters claimed that “Nazi supporters, anti-gay extremists, and white supremacists” were “flocking to Threads,” posting “slurs and other forms of hate speech.” The group argued that Meta did not have strict enough rules, and that Instagram, the platform that Threads is tied to, has a “long history of allowing hate speech and misinformation to prosper.”

Such concerns about hate speech on social media are not new. Last year, EU Commissioner for Internal Market Thierry Breton called efforts to pass the Digital Services Act “a historic step towards the end of the so-called ‘Wild West’ dominating our information space,” which he described as rife with “uncontrolled hate speech.” In January 2023, experts appointed by the United Nations Human Rights Council urged platforms to “address posts and activities that advocate hatred … in line with international standards for freedom of expression.” This panic has led to an explosion in laws that mandate platforms remove illegal or “harmful” content, including in the EU, Germany, Brazil, and India.

These concerns imply that social media is a lawless free-for-all when it comes to hate speech. This is not true. Most platforms have strict rules prohibiting hate speech, which have expanded significantly over the past several years. Many of these policies go far beyond both what is required and what is permissible under international human rights law (IHRL).

We know this because the Future of Free Speech project at Vanderbilt University, which I direct, published a new report analyzing the hate speech policies of eight social media platforms–Facebook, Instagram, Reddit, Snapchat, TikTok, Tumblr, Twitter, and YouTube–from their founding until March 2023.

While none of these platforms are formally bound by IHRL, all except Reddit and Tumblr have committed to respect international standards by signing on to the U.N. Guiding Principles on Business and Human Rights. Moreover, in 2018, the U.N. special rapporteur on freedom of opinion and expression proposed a framework for content moderation that “puts human rights at the very centre.” Accordingly, we compared the scope of each platform’s hate speech policy to Articles 19 and 20 of the U.N.’s International Covenant on Civil and Political Rights (ICCPR).

Article 19 ensures “everyone … the right to freedom of expression,” including the rights “to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of his choice.” However, this right can be subjected to restrictions that are “provided by law and are necessary” for compelling interests, such as “respect of the rights or reputations of others.” Article 20 mandates that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.” Any restrictions on freedom of expression under Articles 19 and/or 20 must satisfy strict requirements of legality, legitimacy, and necessity. These requirements are meant to protect against overly vague and broad restrictions, which can be abused to prohibit political and religious dissent, and to safeguard speech that may be deeply offensive, but doesn’t reach the threshold of incitement.

How do the platform policies on hate speech measure up? In some areas, they align closely with international standards. A decade ago, more than half of the eight platforms did not have an explicit hate speech prohibition. In 2014, only 38 percent of the analyzed platforms prohibited “hate speech” or “hateful content.” By 2018, this percentage had risen to 88 percent–where it remains today. Similarly, a decade ago, only 25 percent of platforms banned incitement to or threats of violence on the basis of protected characteristics, but today, 88 percent of the platforms do. These changes are generally in line with IHRL’s prohibition of hate speech.

In other ways, the hate speech restrictions of platforms have gone beyond international human rights standards. In 2014, no platforms banned dehumanizing language, denial or mocking of historical atrocities, harmful stereotypes, or conspiracy theories in their hate speech policies–none of which are mentioned by Article 20. By 2023, 63 percent of the platforms banned dehumanization, 50 percent banned denial or mocking of historical atrocities, 38 percent banned harmful stereotypes, and 25 percent banned conspiracy theories. It is doubtful that these prohibitions satisfy Article 19’s requirements of legality and necessity.

Many platforms’ hate speech policies also cover identity-based characteristics that are not included in Article 20. The average number of protected characteristics covered by platform policies has gone from fewer than five before 2011 to 13 today. Some platforms ban hate speech that targets characteristics like weight, pregnancy, or age. Others prohibit it when targeting disease, veteran status, or victimhood of major events. Under IHRL, most of these characteristics do not enjoy the same protected status as race, religion, or nationality, which have frequently been used as the basis to incite discrimination and hostility against minorities, sometimes contributing to mass atrocities.

Our research cannot identify the exact causes of this scope creep, but platforms have clearly faced mounting financial, regulatory, and reputational pressure to police additional categories of objectionable content. In 2020, more than 1,200 business and civil society groups took part in the Stop Hate for Profit boycott, which used financial leverage to pressure Facebook into policing more hateful content. This type of concerted pressure encourages platforms to adopt a “better-safe-than-sorry” attitude when it comes to moderation policies. The expansion in protected characteristics may reflect what University of California, Los Angeles, law professor Eugene Volokh calls “censorship envy,” where groups pressure platforms to afford them protection based on the inclusion of other groups, making it difficult for platforms to deny any without appearing biased.

Most platforms refuse to share raw data with researchers, so identifying any causal link between changes in policy scope and enforcement volume is difficult. However, studies in the United States and Denmark suggest that hate speech comprises a relatively small proportion of social media content. Numerous examples exist of policies that have caused collateral damage to political expression and dissent. In May 2021, Meta admitted that mistakes in its hate speech detection algorithms led to the inadvertent removal of millions of pro-Palestinian posts. In 2022, Facebook removed a post from a user in Latvia that cited atrocities committed by Russian soldiers in Ukraine, and quoted a poem including the words “kill the fascist,” a decision that the platform’s Oversight Board overturned partially based on IHRL.

The enforcement of hate speech policies can also lead to the erroneous removal of humor and political satire. Facebook’s own data suggests a massive drop in hate speech removals due to AI improvements that allowed it to identify posts that “could have been removed by mistake without appropriate cultural context,” such as “humorous terms of endearment used between friends.” In 2021, the U.S. columnist and humorist David Chartrand described how it took Facebook all of three minutes to remove a post of his that read “Yes, Virginia, there are Stupid Americans,” for violating its hate speech policies.

Our analysis shows that many platforms’ hate speech policies do not meet the standards of human rights they claim to uphold. So perhaps the right analogy for social media is not a lawless Wild West–but rather a place where no one knows when or how the ever-changing rules will be enforced. If so, the right path forward is not to make these rules even more complex.

Instead, platforms should consider directly tying their hate speech rules to international human rights law. This approach would create a more transparent, speech-protective environment, though it would not eliminate inconsistent or erroneous enforcement.

Alternatively, platforms could decentralize content moderation. This option would give users the ability to opt out of seeing content that is offensive to them or contrary to their values, while also protecting expression and reducing platform power over speech. Meta seems to envisage steps in this direction by making Threads part of the so-called fediverse, meaning that users can connect with users on platforms and protocols not controlled by Meta. Combining the two approaches is also possible: content moderation and curation can be decentralized in a way that still respects international human rights law. None of these solutions is perfect, and none will satisfy everyone. But despite the very real challenges and trade-offs that they entail, they are preferable to the status quo.
