Meta’s censorship of Palestine content is ‘systemic,’ Human Rights Watch finds

A Palestinian protester waves a Palestinian flag during a demonstration in the village of Ras Karkar, west of Ramallah.

Meta has engaged in “systemic online censorship” and its “policies and practices have been silencing voices in support of Palestine” amid the war in Gaza, according to a new report by Human Rights Watch.

In the 51-page study published Wednesday, the human rights organization concluded the tech giant’s content moderation policies have “censored or otherwise unduly suppressed” more than 1,000 instances of “peaceful content” on Instagram and Facebook.

“Meta’s policies and practices have been silencing voices in support of Palestine and Palestinian human rights on Instagram and Facebook in a wave of heightened censorship of social media amid the hostilities between Israeli forces and Palestinian armed groups that began on October 7, 2023,” reads the report.

HRW has called the censorship of this content “systemic and global,” acknowledging that while “Meta allows a significant amount of pro-Palestinian expression and denunciations of Israeli government policies” on its platforms, this does not negate the “undue restrictions on peaceful content” that have been well documented since the start of the conflict in Gaza in October.

A Meta spokesperson responded to Mashable’s request for comment on the report, saying “the implication that we deliberately and systemically suppress a particular voice is false.”

In the study, HRW identified six patterns of “undue censorship,” falling under distinct categories:

  • The removal of posts, stories and comments

  • Suspension or permanent disabling of accounts

  • Restrictions on the ability to engage with content—such as liking, commenting, sharing, and reposting on stories—for a specific period, ranging from 24 hours to three months

  • Restrictions on the ability to follow or tag other accounts

  • Restrictions on the use of certain features, such as Instagram/Facebook Live, monetization, and recommendation of accounts to non-followers

  • “Shadow banning,” the significant decrease in the visibility of an individual’s posts, stories, or account, without notification, due to a reduction in the distribution or reach of content or disabling of searches for accounts

HRW identified these patterns after reviewing 1,050 cases across 60 countries of “peaceful content in support of Palestine that was censored or otherwise unduly suppressed,” according to the report. The study also incorporated research from international organizations including 7amleh (the Arab Center for the Advancement of Social Media) and Access Now.

Meta disputes report’s findings

A Meta spokesperson responded to Mashable concerning the report, calling it “misleading.”

“This report ignores the realities of enforcing our policies globally during a fast-moving, highly polarized and intense conflict, which has led to an increase in content being reported to us. Our policies are designed to give everyone a voice while at the same time keeping our platforms safe,” read the statement.

“We readily acknowledge we make errors that can be frustrating for people, but the implication that we deliberately and systemically suppress a particular voice is false. Claiming that 1,000 examples – out of the enormous amount of content posted about the conflict – are proof of ‘systemic censorship’ may make for a good headline, but that doesn’t make the claim any less misleading.”

While it is difficult to calculate the total number of posts about the war in Gaza across social media platforms, for context, X has said it saw more than 50 million related posts in a single weekend.

The spokesperson also said that Meta is the only company “in the world to have publicly released human rights due diligence on Israel Palestine related issues.”

“We released that due diligence publicly in 2022, and also published an update in September 2023,” said the statement.

In its report, HRW pinpointed Meta’s Dangerous Organizations and Individuals (DOI) policy, which bars organizations and individuals that tout “a violent mission,” as one of the fundamental issues in these cases of censorship. The policy, according to HRW, “quells the discussion around Israel and Palestine” and has been used in some cases to “erroneously flag protected expression.”

Meta pointed to its plans to review the DOI policy, which were outlined in the company’s September update.

“The HRW report ignores this 2023 September update to the human rights due diligence, in which we made clear that we were aiming to update our policies that are relevant to the praise or glorification of violent acts, including our Dangerous Organizations and Individuals policy, in H1 2024,” the company’s spokesperson told Mashable.

Meta’s recent actions face intense scrutiny, complaints

Past studies have found that Meta has a history of suppressing and censoring discussion of issues related to Palestine and Israel on its platforms. Since the Oct. 7 attack on Israel orchestrated by Hamas, and Israel’s subsequent bombardment and siege of the Gaza Strip, which has resulted in over 20,000 civilian casualties, HRW says Meta has “increasingly silenced voices” posting in solidarity with Palestine on its platforms.

“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, HRW’s acting associate technology and human rights director, in a statement.

“Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”

In many cases, as Mashable has reported, users have claimed their posts promoting awareness about the situation in Gaza have been taken down or shadow-banned on Instagram and Facebook. A pro-Palestinian Instagram account known for posting on-the-ground information from Gaza was confirmed to have been locked by Meta for “security reasons”; in another instance, bios on the app that featured the Palestinian flag were automatically mistranslated to read “Palestinian terrorists are fighting for their freedom.” Meta apologized for the latter issue and fixed it, but did not explain why it happened. Meta-owned WhatsApp also came under fire in November after reports that its AI sticker generator produced images of Palestinian children holding guns.

In its report, HRW also criticized Meta’s policies as “inconsistent and erroneous,” and identified the company’s heavy reliance on automated tools for content moderation as a major contributor to the documented cases of censorship. In other cases, the report found, “many users recorded evidence of anti-Palestinian and Islamophobic content that remained online even after they reported it to Instagram and Facebook, in the same post where the users’ initial comment was removed.”

“Instead of tired apologies and empty promises, Meta should demonstrate that it is serious about addressing Palestine-related censorship once and for all by taking concrete steps toward transparency and remediation,” said Brown.

To meet its human rights due diligence responsibilities, HRW is calling on Meta to improve transparency and consistency, and to ensure that decisions to censor or remove content are not sweeping or biased. The organization also said Meta should be more transparent about government requests to remove or restrict content, such as what HRW dubbed “aggressive” content removal requests from Israel’s government and its Cyber Unit to social media companies.

Elsewhere, other platforms like X (formerly Twitter) and AI platforms such as ChatGPT and Google’s Bard have been accused of disinformation and suppression as the crisis in Gaza has reached new heights.

Across the internet — and oftentimes in response to digital suppression — people have taken to expressing solidarity with Palestine in whichever way they can. This includes taking part in digital rallies and using watermelon emojis 🍉 and TikTok filters aimed at fundraising.