Policy

Human Rights: Meta’s policies silenced voices supporting Palestinian rights

Human Rights Watch said on Thursday that Meta’s content moderation policies and systems have increasingly silenced voices supporting Palestinians on Instagram and Facebook amid the hostilities between Israeli forces and Palestinian armed groups.

The organization presented its findings in a 50-page report titled “Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook.”

The report said, “Meta must allow protected expression on its platforms and reform its policies to make them consistent with human rights standards, fair, and non-discriminatory.” The organization noted that its report documents “a pattern of undue removal and suppression of protected speech, including peaceful expression in support of Palestine and public debate about Palestinian human rights.”

Human Rights Watch said, “The problem stems from flawed Meta policies and their inconsistent and erroneous enforcement, excessive reliance on automated content moderation tools, and undue government influence over content removals.”

Deborah Brown, acting associate technology and human rights director at Human Rights Watch, said, “Meta’s censorship of content in support of Palestine adds insult to injury at a time when Palestinians are suffering unspeakable atrocities and already face stifled expression.”

She added, “Social media is a crucial platform for people to testify and speak out against violations, while Meta’s censorship further erases the suffering of Palestinians.”

Human Rights Watch reviewed 1,050 cases of online censorship from over 60 countries. While these cases are not necessarily representative of all censorship, they are consistent with years of reporting and advocacy by Palestinian, regional, and international human rights organizations documenting Meta’s censorship of content supporting Palestinians.

Human Rights Watch identified six main patterns of censorship, each repeated in at least 100 cases: removal of content; suspension or deletion of accounts; inability to engage with content; inability to follow or tag accounts; restrictions on the use of features such as Instagram and Facebook Live; and “shadow banning,” a term for a significant, unannounced decrease in the visibility of a person’s posts, stories, or account.

In hundreds of the documented cases, Meta invoked its Dangerous Organizations and Individuals (DOI) policy, which incorporates in full the United States’ lists of designated terrorist organizations. Meta has cited these lists and applied them sweepingly to restrict legitimate speech about the hostilities between Israel and Palestinian armed groups.

According to the organization, Meta “misapplied its policies on violent content, incitement, hate speech, and nudity and sexual activity; inconsistently applied its newsworthiness allowance; and removed dozens of pieces of content documenting Palestinian injuries and deaths that had news value. Meta acknowledges that its enforcement of these policies is flawed.”

Human Rights Watch had stated in a 2021 report that it “documented Facebook’s censorship of discussions on human rights issues related to Israel and Palestinians” and warned that Meta “silences many people arbitrarily and without explanation.”
