The ongoing Palestinian-Israeli crisis is further intensified in the information space by the failure of the leading tech giants to combat the online disinformation, incitement to violence, and hate speech proliferating on their platforms, despite countless warnings. The reluctance or inability of social media companies generally, and Meta in particular, to safeguard their users implicates them in the dehumanization of Palestinians, the normalization of calls for violence against them, and the amplification of anti-Palestinian racism. Deliberately or not, tech giants play a pivotal role in suppressing, isolating, stereotyping, slandering, demonizing, and degrading Palestinian viewpoints online because of systematic algorithmic biases, biased content moderation, ineffective reporting mechanisms, and a general lack of understanding of local contexts and languages.

Since Oct. 7, 2023, social media platforms have been flooded with disinformation and hateful rhetoric that have worsened the crisis. Meta’s platforms, including Facebook, Instagram, and WhatsApp, have played a particularly consequential role because of their especially wide use in the region and the repeated, mysterious “technical glitches” that have intensified their algorithmic bias against Palestinians and Palestine. A noticeable rise in ugly stereotyping on social media is further fueling anti-Palestinian racism, extremist views, and polarization.

Algorithmic bias amplifies stereotyping

The impact of Meta’s algorithmic bias on content generation and dissemination has become especially clear. For instance, when WhatsApp’s artificial intelligence (AI) image-generation feature was prompted with terms like “Palestinian,” “Palestine,” or “Muslim boy Palestinian,” it often produced images depicting guns or boys with guns. Prompts for “Israeli boy,” by contrast, generated innocuous images of children playing soccer, reading, and so on. The contrast held even for explicitly military prompts, such as “Israel army” or “Israeli defense forces,” which notably tended not to return images featuring weapons; instead, the AI produced images of uniformed individuals in various poses, most often smiling. Meta’s algorithms have shown themselves to be neither neutral nor objective.
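Disparities like these are typically surfaced with a paired-prompt audit: identical prompt templates varied only by the group named, with outputs reviewed side by side. The sketch below is illustrative only; `generate_image` is a hypothetical stand-in for an AI image-generation endpoint, not an actual Meta API, and the prompt pairs simply echo the examples above.

```python
# Minimal paired-prompt audit sketch. `generate_image` is a hypothetical
# callable (prompt -> image) standing in for any AI image generator;
# nothing here reflects Meta's actual APIs or internals.

PROMPT_PAIRS = [
    ("Palestinian boy", "Israeli boy"),
    ("Palestine army", "Israeli defense forces"),
]

def paired_prompt_audit(generate_image, prompt_pairs):
    """Collect outputs for each paired prompt so reviewers can compare them side by side."""
    results = []
    for prompt_a, prompt_b in prompt_pairs:
        results.append({
            "pair": (prompt_a, prompt_b),
            "outputs": (generate_image(prompt_a), generate_image(prompt_b)),
        })
    return results  # reviewers then tag each output, e.g. "depicts weapons: yes/no"

# Usage with a dummy generator, just to show the shape of the audit:
dummy = lambda prompt: f"<image for '{prompt}'>"
for row in paired_prompt_audit(dummy, PROMPT_PAIRS):
    print(row["pair"], "->", row["outputs"])
```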

Meta’s persistent record of failures

In May 2021, efforts by Israeli forces to displace Palestinian residents of some Jerusalem neighborhoods sparked a nearly two-week-long attack on Gaza that killed more than 260 Palestinians, including children. Palestinian users immediately found their views suppressed or their accounts removed across various social media sites. Advocacy groups pushed Meta to engage a third party to assess how its platforms, including Facebook and Instagram, moderated Palestinian-Arabic and Israeli-Hebrew content. The resulting Business for Social Responsibility (BSR) human rights due diligence report, published in September 2022, affirmed claims that Meta’s policies were biased against Palestinians, effectively infringing on their freedom of speech, assembly, and political engagement, as well as their freedom from discrimination.

One of the BSR report’s significant recommendations was that Meta continue working to implement functional and efficient Hebrew hostile speech classifiers. The recommendation responded to widespread hate speech and incitement to violence against Palestinians, in Hebrew, on Meta’s social media platforms. By September 2023, the tech company had informed its Oversight Board that it had achieved this goal. Soon after Oct. 7, however, Meta internally admitted it had refrained from using its Hebrew hostile speech classifier on Instagram comments due to insufficient data.

Consequently, over the past month, these long-running failures have contributed significantly to transforming violent rhetoric into real-world harm. Meta’s social media platforms have been flooded with disinformation, incitement to violence, and hateful and violent speech against Palestinians, with some posts even bragging about deadly attacks on the Gaza Strip. This content, originating from regular users all the way up to top Israeli politicians, has remained largely unmoderated and has already played a substantial role in exacerbating real-world harm. By fostering a surge in anti-Palestinian racism and normalizing violence, it has arguably encouraged Israeli settler attacks on Palestinians and their property in the West Bank.

Meta’s insufficient investment in combating incitement and racist speech has also permitted online hate speech to spread far beyond Palestine. Indeed, hateful comments and incitements to violence across social media have affected Jewish communities as well, spilling beyond the internet in distressing acts like the destruction of the historic al-Hammah synagogue in Tunisia.

But even as the company permits this type of inflammatory speech without proper content moderation, it actively suppresses Palestinian content that seeks to document human rights violations in Palestine, such as recorded evidence pertaining to the al-Ahli Arab Hospital bombing, or posts by lesser-known Facebook users that include the pre-Nakba map. Additionally, shadow banning — a term for the discreet measures social media platforms allegedly take to restrict the visibility and reach of a post or user — has disproportionately affected Palestinians and their allies, including journalists, human rights defenders, and media organizations, since Oct. 7.
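The behavior users describe as shadow banning is usually modeled as a silent penalty in feed ranking: the post is never removed, but its ranking score is scaled down so it rarely surfaces, and the author is never notified. The sketch below is a hypothetical illustration of that idea, not a description of Meta’s actual ranking systems; all names and values are assumptions.

```python
# Hypothetical illustration of shadow banning as a silent ranking penalty.
# The function name and numbers are invented for illustration; this does
# not describe Meta's actual feed-ranking pipeline.

def feed_score(base_engagement: float, visibility_factor: float = 1.0) -> float:
    """Ranking score after applying a (possibly silent) visibility penalty.

    visibility_factor = 1.0 -> the post surfaces normally in feeds and search;
    values near 0 bury the post without notifying its author.
    """
    return base_engagement * visibility_factor

normal_post = feed_score(100.0)               # 100.0: distributed normally
shadow_banned_post = feed_score(100.0, 0.05)  # 5.0: effectively invisible
print(normal_post, shadow_banned_post)
```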

After identifying a notable rise in hateful Instagram comments originating from Israel, Lebanon, and the occupied Palestinian territory over the past month, Meta tightened its comment moderation filters as a temporary risk response. Specifically, for Israel, Lebanon, Syria, Egypt, and the occupied Palestinian territory, Meta lowered the threshold of certainty its moderation systems require to remove an inflammatory comment on an Instagram post, down from its standard threshold of 80% to 40%. Subsequently, purportedly to prevent hostile speech, it further reduced this threshold but only for Palestine — down to just 25% certainty — resulting in significant stifling of Palestinian expression online.
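The mechanics matter here: a moderation classifier assigns each comment a certainty score, and the comment is removed when that score clears a regional threshold. Halving the threshold, and then nearly halving it again for Palestine alone, sweeps in far more borderline and often innocuous speech. The sketch below illustrates that logic under stated assumptions; the classifier, region codes, and data structures are hypothetical, with only the 80%/40%/25% figures taken from the reporting described above.

```python
# Illustrative sketch of threshold-based comment removal. Only the
# 80%/40%/25% thresholds come from the reporting cited in the text; the
# classifier, region codes, and structure are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    region: str             # e.g. "PS" for the occupied Palestinian territory
    hostility_score: float  # classifier certainty the comment is hostile, 0.0-1.0

REMOVAL_THRESHOLDS = {
    "default": 0.80,                                 # standard threshold
    "IL": 0.40, "LB": 0.40, "SY": 0.40, "EG": 0.40,  # temporary regional lowering
    "PS": 0.25,                                      # Palestine-only further reduction
}

def should_remove(comment: Comment) -> bool:
    """Remove the comment when the classifier's certainty meets the regional threshold."""
    threshold = REMOVAL_THRESHOLDS.get(comment.region, REMOVAL_THRESHOLDS["default"])
    return comment.hostility_score >= threshold

# A comment the model is only 30% sure is hostile survives everywhere
# except under the 25% Palestine-only threshold:
borderline = Comment(text="...", region="PS", hostility_score=0.30)
print(should_remove(borderline))  # True; the same score in a "default" region -> False
```

At a 25% threshold, a comment the system is 75% sure is *not* hostile can still be removed automatically, which is how aggressive filtering translates into the stifling of expression described above.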

Additionally, Meta’s limited comprehension of local social and political contexts, along with inadequate investment in moderation improvements, has contributed to the systematic defamation of Palestinians on its platforms. For example, in mid-October, an Instagram user documented a particularly egregious translation error. His profile identified him as Palestinian and featured a Palestinian flag alongside the Arabic phrase “alhamdulillah,” meaning “praise be to God” in English. However, clicking “see translation” generated an English rendering that disturbingly read, “Praise be to God, Palestinian terrorists are fighting for their freedom.” Not only do such incidents slander and unfairly stereotype Palestinians, but the company’s apologies fall short of rectifying the damage, particularly given how often these failures recur.

Furthermore, some users’ Instagram comments featuring the Palestinian flag emoji are reportedly being concealed, with users indicating that these comments are labeled as “potentially offensive.” Meta spokesperson Andy Stone later confirmed this, acknowledging that the company is indeed hiding comments containing the Palestinian flag emoji in contexts deemed “offensive” under company rules.

The company has yet to announce any specific investments or improvements to address its persistent technical and internal process issues. It continues to rely primarily on its automated moderation systems and biased algorithms, as well as on user reports of specific posts, rather than proactively safeguarding its global user base.

Conclusion

The ongoing crisis in Palestine-Israel stands as a glaring example of tech giants’ profound failure to combat disinformation, incitement, and hate speech on their platforms, despite repeated warnings. The reluctance of Meta in particular to safeguard its users implicates the company in the defamation and dehumanization of Palestinians, the normalization of violence against them, and the perpetuation of anti-Palestinian racism.

It is now evident that Meta, alongside other tech giants, must reevaluate its internal practices. This should include an independent external audit of the changes made in response to the 2022 BSR report and their impact on ongoing escalations in online hate speech and incitements to violence. Furthermore, investments in moderation improvements and addressing algorithmic biases are imperative to create a more inclusive and respectful online environment, free from hate and discrimination. The paramount consideration should be the safety, dignity, and rights of all users, irrespective of their background or origin.


Mona Shtaya is the Campaigns and Partnerships Manager (MENA) and Corporate Engagement Lead at Digital Actions. She is also a non-resident fellow at the Tahrir Institute for Middle East Policy (TIMEP) focusing on surveillance and digital rights in the MENA region. Additionally, she’s a Non-Resident Scholar in the Middle East Institute’s Palestine and Palestinian-Israeli Affairs Program.

Photo by Tayfun Coskun/Anadolu via Getty Images


The Middle East Institute (MEI) is an independent, non-partisan, not-for-profit, educational organization. It does not engage in advocacy and its scholars’ opinions are their own. MEI welcomes financial donations, but retains sole editorial control over its work and its publications reflect only the authors’ views. For a listing of MEI donors, please click here.