Over the past decade, social media and communications platforms such as Facebook, Twitter, and WhatsApp have emerged as important spaces for civil society, journalists, and everyday people in the Middle East to express themselves and organize. However, as we noted in our first piece in this series, users’ experiences on these platforms often differ as platforms’ enforcement of their content policies varies by geography, language of use, and context. These flaws in the content moderation system can harm users residing in and around the Middle East, as well as those who use Middle Eastern languages such as Arabic. Although these disproportionate outcomes are commonly discussed, there are only a handful of widely documented and circulated examples, and the majority of evidence relies on informal anecdotes. To fill this gap, over the past several months we spoke to a range of activists, journalists, and members of civil society from the Middle East about how they interact with online content moderation systems, how these experiences have influenced their online behaviors, and what broader trends they see at play.[1]

A fundamental lack of transparency

One of the reasons for this vast gap in evidence is that internet platforms do not, for the most part, publish robust data around how their content moderation practices are enforced in the Middle East and North Africa (MENA). Currently, some social media platforms publish transparency reports outlining the number of government requests for removal of illegal content they receive per country. However, governments can also request the removal of content on the grounds that it violates a platform’s content policies rather than local law. In these instances, platforms may not categorize the request as a “government request,” providing no transparency into the government entity’s role in mediating online expression.

Numerous advocates we spoke to noted that current transparency reporting practices do not adequately illuminate the full scope of cooperation and pressure between governments and companies. As anecdotes of unexplained content removals and account suspensions proliferate, transparency around these communications becomes increasingly critical.

Additionally, platforms such as Facebook, Twitter, and TikTok publish transparency reports outlining the scope and scale of content they remove for violating content policies, including those on hate speech, terrorist propaganda, and graphic violence. However, this data is shared in aggregate and is not broken down by country or language, making it difficult to gather evidence of specific linguistic or cultural discrimination. This is also visible in ad transparency reporting. One interviewee outlined how Facebook’s ad library recently expanded to include ads run in almost every country in which it operates. But while Facebook provides an in-depth ad transparency report for some countries (such as the United States), for many countries in the Middle East users can only perform keyword searches of the ad library using an API. This means that users have fewer transparency features at their disposal, and generally must know what they are looking for before beginning their search.
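To make the difference concrete, the sketch below shows roughly what that keyword-based access looks like in practice: a researcher must already have a search term in hand and query Facebook’s Ad Library API directly, rather than browse a country-level report. It is a minimal example only; the API version, field names, and placeholder access token are assumptions and should be checked against Facebook’s current Ad Library API documentation.

```python
# Minimal sketch of a keyword search against Facebook's Ad Library API.
# Assumptions: a valid access token and the endpoint, parameter, and field names
# listed in Facebook's public Ad Library API documentation; verify before use.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; issued via Facebook's developer tools

params = {
    "search_terms": "احتجاج",          # Arabic for "protest"; the caller must supply the keyword
    "ad_reached_countries": '["EG"]',  # country codes, serialized per the API docs
    "ad_active_status": "ALL",
    "fields": "page_name,ad_creative_bodies,ad_delivery_start_time",
    "limit": 25,
    "access_token": ACCESS_TOKEN,
}

resp = requests.get("https://graph.facebook.com/v18.0/ads_archive", params=params)
resp.raise_for_status()

for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```

The specific parameters matter less than the workflow they imply: without a browsable, country-level report, a user has to arrive with a keyword already in mind, which is precisely the limitation interviewees described.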

As we noted in our previous piece, it is also difficult to understand how content moderation practices differentially impact certain communities of users. Companies do not share substantive data around the efficacy of their content moderation algorithms, especially across different languages and content policy areas. Because internet platforms provide so little transparency around how they enforce content moderation practices across regions, and what impact these efforts have on online speech, reports and anecdotes from civil society, journalists, and users are increasingly important to identifying problems and trends.

Functional discrimination

One of the key trends that emerged during our interviews was that content moderation systems can enable functional discrimination. Several interviewees noted that although internet platforms share information about their content policies, privacy policies, and appeals processes online, this information is not always readily accessible in languages such as Arabic, and is often hard to find. Others noted that when information is available in their language, it is often poorly translated or difficult to understand. This prevents users, researchers, and others from effectively understanding the rules governing the platforms they are trying to use, and from advocating for their own rights in the content moderation ecosystem. For example, earlier this year, TikTok deleted the account of the Palestinian news network QNN. The outlet’s editor, Ahmad Jarrar, told Vice that he found it difficult to understand the platform’s moderation policies, and that he was only able to regain access to the account after issuing a press release on the situation. Even once the account was reinstated, Jarrar said, the platform did not share further information on why it had been removed.

Misunderstanding linguistic and cultural nuance

During our interviews, we tried to make sense of the growing pattern of unexplained content and account enforcement actions that many Arabic-speaking and Middle East-based users have been subject to. In many cases, the patterns of disproportionate moderation of MENA social media users reflect linguistic and cultural dynamics.

Arabic, like many languages, exists on a spectrum of diglossia, in which a variety of regional dialects and accents operate primarily in the spoken context, while a single, standardized written language serves as a go-between and is used for more formal communication, including media and political speech. However, Arabic is more diglossic than most languages: Modern Standard Arabic, the unifying written form primarily used in political and journalistic communication and for training natural language processing systems, varies quite significantly from the spoken dialects. These dialects have incredibly complex and distinct regional variations, each with its own slang and colloquial speech. Speech on social media reflects many degrees of this colloquialism; as a result, Arabic colloquial dialects are far less standardized and therefore far less likely to be recognized by the translation algorithms of platforms like Facebook, which rely almost entirely on artificial intelligence and are relatively new. It seems likely, then, that a good deal of speech and content posted on platforms by Arabic-speaking users will be misunderstood, particularly if that speech is at all funny, impassioned, excitable, angry, or emotional (as colloquial speech tends to be).

In addition, because Arabic’s short vowels are usually omitted in writing, many distinct words appear identical in their unvoweled forms to the untrained eye, creating an increased risk of completely disastrous mistranslations. These are something of a running joke among Arabic speakers and translators (although the results, of course, can be anything but funny). A Libyan academic, for example, told us that she and another Libyan writer were communicating via Twitter and used a colloquial word that roughly translates to “idiot.” The post was flagged and removed by Twitter with no explanation.
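The ambiguity is easy to demonstrate. In the sketch below, four dictionary words spelled with the same three consonants collapse into one unvoweled surface form once their diacritics are stripped, which is how the words are usually typed. The toy “blocklist” is purely illustrative; it is not a reconstruction of any platform’s moderation pipeline.

```python
# Illustration of Arabic homographs: distinct words share one unvoweled written form.
# The readings are standard dictionary senses of علم; the "blocklist" is a toy
# stand-in for keyword-based moderation, not any platform's real system.
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove Arabic diacritics (harakat), mimicking how the words are usually typed."""
    return "".join(ch for ch in unicodedata.normalize("NFD", text)
                   if unicodedata.category(ch) != "Mn")

readings = {
    "عَلَم": "flag",
    "عِلْم": "knowledge",
    "عَلَّمَ": "he taught",
    "عَلِمَ": "he knew",
}

# All four distinct words collapse to the same unvoweled surface form.
print({strip_diacritics(word) for word in readings})  # {'علم'}

# A keyword list keyed on unvoweled strings cannot tell the senses apart:
blocklist = {"علم"}  # imagine one reading were deemed violating
for word, meaning in readings.items():
    print(f"{word} ({meaning}): flagged={strip_diacritics(word) in blocklist}")
```

Every entry prints as flagged, even though only one hypothetical reading was targeted. A system without the surrounding context, or without staff and training data attuned to it, has no reliable way to choose among the senses.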

In addition, many dialects of Arabic contain slang or colloquial expressions that, as in many languages, use violent or weaponized language for levity or to convey feeling, much like the English expression to “bomb” a test, meaning to do poorly on it. Egyptian Arabic alone contains at least a few of these expressions, including the colloquial phrase “تدي هدا بوبوا” (literally, to “give someone a bomb”), meaning to mess something up for someone or make a mistake. Similarly, a member of civil society noted during our interviews that a Saudi Arabic-speaking user had a Twitter post about a goal in a soccer match removed, likely because the colloquial word for a goal in his dialect roughly translates to “missile.” Such expressions are extremely common in Arabic, as they are in many languages. Several interviewees spoke of translation mishaps like these, which mean that, in a region already hyperaware of the potential for violence or physical threats, Arabic-speaking social media users can feel required to police their online speech at all times. These anecdotes speak to the limitations of both automated content moderation tools and human content moderators in understanding the nuances and regional specificities of human speech. In a world where both Muslims and people of Middle Eastern descent are highly likely to be profiled or surveilled in public spaces as a potential threat, this reality reinforces existing racialized misconceptions and compounds existing inequalities.
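As a concrete illustration of this kind of false positive, the toy sketch below flags a celebratory soccer post because it contains the word صاروخ (“missile” or “rocket”), which is also common slang for a powerful shot or goal. The blocklist and post are hypothetical and do not reproduce any platform’s actual rules.

```python
# Toy illustration of a context-free keyword match producing a false positive.
# The blocklist and example post are hypothetical, not any platform's real rules.

blocklist = {"صاروخ"}  # "missile"/"rocket", also common slang for a powerful shot or goal

post = "يا له من صاروخ في الدقيقة 90!"  # "What a rocket in the 90th minute!" (about a goal)

flagged = any(term in post for term in blocklist)
print(flagged)  # True: the sports context is invisible to a bare keyword match
```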

Government influence

Many civil society organizations with which we spoke detailed how their content and work were routinely subject to disproportionately detrimental treatment on social media platforms, particularly where such content or speech intersected with political unrest or contested government authority. A Syrian journalist shared that his Facebook account — and those of many other Syrian journalists and activists opposed to Bashar al-Assad’s government — had been repeatedly deleted or deactivated without any formal explanation from the company and with very little means of recourse. Appeals processes for decisions like these are not always available in languages like Arabic, and the smaller number of Arabic-proficient staff means that such processes tend to move more slowly and be handled less well. The same journalist explained how he had attempted to flag and report the accounts of Syrian regime-affiliated journalists, who sometimes posted and kept online graphic and violent images of slaughtered Syrian civilians. These posts were in clear violation of Facebook’s Community Standards, but were allowed to stay up.

Evading moderation

Many interviewees discussed how long-standing patterns of unexplained deletion of content have shaped how Middle Eastern users, particularly journalists and activists, share and engage with information on social media platforms. A Palestinian journalist explained that it is well established among Palestinians that writing certain words on Facebook in Arabic, including “protest,” “occupation,” or “Zionism,” is likely to trigger an automatic takedown. Writers and activists, then, have learned to use such terms in coded ways that automated tools are less likely to recognize. Other scholars and researchers, many of whom engage directly with companies like Facebook and Twitter in mitigating instances of potential discrimination online, confirmed that these patterns of poorly managed content moderation exist. They said that workarounds are common, including the creation of multiple accounts and writing with special characters or replacement words to avoid moderation and deletion.
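The mechanics of those workarounds are simple to illustrate. In the sketch below, a naive exact-match filter, used only as a stand-in for automated moderation and not as a reconstruction of any platform’s system, misses a post once an invisible zero-width character is inserted into a keyword; stripping such characters before matching closes that particular gap, which is why evasion and detection tend to escalate in tandem.

```python
# Toy illustration of the evasion tactic described above: an exact-match keyword
# filter misses text once an invisible character is inserted into the keyword.
# The blocklist is hypothetical, not drawn from any platform's real rules.

ZWNJ = "\u200c"  # zero-width non-joiner: invisible when the text is rendered

blocklist = {"احتلال"}  # Arabic for "occupation", one of the terms interviewees cited

def naive_filter(post: str) -> bool:
    """Return True if any blocklisted keyword appears verbatim in the post."""
    return any(term in post for term in blocklist)

original = "ضد الاحتلال"              # "against the occupation"
evaded = "ضد الاحت" + ZWNJ + "لال"    # same text with an invisible character inserted

print(naive_filter(original))  # True  -> flagged
print(naive_filter(evaded))    # False -> slips past the exact-match check

# Stripping zero-width characters before matching restores the match.
print(naive_filter(evaded.replace(ZWNJ, "")))  # True
```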

Advocacy challenges

When we asked our interviewees how they navigate the complex content moderation landscape, many underscored that conducting advocacy around these issues is challenging due to imbalances in how social media companies approach public policy relationships and stakeholder management in the MENA region. While some social media companies, such as Facebook and Twitter, have regional offices in the United Arab Emirates, many companies do not have such a presence. As a result, advocates in the region do not always have a clear line of communication through which they can raise concerns with companies, including concerns shared by users, and solicit information. One advocate noted the vast differences in whether and how companies engage with MENA-based stakeholders. This often leaves advocates unsure of how to adequately document cases of content moderation errors or censorship, or how to establish relationships with users subject to enforcement actions in a manner that can result in tangible change. Finally, some interviewees raised concerns about how geopolitical power imbalances in the MENA region have influenced company outreach and public policy efforts in a manner that skews toward certain governments and their online agendas.

A way forward

In our final blog of this series, we will discuss potential policy, transparency, and design solutions that internet platforms can incorporate to address many of the issues outlined in this series.


In this series, published jointly with New America's Open Technology Institute, we examine how content moderation and social media policies and practices intersect with regional issues in the Middle East, and how these linkages can influence security, civil liberties, and human rights across the region and beyond. 

Eliza Campbell is the director of MEI's Cyber Program. 

Spandana Singh is a policy analyst with New America's Open Technology Institute, a Fellow at, and the Vice President of, the Internet Law & Policy Foundry, and a Non-Resident Fellow at the Esya Centre in New Delhi. The views expressed in this piece are their own.

Photo by Rasit Aydogan/Anadolu Agency via Getty Images


[1] Unless otherwise specified, all interviewees spoke to us on condition of anonymity.


The Middle East Institute (MEI) is an independent, non-partisan, not-for-profit, educational organization. It does not engage in advocacy and its scholars’ opinions are their own. MEI welcomes financial donations, but retains sole editorial control over its work and its publications reflect only the authors’ views. For a listing of MEI donors, please click here.