In social media, algospeak is a form of self-censorship in which users adopt coded expressions to evade automated content moderation. [1] [2] It is used to discuss topics that moderation algorithms treat as sensitive while avoiding penalties such as shadow banning, downranking, or demonetization of content. [3] Algospeak differs from other types of netspeak in that its primary purpose is to evade censorship rather than to create a communal identity, though it may still serve that end. [3] Algospeak has been identified as a driver of linguistic change in the modern era, even influencing communication outside the Internet, with words such as unalive encroaching on real-life speech.
The term algospeak, a blend of algorithm and -speak, [4] appears to date back to 2021, though related ideas have existed for much longer; for example, voldemorting, [5] referencing the fictional character also known as "He-Who-Must-Not-Be-Named", refers to the use of coded expressions to avoid giving attention to objectionable figures or platforms and to avoid drawing algorithmic attention from unwanted audiences. [6] [7] [8] The term algospeak gained wider recognition in 2022 after Taylor Lorenz featured it in an article for The Washington Post. [9] [page needed] In 2025, Adam Aleksic published Algospeak, the first monograph dedicated to the phenomenon; it proposes an expanded definition encompassing any language change primarily driven by the constraints of digital platforms. [10] [11] [12]
Many social media platforms rely on automated content moderation systems to enforce their guidelines, which are often not determined by users themselves. [1] TikTok in particular uses artificial intelligence to proactively moderate content, in addition to responding to user reports and employing human moderators. In colloquial usage, such AI systems are called "algorithms" or "bots". TikTok has faced criticism for its unequal enforcement on topics such as LGBTQ people and obesity, which has contributed to a perception that social media moderation is contradictory and inconsistent. [3]
Between July and September 2024, TikTok reported removing 150 million videos, 120 million of which were flagged by automated systems. [13] Automated moderation may miss important context; for example, benign communities that support people struggling with self-harm, suicidal thoughts, or past sexual violence may inadvertently receive unwarranted penalties. [14] [15] [3] TikTok users have used algospeak to discuss self-harm and to provide support to those affected by it. [16] In an interview study, nineteen TikTok creators said they felt TikTok's moderation lacked contextual understanding, appeared random, was often inaccurate, and exhibited bias against marginalized communities. [3]
Algospeak is also used in communities promoting harmful behaviors. Anti-vaccination Facebook groups began renaming themselves "dance party" or "dinner party" to avoid being flagged for misinformation, and communities that encourage the eating disorder anorexia nervosa have likewise employed algospeak. [17] Euphemisms such as "cheese pizza" and "touch the ceiling" are used to promote child sexual abuse material (CSAM). [18]
On TikTok, moderation decisions can result in consequences such as account bans and the deletion or delisting of videos from the main video discovery page, known as the "For You" page. In response to such concerns, a TikTok spokeswoman told The New York Times that users' fears are misplaced, noting that many popular videos discuss sex-adjacent topics. [19]
Algospeak uses techniques akin to those of Aesopian language to conceal the intended meaning from automated content filters while remaining understandable to human readers. [2] Similar forms of obfuscated speech include Cockney rhyming slang and Polari, historically used by London gangs and British gay men respectively. [14] Unlike these earlier forms, however, the global reach of social media has allowed algospeak to spread beyond local settings. [2]
Techniques used in algospeak are extremely diverse. Users may draw from leetspeak, in which letters are replaced with lookalike characters (e.g., $3X for sex). [2] Certain words may be partially censored, or, in the case of auditory media, cut off or bleeped, [a] e.g., s*icide instead of suicide. Another technique, "pseudo-substitution", censors an item in one form while it remains present in another form at the same time, as occurs in videos. [20] Some techniques involve intersemiotic translation, in which non-linguistic signs are interpreted linguistically, in addition to further obfuscation; for example, the corn emoji "🌽" signifies pornography by way of porn → corn → 🌽. [2] Others rely on phonological similarity or variation, such as homophobic → hydrophobic, and sexy → seggsy via intervocalic voicing. [21]
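The lookalike-character substitution described above can be illustrated with a minimal sketch. The following Python example is purely illustrative: the blocked-word list, the substitution table, and the filter logic are hypothetical stand-ins, not drawn from any real platform's moderation system. It shows how an exact-match keyword filter fails once characters are swapped:

```python
# Illustrative sketch: how lookalike-character substitution ("$3X" for "sex")
# can slip past a naive exact-match keyword filter.
# The word list and substitution table are hypothetical examples.

BLOCKED_WORDS = {"sex", "suicide"}                      # hypothetical filter list
LOOKALIKES = {"s": "$", "e": "3", "i": "1", "o": "0"}   # leetspeak-style map

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocked word verbatim."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKED_WORDS for word in words)

def to_algospeak(word: str) -> str:
    """Replace letters with lookalike characters, as in leetspeak."""
    return "".join(LOOKALIKES.get(ch, ch) for ch in word)

print(to_algospeak("sex"))                                      # "$3x"
print(naive_filter("let's talk about sex"))                     # True  -> flagged
print(naive_filter(f"let's talk about {to_algospeak('sex')}"))  # False -> evades filter
```

The substituted token no longer matches any entry in the filter's word list, while a human reader still recovers the intended word from its visual shape.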
In an interview study, most of the creators interviewed suspected that TikTok's automated moderation also scanned audio, leading them to use algospeak terms in speech as well. Some also label sensitive images with innocuous captions using algospeak, such as captioning a scantily dressed body as "fake body". [3] The use of gestures and emojis is common in algospeak, showing that it is not limited to written communication. [22] A notable example is the use of the watermelon emoji on social media as a pro-Palestinian symbol in place of the Palestinian flag, in order to avoid censorship by Facebook and Instagram. [23]
A 2022 poll showed that nearly a third of American social media users reported using "emojis or alternative phrases" to subvert content moderation. [18]
Algospeak can lead to misunderstandings. In one high-profile incident, American actress Julia Fox made a seemingly unsympathetic comment on a TikTok post mentioning "mascara", unaware of its obfuscated meaning of sexual assault; Fox later apologized for her comment. [14] [24] In an interview study, creators shared that the evolving nature of content moderation pressures them to constantly innovate their use of algospeak, which makes them feel less authentic. [22]
A 2024 study showed that GPT-4, a large language model, can often identify and decipher algospeak, especially when given example sentences. [25] Another study found that sentiment analysis models often rate negative comments incorporating simple letter–number substitution and extraneous hyphenation more positively. [26]
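The failure mode reported in the second study can be sketched with a simple lexicon-based scorer. This Python example is a hypothetical illustration, not the model evaluated in the cited study: once letters are swapped for numbers or a hyphen is inserted, the negative tokens no longer match the lexicon, so the score drifts toward neutral:

```python
# Illustrative sketch of why lexicon-based sentiment scoring can misrate
# obfuscated text: altered tokens no longer match the lexicon.
# The lexicon and scoring rule are hypothetical, not from the cited study.

NEGATIVE = {"hate", "awful", "terrible"}
POSITIVE = {"love", "great", "nice"}

def lexicon_sentiment(text: str) -> int:
    """Score = (# positive words) - (# negative words); 0 reads as neutral."""
    score = 0
    for word in text.lower().split():
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

print(lexicon_sentiment("i hate this awful product"))   # -2 (clearly negative)
print(lexicon_sentiment("i h4te this aw-ful product"))  #  0 (misread as neutral)
```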
According to The New York Times: [19]
Other examples: [29] [3] [30] [31]