Shadow banning

Shadow banning, also known as stealth banning, hell banning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.
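
The mechanism can be illustrated with a minimal sketch (the names and data structures below are hypothetical and not taken from any real platform): when a list of comments is rendered, the server filters it differently depending on who is viewing, so a shadow-banned author still sees their own posts while other users do not.

```python
# Minimal illustrative sketch of viewer-dependent comment visibility.
# All names here are hypothetical; real platforms implement this logic
# inside much larger moderation and ranking pipelines.

from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str


shadow_banned = {"spam_account_42"}   # accounts whose posts are hidden from others

comments = [
    Comment("alice", "Great article!"),
    Comment("spam_account_42", "Buy cheap widgets here"),
]


def visible_comments(viewer: str) -> list[Comment]:
    """Return the comments a given viewer is allowed to see.

    A shadow-banned author still sees their own comments, so the ban is
    not apparent to them; every other viewer receives a filtered list.
    """
    return [
        c for c in comments
        if c.author not in shadow_banned or c.author == viewer
    ]


# The banned account sees both comments; everyone else sees only Alice's.
assert len(visible_comments("spam_account_42")) == 2
assert len(visible_comments("bob")) == 1
```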

The phrase "shadow banning" has a colloquial history and has undergone some evolution of usage. It originally applied to a deceptive sort of account suspension on web forums, where a person would appear to be able to post while actually having all of their content hidden from other users. In 2022, the term has come to apply to alternative measures, particularly visibility measures like delisting and downranking. [1]

By partially concealing a user's contributions, or making them invisible or less prominent to other members of the service, the hope is that, in the absence of reactions to their comments, the problematic or otherwise out-of-favour user will become bored or frustrated and leave the site, and that spammers and trolls will be discouraged [2] from continuing their unwanted behavior or creating new accounts. [3] [4]

History

In the mid-1980s, bulletin board systems, including those running the Citadel BBS software, had a "twit bit" for problematic users [3] [5] which, when enabled, limited the user's access while still allowing them to read public discussions; however, any messages posted by the "twit" were not visible to other members of the group. [3] [6]

The term "shadow ban" is believed to have originated with moderators on the website Something Awful in 2001, although the feature was only used briefly and sparsely. [3]

Michael Pryor of Fog Creek Software described stealth banning for online forums in 2006, explaining that such a system was in place in the project management system FogBugz "to solve the problem of how do you get the person to go away and leave you alone". As well as preventing problem users from engaging in flame wars, the system also discouraged spammers, who, if they returned to the site, would be under the false impression that their spam was still in place. [4] The Verge described it as "one of the oldest moderation tricks in the book", noting that early versions of vBulletin had a global ignore list known as "Tachy goes to Coventry", [7] after the British expression "to send someone to Coventry", meaning to ignore someone and pretend they do not exist.

A 2012 update to Hacker News introduced a system of "hellbanning" for spamming and abusive behavior. [8] [9]

Early on, Reddit implemented (and continues to practice) [10] shadow banning, purportedly to address spam accounts. [11] In 2015, Reddit added an account suspension feature that was said to have replaced its sitewide shadow bans, though moderators can still shadow ban users from their individual subreddits via their AutoModerator configuration [12] as well as manually. In 2019, a Reddit user discovered they had been accidentally shadow banned for a year; after they contacted support, their comments were restored. [13]

A study of tweets written in a one-year period during 2014 and 2015 found that over a quarter million tweets had been censored in Turkey via shadow banning. [14] Twitter was also found, in 2015, to have shadowbanned tweets containing leaked documents in the US. [15] [16]

Craigslist has also been known to "ghost" a user's individual ads, whereby the poster gets a confirmation email and may view the ad in their account, but the ad does not appear on the appropriate category page. [17]

WeChat was found in 2016 to have banned, without any notification to the user, posts and messages that contain various combinations of at least 174 keywords, including "习包子" (Xi Bao Zi), "六四天安门" (June 4 Tiananmen), "藏青会" (Tibetan Youth Congress), and "ئاللاھ يولىدا" (in the way of Allah). [18] [19] Messages containing these keywords would appear to have been sent successfully but would not be visible on the receiving end.

In 2017, the phenomenon was noticed on Instagram, with posts which included specific hashtags not showing up when those hashtags were used in searches. [20] [21] [22]

In December 2023, Human Rights Watch echoed the complaints of many Instagram and Facebook users who alleged a drastic reduction in visits to their posts and profiles, without prior notification from Meta, when the content they posted was about Palestine or the Gaza genocide. [23] [24] The Markup's investigation confirmed that posts with war-related imagery or pro-Palestine hashtags were demoted, and that hashtags like "#Palestine" or "#AlAqsa" were suppressed from the "Top Posts" section. [24] Meta responded by claiming that this was due to a bug on the platform, a response which drew criticism about possible biases in the algorithm. [25]

Drawbacks

Because shadow bans are mostly imposed by automated algorithms without initial human intervention, [26] and because the conditions for imposing them can be quite complex, there is always a percentage of false positives: users who are shadow banned despite having done nothing wrong.

Because the shadow ban happens without the user being informed, an incorrectly banned user has no opportunity to contact the platform and have it reversed unless they discover the ban through their own means.

Shadow bans are also problematic when a user has in fact broken a rule, but did so unintentionally, without malicious intent, and without harming the community. For example, a user might post a comment containing a URL to a legitimate, non-spam source. Instead of informing the user that URLs are not allowed and blocking the submission, the algorithm patrolling comments might allow the comment to be posted but hide it from everyone except the original poster.[ citation needed ]

Had the user been informed of the rule beforehand, they would have been able to write a compliant comment and avoid the shadow ban.
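
The contrast between a transparent rejection and the silent hiding described above can be shown with a short, hypothetical sketch (the rule, function names and messages are illustrative only, not any platform's actual code):

```python
# Hypothetical sketch contrasting a transparent rule with a shadow-style rule.
import re

URL_PATTERN = re.compile(r"https?://\S+")


def moderate_transparently(text: str) -> tuple[bool, str]:
    """Reject comments containing URLs and tell the author why."""
    if URL_PATTERN.search(text):
        return False, "Links are not allowed; please remove the URL and repost."
    return True, "Comment published."


def moderate_with_shadow_hide(text: str) -> tuple[bool, bool]:
    """Accept the comment but hide it from everyone except its author.

    The author gets no feedback, so a well-meaning user who posted a
    legitimate link never learns that nobody else can see the comment.
    """
    published = True
    hidden_from_others = bool(URL_PATTERN.search(text))
    return published, hidden_from_others


comment = "See the full report at https://example.org/report"
print(moderate_transparently(comment))     # (False, 'Links are not allowed; ...')
print(moderate_with_shadow_hide(comment))  # (True, True)  -- silently hidden
```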

A wrongful ban is undesirable regardless of the reason, and it erodes trust in a way that discourages the user from engaging further with the platform. This is a net loss for the platform when the shadow-banned user was a good contributor.[ citation needed ]

Legality

Although shadow banning can be an effective moderation tool, it can also have legal implications. If a platform that implements shadow banning does not mention the practice in its terms and conditions, it may effectively be denying a service for no disclosed reason and could therefore be in breach of contract.

In the European Union, Article 17 of the Digital Services Act (DSA) [27] directly addresses moderation practices and service restrictions, requiring platforms to disclose the reasons for such restrictions:

Providers of hosting services shall provide a clear and specific statement of reasons to any affected recipients of the service for any of the following restrictions imposed on the ground that the information provided by the recipient of the service is illegal content or incompatible with their terms and conditions.

In 2024, a Dutch Twitter user sued the platform for breach of contract under the DSA, using the European small claims procedure in the Amsterdam District Court, and won the case. [28] [29] The plaintiff argued that, under Article 17 of the DSA, Twitter had failed to proactively notify him and provide a "clear and specific statement of reasons" for the demotion of his account, as the article requires. [30] In its defence, Twitter pointed to clauses in its terms and conditions allowing it to modify access to functionalities and other obligations at any time, but the court deemed these clauses non-binding under the Unfair Terms in Consumer Contracts Directive and dismissed the defence. [29]

Another legal implication is a perceived violation of freedom of speech, depending on how this principle is codified in regulations around the world. In the European Union, the DSA effectively bans shadow banning, because Article 17 requires platforms to always disclose the reasons for a ban or restriction; in practice, however, this is rarely enforced. [31] [32] [33] Conversely, the First Amendment to the United States Constitution does not protect users' freedom of speech from shadow banning, because it applies only to interference by the American government, not to third-party private entities such as social networks. [34] [35]

Controversies

Political controversies

"Shadow banning" became popularized in 2018 as a conspiracy theory when Twitter shadow-banned some Republicans. [36] In late July 2018, Vice News found that several supporters of the US Republican Party no longer appeared in the auto-populated drop-down search menu on Twitter, thus limiting their visibility when being searched for; Vice News alleged that this was a case of shadow-banning. [36] [37] After the story, some conservatives accused Twitter of enacting a shadowban on Republican accounts, a claim which Twitter denied. [38] However, some accounts that were not overtly political or conservative apparently had the same algorithm applied to them. [39] Numerous news outlets, including The New York Times , The Guardian , Buzzfeed News , Engadget and New York magazine, disputed the Vice News story. [38] [40] [41] [42] [43] [44] In a blog post, Twitter said that the use of the phrase "shadow banning" was inaccurate, as the tweets were still visible by navigating to the home page of the relevant account. [45] In the blog post, Twitter claims it does not shadow ban by using "the old, narrow, and classical" definition of shadow banning. [46] Later, Twitter appeared to have adjusted its platform to no longer limit the visibility of some accounts. [47] In a research study that examined more than 2.5 million Twitter profiles, it was discovered that almost one in 40 had been shadowbanned by having their replies hidden or having their handles hidden in searches. [48] [49]

During the 2020 Twitter account hijackings, hackers obtained access to Twitter's internal moderation tools through social engineering and by bribing a Twitter employee. [50] Images were subsequently leaked of an internal account summary page, which revealed user "flags" set by the system and confirmed the existence of shadow bans on Twitter. Accounts were flagged with terms such as "Trends Blacklisted" and "Search Blacklisted", indicating that the user could not appear in public trends or public search results. After the incident was dealt with, Twitter faced accusations of censorship, with claims that it was trying to hide the existence of shadow bans by removing tweets containing images of the internal staff tools. Twitter said the tweets were removed because they revealed sensitive user information. [51]

On December 8, 2022, the second thread of the Twitter Files—a series of Twitter threads based on internal Twitter, Inc. documents shared by owner Elon Musk with independent journalists Matt Taibbi and Bari Weiss—addressed a practice referred to by previous Twitter management as "visibility filtering". The functionality included tools for tagging accounts as "Do not amplify" and for placing them on "blacklists" that reduced their prominence in search results and trending topics. It was also revealed that certain conservative accounts, such as the far-right Libs of TikTok, had been flagged with a notice stating that decisions regarding them should only be made by Twitter's Site Integrity Policy, Policy Escalation Support (SIP–PES) team, which consists primarily of high-ranking officials. Musk and other critics cited these functions as examples of "shadow banning". [52] [53] [54]

Conspiracy theories

A form of conspiracy theory has become popular in which a social media content creator suggests that their content has been intentionally suppressed by a platform which claims not to engage in shadow banning. [55] Platforms frequently targeted by these accusations include Twitter, [3] Facebook, [41] YouTube and Instagram. [55]

To explain why users may come to believe they are subject to "shadow bans" even when they are not, Elaine Moore of the Financial Times writes: [55]

Like Uber drivers and Deliveroo couriers, social media influencers are at the mercy of algorithms. This makes them perfect fodder for conspiracy theories. It also makes sense that influencers would be baffled by any sudden decrease in engagement and spooked by changes that might jeopardise the brand deals they sign. Instead of believing that their own popularity is waning, some cling to the idea that shadowbans are a disciplinary measure that is used against creators who do not warrant an outright ban from a platform.

References

  1. Leerssen, Paddy. "An End to Shadow Banning? Transparency rights in the Digital Services Act between content moderation and curation". osf.io: 3. doi:10.31219/osf.io/7jg45 . Retrieved 2022-12-11.
  2. Thompson, Clive (29 March 2009). "Clive Thompson on the Taming of Comment Trolls". Wired magazine. Archived from the original on 2015-08-05. Retrieved 6 July 2014.
  3. Cole, Samantha (31 July 2018). "Where Did the Concept of 'Shadow Banning' Come From?". Motherboard. Retrieved 1 August 2018.
  4. Walsh, Robert (12 January 2006). Micro-ISV: From Vision to Reality. Apress. p. 183. ISBN 978-1-4302-0114-4. So one of the things we did in FogBugz to solve the problem of how do you get the person to go away and leave you alone is, well, you take their post and make it invisible to everyone else, but they still see it. They won't know they've been deleted. There's no one fanning their flame. You can't get into a flame war if no one responds to your criticism. So they get silenced and eventually just go away. We have several ways of telling if they come back, and it's been proven to be extremely, extremely effective. Say a spammer posts to your board and then they come back to check if it's still there, and they see it—to them it's still there—but no one else sees it, so they're not bothered by it.
  5. Atwood, Jeff (4 June 2011). "Suspension, Ban or Hellban?". Coding Horror blog. Archived from the original on 30 December 2011. Retrieved 17 December 2011.
  6. "Manual installation of Citadel using source code and the command line client - Citadel.org". www.citadel.org. Retrieved 20 December 2020.
  7. Bohn, Dieter (16 February 2017). "One of Twitter's new anti-abuse measures is the oldest trick in the forum moderation book". The Verge. Retrieved 17 February 2017.
  8. Leena Rao (18 May 2013). "The Evolution of Hacker News". TechCrunch. Retrieved 10 August 2014.
  9. "Can the democratic power of a platform like Hacker News be applied to products?". Pando. 4 December 2013. Archived from the original on 4 Dec 2018. Retrieved 1 August 2018.
  10. Shah, Saqib (12 May 2023). "What is a 'shadow ban'? Lizzo claims TikTok is shutting her videos out of its algorithm". The Standard. Retrieved 8 February 2025.
  11. krispykrackers. "On shadowbans". r/self, Reddit.
  12. Shu, Catherine (November 11, 2015). "Reddit Replaces Its Confusing Shadowban System With Account Suspensions". TechCrunch. Retrieved 16 September 2017.
  13. Gerken, Tom (2019-04-11). "The Redditor who accidentally spent a year talking to himself". BBC News. Retrieved 2023-03-13.
  14. Tanash, Rima S.; Chen, Zhouhan; Thakur, Tanmay; Wallach, Dan S.; Subramanian, Devika (1 January 2015). "Known Unknowns". Proceedings of the 14th ACM Workshop on Privacy in the Electronic Society. WPES '15. New York, NY, USA: ACM. pp. 11–20. doi:10.1145/2808138.2808147. ISBN 9781450338202. S2CID 207229086.
  15. Ohlheiser, Abby (30 October 2015). "Tweets are disappearing on Twitter. Why?". The Washington Post. ISSN 0190-8286. Retrieved 29 April 2017.
  16. Pearson, Jordan (October 19, 2015). "Is Twitter Censoring a Blockbuster Report on US Drone Assassinations?". Motherboard. Retrieved 26 July 2017.
  17. Weedmark, David. "How to Prevent Ghost Posting on Craigslist". Chron.com.
  18. Ruan, Lotus; Knockel, Jeffrey; Ng, Jason Q.; Crete-Nishihata, Masashi (30 November 2016). "One App, Two Systems: How WeChat uses one censorship policy in China and another internationally". The Citizen Lab. Retrieved 29 April 2017.
  19. Doctorow, Cory (December 2, 2016). "China's We Chat "shadow-bans" messages with forbidden keywords, but only for China-based accounts". Boing Boing. Retrieved 29 April 2017.
  20. Lorenz, Taylor (7 June 2017). "Instagram's "shadowban," explained: How to tell if Instagram is secretly blacklisting your posts". Mic Network Inc. Retrieved 4 November 2017.
  21. Wong, Kristin (April 23, 2017). "How to See If Your Instagram Posts Have Been Shadowbanned". Lifehacker. Retrieved 4 November 2017.
  22. "Photographers Claim Instagram is 'Shadow Banning' Their Accounts". PetaPixel. 28 March 2017. Retrieved 26 April 2017.
  23. Younes, Rasha (2023-12-21). "Meta's Broken Promises". Human Rights Watch.
  24. Apodaca, Tomas; Uzcátegui-Liggett, Natasha (2024-02-25). "How We Investigated Shadowbanning on Instagram – The Markup". themarkup.org. Retrieved 2025-08-21.
  25. "Meta responds to allegations of Instagram shadow-ban for pro-Palestine content". Business & Human Rights Resource Centre. Retrieved 2025-08-21.
  26. Tarleton Gillespie (August 19, 2022). "Do Not Recommend? Reduction as a Form of Content Moderation". Social Media + Society. 8 (3) 20563051221117552. Sage. doi: 10.1177/20563051221117552 . Major platforms [...] have added reduction to their content moderation techniques. They use machine learning classifiers to identify content that is misleading enough, risky enough, and problematic enough to warrant reducing its visibility
  27. "Article 17, Digital Services Act". EUR-Lex, European Union. October 19, 2022.
  28. ECLI:NL:RBAMS:2024:3980 (Amsterdam District Court, 5 July 2024).
  29. "The DSA's first shadow banning case". Digital Services Act (DSA) Observatory. August 6, 2024.
  30. Leerssen, Paddy (April 2023). "An end to shadow banning? Transparency rights in the Digital Services Act between content moderation and curation". Computer Law & Security Review. 48: 105790. Elsevier. doi:10.1016/j.clsr.2023.105790.
  31. "Shadow Banning, Content Moderation, Competition Law, and Free Speech: Navigating the Crossroads". Euro Prospects. December 30, 2024.
  32. Polański, Jan (April 14, 2023). "Antitrust shrugged? Boycotts, content moderation, and free speech cartels". European Competition Journal. 19 (2). Taylor & Francis: 334–358. doi:10.1080/17441056.2023.2200612.
  33. Chen, Yen-Shao; Zaman, Tauhid (March 27, 2024). "Shaping opinions in social networks with shadow banning". PLOS ONE. 19 (3): e0299977. arXiv:2310.20192. Bibcode:2024PLoSO..1999977C. doi:10.1371/journal.pone.0299977. PMC 10971755. PMID 38536798.
  34. Michael Miller (November 24, 2024). "Shadow banning: Are social networks suppressing political content?". University of Cincinnati. Blevins noted that the First Amendment protects speech from government intrusion, not a third-party company's actions.
  35. Ari Cohn (February 21, 2025). "The FTC is overstepping its authority — and threatening free speech online". Foundation for Individual Rights and Expression.
  36. Romano, Aja (6 September 2018). "How hysteria over Twitter shadow-banning led to a bizarre congressional hearing". Vox.
  37. Thompson, Alex (26 July 2018). "Twitter appears to have fixed "shadow ban" of prominent Republicans like the RNC chair and Trump Jr.'s spokesman". Vice News. Retrieved 15 August 2018.
  38. Stack, Liam (26 July 2018). "What Is a 'Shadow Ban,' and Is Twitter Doing It to Republican Accounts?". The New York Times.
  39. Eordogh, Fruzsina (31 July 2018). "Why Republicans Weren't The Only Ones Shadow Banned On Twitter".
  40. Warzel, Charlie (July 26, 2018). "Twitter Isn't Shadow-Banning Republicans. Here's Why". BuzzFeed News. Retrieved 13 February 2019.
  41. Feldman, Brian (25 July 2018). "Twitter Is Not 'Shadow Banning' Republicans". Intelligencer. Retrieved 13 February 2019.
  42. Wilson, Jason (27 July 2018). "What is 'shadow banning', and why did Trump tweet about it?". The Guardian. ISSN   0261-3077 . Retrieved 13 February 2019.
  43. Swapna Krishna (26 July 2018). "Twitter says supposed 'shadow ban' of prominent Republicans is a bug". Engadget. Retrieved 13 February 2019.
  44. Laura Hazard Owen (27 July 2018). "Twitter's not "shadow banning" Republicans, but get ready to hear that it is". Nieman Lab. Retrieved 13 February 2019.
  45. "Setting the record straight on shadow banning". Twitter. Retrieved 8 September 2018.
  46. "Shedding Light on Shadowbanning". Center for Democracy and Technology. 26 April 2022. p. 37. Retrieved 2022-12-11.
  47. Alex Thompson (26 July 2018). "Twitter appears to have fixed search problems that lowered visibility of GOP lawmakers". Vice. Retrieved 28 March 2019.
  48. Le Merrer, Erwan; Morgan, Benoit; Trédan, Gilles (2020-12-09). "Setting the Record Straighter on Shadow Banning". arXiv: 2012.05101 [cs.SI].
  49. Nicholas, Gabriel (2022-04-28). "Shadowbanning Is Big Tech's Big Problem". The Atlantic. Retrieved 2022-12-09.
  50. Cox, Joseph (15 July 2020). "Hackers Convinced Twitter Employee to Help Them Hijack Accounts". Vice.com. Vice. Retrieved 21 January 2024.
  51. Popper, Nathaniel; Conger, Kate (17 July 2020). "Hackers Tell the Story of the Twitter Attack From the Inside". The New York Times. Retrieved 21 January 2024.
  52. Shapero, Julia (8 December 2022). "Former NYT columnist Bari Weiss releases 'Twitter Files Part Two'". The Hill . Retrieved 9 December 2022.
  53. Urquhart, Evan (2022-12-09). "The Anti-Trans Hate Account That Bari Weiss Says Is Yet Another Right-Wing Voice Censored by Twitter". Slate Magazine. Retrieved 2022-12-09.
  54. "The Twitter Files, Part Two, Explained". Gizmodo. 2022-12-09. Retrieved 2022-12-09.
  55. Moore, Elaine (13 March 2022). "The truth about 'shadowbanning' is more complicated than influencers think". Financial Times. Retrieved 12 June 2022.