On websites that allow users to create content, content moderation is the process of detecting contributions that are irrelevant, obscene, illegal, harmful, or insulting, in contrast to useful or informative contributions, frequently for censorship or suppression of opposing viewpoints. The purpose of content moderation is to remove or apply a warning label to problematic content or allow users to block and filter content themselves. [1]
Various types of Internet sites permit user-generated content such as posts, comments, and videos, including Internet forums, blogs, and news sites powered by scripts such as phpBB, wiki software, or PHP-Nuke. Depending on the site's content and intended audience, the site's administrators will decide what kinds of user comments are appropriate, then delegate the responsibility of sifting through comments to lesser moderators. Most often, they will attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site.
Major platforms use a combination of algorithmic tools, user reporting and human review. [1] Social media sites may also employ content moderators to manually inspect or remove content flagged for hate speech or other objectionable material. Other content issues include revenge porn, graphic content, child abuse material and propaganda. [1] Some websites must also make their content hospitable to advertisements. [1]
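As a rough illustration of how such a hybrid pipeline can be wired together, the sketch below combines an automated score with user reports to decide whether a post is removed outright, queued for human review, or left up. It is a minimal sketch only: the class names, thresholds, and toy classifier are hypothetical and do not reflect any platform's actual system.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical thresholds: scores at or above AUTO_REMOVE_THRESHOLD are deleted
# automatically, scores at or above HUMAN_REVIEW_THRESHOLD (or anything a user
# has reported) go to a human review queue, and everything else stays up.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

@dataclass
class Post:
    post_id: int
    text: str
    user_reports: int = 0          # reports filed by other users
    label: str = "allowed"         # "allowed", "queued", or "removed"

def classifier_score(post: Post) -> float:
    """Stand-in for an automated model that scores how likely a post is to
    violate policy (0.0 = benign, 1.0 = clear violation)."""
    flagged_terms = {"spam", "abuse"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def triage(post: Post, review_queue: List[Post]) -> None:
    """Combine the automated score with user reports to pick an outcome."""
    score = classifier_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        post.label = "removed"                 # algorithmic removal
    elif score >= HUMAN_REVIEW_THRESHOLD or post.user_reports > 0:
        post.label = "queued"                  # escalate to human review
        review_queue.append(post)
    # otherwise the post stays "allowed"

queue: List[Post] = []
triage(Post(1, "Great article, thanks!"), queue)
triage(Post(2, "buy now, total spam link", user_reports=3), queue)
print([(p.post_id, p.label) for p in queue])   # posts awaiting human review
```

In practice the automated stage is a trained classifier rather than a keyword check, but the triage structure, automatic action at high confidence and human review for borderline or reported items, follows the combination of tools described above.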
In the United States, content moderation is governed by Section 230 of the Communications Decency Act, and several cases concerning the issue have reached the United States Supreme Court, such as Moody v. NetChoice, LLC.
Supervisor moderation, also known as unilateral moderation, is often seen on Internet forums. A group of people are chosen by the site's administrators (usually on a long-term basis) to act as delegates, enforcing the community rules on their behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and generally attempt to remove negative contributions throughout the community. [2]
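The privilege model described above can be sketched in a few lines. The class, method, and ban-list names below are hypothetical illustrations of the idea, not any particular forum's implementation.

```python
# Minimal sketch of supervisor-style moderation: a small set of moderators may
# delete contributions and exclude users by e-mail or IP address. All names
# here are hypothetical illustrations, not a real forum's API.

class Forum:
    def __init__(self, moderators):
        self.moderators = set(moderators)      # usernames with special privileges
        self.banned_emails = set()
        self.banned_ips = set()
        self.posts = {}                        # post_id -> text

    def delete_post(self, actor, post_id):
        if actor not in self.moderators:
            raise PermissionError("only moderators may delete contributions")
        self.posts.pop(post_id, None)

    def exclude_user(self, actor, email=None, ip=None):
        if actor not in self.moderators:
            raise PermissionError("only moderators may exclude users")
        if email:
            self.banned_emails.add(email)
        if ip:
            self.banned_ips.add(ip)

    def may_post(self, email, ip):
        # New contributions are rejected if either identifier is banned.
        return email not in self.banned_emails and ip not in self.banned_ips

forum = Forum(moderators=["alice"])
forum.posts[1] = "off-topic flame"
forum.delete_post("alice", 1)                       # moderator removes a post
forum.exclude_user("alice", ip="203.0.113.7")       # and bans the poster's IP
print(forum.may_post("new@example.org", "203.0.113.7"))   # False
```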
Commercial Content Moderation is a term coined by Sarah T. Roberts to describe the practice of "monitoring and vetting user-generated content (UGC) for social media platforms of all types, in order to ensure that the content complies with legal and regulatory exigencies, site/community guidelines, user agreements, and that it falls within norms of taste and acceptability for that site and its cultural context." [3]
The content moderation industry is estimated to be worth US$9 billion. While no official figures are available, there were an estimated 10,000 content moderators for TikTok, 15,000 for Facebook, and 1,500 for Twitter as of 2022. [4]
The global value chain of content moderation typically includes social media platforms, large multinational enterprise (MNE) firms, and content moderation suppliers. The social media platforms (e.g., Facebook, Google) are largely based in the United States, Europe, and China. The MNEs (e.g., Accenture, Foiwe) are usually headquartered in the Global North or India, while suppliers of content moderation are largely located in Global South countries such as India and the Philippines. [5]: 79–81
While at one time this work may have been done by volunteers within the online community, for commercial websites it is largely achieved by outsourcing the task to specialized companies, often in low-wage areas such as India and the Philippines. Outsourcing of content moderation jobs grew as a result of the social media boom: with the overwhelming growth of users and UGC, companies needed many more employees to moderate the content. In the late 1980s and early 1990s, tech companies began to outsource jobs to foreign countries with educated workforces willing to work for lower wages. [6]
The work involves viewing, assessing, and deleting disturbing content. [7] Wired reported in 2014 that these workers may suffer psychological damage. [8] [9] [10] [2] [11] In 2017, The Guardian reported that secondary trauma may arise, with symptoms similar to PTSD. [12] Some large companies such as Facebook offer psychological support [12] and increasingly rely on artificial intelligence to sort out the most graphic and inappropriate content, but critics claim these measures are insufficient. [13] In 2019, NPR called it a job hazard. [14] Non-disclosure agreements are the norm when content moderators are hired, which makes moderators more hesitant to speak up about working conditions or to organize. [4]
Psychological hazards, including stress and post-traumatic stress disorder, combined with the precarity of algorithmic management and low wages, make content moderation extremely challenging. [15]: 123 The number of tasks completed, for example labeling content as a copyright violation, deleting a post containing hate speech, or reviewing graphic content, is quantified for performance and quality assurance. [4]
In February 2019, an investigative report by The Verge described poor working conditions at Cognizant's office in Phoenix, Arizona. [16] Cognizant employees tasked with content moderation for Facebook developed mental health issues, including post-traumatic stress disorder, as a result of exposure to graphic violence, hate speech, and conspiracy theories in the videos they were instructed to evaluate. [16] [17] Moderators at the Phoenix office reported drug abuse, alcohol abuse, and sexual intercourse in the workplace, and feared retaliation from terminated workers who threatened to harm them. [16] [18] In response, a Cognizant representative stated the company would examine the issues in the report. [16]
The Verge published a follow-up investigation of Cognizant's Tampa, Florida, office in June 2019. [19] [20] Employees in the Tampa location described working conditions that were worse than the conditions in the Phoenix office. [19] [21] [22]
Moderators were required to sign non-disclosure agreements with Cognizant to obtain the job, although three former workers broke the agreements to provide information to The Verge. [19] [23] In the Tampa office, workers reported inadequate mental health resources. [19] [24] As a result of exposure to videos depicting graphic violence, animal abuse, and child sexual abuse, some employees developed psychological trauma and post-traumatic stress disorder. [19] [25] In response to negative coverage related to its content moderation contracts, a Facebook director indicated that Facebook was in the process of developing a "global resiliency team" that would assist its contractors. [19]
Facebook increased the number of content moderators from 4,500 to 7,500 in 2017 due to legal requirements and other controversies. In Germany, Facebook was responsible for removing hate speech within 24 hours of its being posted. [26] In late 2018, Facebook created an oversight board, an internal "Supreme Court", to decide which content remains and which is removed. [14]
According to Frances Haugen, the number of Facebook employees responsible for content moderation was much smaller as of 2021. [27]
Social media site Twitter has a suspension policy. Between August 2015 and December 2017, it suspended over 1.2 million accounts for terrorist content in an effort to reduce the number of followers and amount of content associated with the Islamic State. [28] Following the acquisition of Twitter by Elon Musk in October 2022, content rules across the platform were weakened in an attempt to prioritize free speech. [29] However, the effects of this campaign have been called into question. [30] [31]
User moderation allows any user to moderate any other user's contributions. Billions of people make decisions every day about what to share, forward, or give visibility to. [32] On a large site with a sufficiently large active population, this usually works well, since the relatively small number of troublemakers is screened out by the votes of the rest of the community.
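A minimal sketch of such vote-based screening is shown below, assuming a hypothetical community-chosen visibility threshold; the names and numbers are illustrative, not any real site's mechanism.

```python
# Minimal sketch of distributed user moderation: each user votes a contribution
# up or down, and items whose net score falls below a threshold are hidden.
# The threshold and names are illustrative only.

HIDE_BELOW = -3   # hypothetical community-chosen threshold

votes = {}        # contribution_id -> running net score

def vote(contribution_id: int, delta: int) -> None:
    """Record a single user's +1 or -1 vote."""
    votes[contribution_id] = votes.get(contribution_id, 0) + delta

def is_visible(contribution_id: int) -> bool:
    """Contributions voted down past the threshold are screened out."""
    return votes.get(contribution_id, 0) > HIDE_BELOW

for _ in range(5):
    vote(42, -1)          # five users down-vote a troublemaker's post
vote(7, +1)               # an ordinary post gets one up-vote

print(is_visible(42))     # False: screened out by the community
print(is_visible(7))      # True
```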
User moderation can also take the form of reactive moderation. This type of moderation depends on the users of a platform or site to report content that is inappropriate or breaches community standards. In this process, when users encounter an image or video they deem unfit, they can click the report button; the complaint is filed and queued for moderators to review. [33]
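The report-and-review flow can be sketched as a simple queue; the function and field names below are hypothetical stand-ins for whatever a real platform exposes.

```python
from collections import deque
from dataclasses import dataclass

# Minimal sketch of reactive moderation: a user's report files a complaint,
# which waits in a queue until a moderator reviews it. Names are illustrative.

@dataclass
class Report:
    content_id: int
    reporter: str
    reason: str

report_queue = deque()

def report_content(content_id: int, reporter: str, reason: str) -> None:
    """Called when a user clicks the report button on an image or video."""
    report_queue.append(Report(content_id, reporter, reason))

def review_next(decide) -> None:
    """A moderator takes the oldest complaint and applies a decision to it."""
    if report_queue:
        report = report_queue.popleft()
        decide(report)

report_content(101, "user_a", "graphic violence")
report_content(102, "user_b", "spam")
review_next(lambda r: print(f"reviewing content {r.content_id}: {r.reason}"))
```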
On 1 May 2023, 150 content moderators who contract for Meta, ByteDance, and OpenAI gathered in Nairobi, Kenya, to launch the first African Content Moderators Union. The union was launched four years after Daniel Motaung was fired and retaliated against for organizing a union at Sama, which contracts for Facebook. [34]
Accenture plc is a global multinational professional services company originating in the United States and headquartered in Dublin, Ireland, that specializes in information technology (IT) services and management consulting. A Fortune Global 500 company, it reported revenues of $64.9 billion in 2024.
Cognizant Technology Solutions Corporation is an American multinational information technology services and consulting company. It is headquartered in Teaneck, New Jersey, U.S. Cognizant is part of the NASDAQ-100 and trades under CTSH. It was founded in Chennai, India, as an in-house technology unit of Dun & Bradstreet in 1994, and started serving external clients in 1996. After a series of corporate reorganizations, there was an initial public offering in 1998. Ravi Kumar S has been the CEO of the company since January 2023, replacing Brian Humphries.
Facebook is a social media and social networking service owned by American technology conglomerate Meta. Created in 2004 by Mark Zuckerberg with four other Harvard College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name derives from the face book directories often given to American university students. Membership was initially limited to Harvard students, gradually expanding to other North American universities. Since 2006, Facebook has allowed everyone aged 13 and older to register, except in a handful of nations where the age limit is 14. As of December 2022, Facebook claimed almost 3 billion monthly active users. As of October 2023, Facebook ranked as the third-most-visited website in the world, with 22.56% of its traffic coming from the United States. It was the most downloaded mobile app of the 2010s.
Facebook has been the subject of criticism and legal action since it was founded in 2004. Criticisms include the outsize influence Facebook has on the lives and health of its users and employees, as well as Facebook's influence on the way media, specifically news, is reported and distributed. Notable issues include Internet privacy, such as use of a widespread "like" button on third-party websites tracking users, possible indefinite records of user information, automatic facial recognition software, and its role in the workplace, including employer-employee account disclosure. The use of Facebook can have negative psychological and physiological effects that include feelings of sexual jealousy, stress, lack of attention, and social media addiction that in some cases is comparable to drug addiction.
Parler is an American alt-tech social networking service associated with conservatives. Launched in August 2018, Parler marketed itself as a free speech-focused and unbiased alternative to mainstream social networks such as Twitter and Facebook. Journalists described Parler as an alt-tech alternative to Twitter, with its users including those banned from mainstream social networks or who oppose their moderation policies.
Social network advertising, also known as social media targeting, is a group of terms used to describe forms of online advertising and digital marketing that focus on social networking services. A significant aspect of this type of advertising is that advertisers can take advantage of users' demographic information, psychographics, and other data points to target their ads.
In the United States, Section 230 is a section of the Communications Act of 1934 that was enacted as part of the Communications Decency Act of 1996, which is Title V of the Telecommunications Act of 1996, and generally provides immunity for online computer services with respect to third-party content generated by their users. At its core, Section 230(c)(1) provides immunity from liability for providers and users of an "interactive computer service" who publish information provided by third-party users:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Instagram is an American photo and video sharing social networking service owned by Meta Platforms. It allows users to upload media that can be edited with filters, be organized by hashtags, and be associated with a location via geographical tagging. Posts can be shared publicly or with preapproved followers. Users can browse other users' content by tags and locations, view trending content, like photos, and follow other users to add their content to a personal feed. A Meta-operated image-centric social media platform, it is available on iOS, Android, Windows 10, and the web. Users can take photos and edit them using built-in filters and other tools, then share them on other social media platforms like Facebook. It supports 32 languages including English, Hindi, Spanish, French, Korean, and Japanese.
Shadow banning, also called stealth banning, hell banning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.
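The visibility rule behind shadow banning can be sketched as follows; the data structures and names are hypothetical illustrations of the behavior described above.

```python
# Minimal sketch of shadow banning: a shadow-banned user's comments remain
# visible to that user but are hidden from everyone else. Illustrative only.

shadow_banned = {"troll_42"}

comments = [
    {"author": "alice",    "text": "Nice write-up!"},
    {"author": "troll_42", "text": "off-topic rant"},
]

def visible_comments(viewer: str) -> list:
    """Return the comments this particular viewer should see."""
    return [
        c for c in comments
        if c["author"] not in shadow_banned or c["author"] == viewer
    ]

print([c["text"] for c in visible_comments("troll_42")])  # sees own comment
print([c["text"] for c in visible_comments("bob")])       # does not see it
```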
Mastodon is an open-source, self-hosted social networking service. Mastodon uses the ActivityPub protocol for federation, which allows users to communicate between independent Mastodon instances and other ActivityPub-compatible services. Mastodon has microblogging features similar to Twitter, and is generally considered part of the Fediverse.
Facebook has been involved in multiple controversies involving censorship of content, removing or omitting information from its services in order to comply with company policies, legal demands, and government censorship laws.
BitChute is an alt-tech video hosting service launched by Ray Vahey in January 2017. It describes itself as offering freedom of speech, while the service is known for hosting far-right individuals, conspiracy theorists, and hate speech. Some creators who use BitChute have been banned from YouTube; some others crosspost content to both platforms or post more extreme content only to BitChute. Before its deprecation, BitChute claimed to use peer-to-peer WebTorrent technology for video distribution, though this was disputed.
Deplatforming, also called no-platforming, is a form of Internet censorship of an individual or group by preventing them from posting on the platforms they use to share their information/ideas. This typically involves suspension, outright bans, or reducing spread.
Meta Portal is a discontinued brand of smart displays and videophones released in 2018 by Meta. The product line consists of four models: Portal, Portal+, Portal TV, and Portal Go. These models provide video chat via Messenger and WhatsApp, augmented by a camera that can automatically zoom and track people's movements. The devices are integrated with Amazon's voice-controlled intelligent personal assistant service Alexa.
MeWe is a global social media and social networking service. The company, based in Los Angeles, California, is also known as Sgrouples, Inc., doing business as MeWe. The site has been described as a Facebook alternative due to its focus on data privacy.
Alphabet Workers Union (AWU), also informally referred to as the Google Union, is an American trade union of workers employed at Alphabet Inc., Google's parent company, with a membership of over 800 in a company of 130,000 employees in the United States, not including temps, contractors, and vendors. It was announced on January 4, 2021, with an initial membership of over 400 after more than a year of secret organizing, and the union includes all types of workers at Alphabet: full-time employees, temporary workers, vendors, and contractors of all job types.
Facebook and Meta Platforms have been criticized for their management of various content on posts, photos and entire groups and profiles. This includes but is not limited to allowing violent content, including content related to war crimes, and not limiting the spread of fake news and COVID-19 misinformation on their platform, as well as allowing incitement of violence against multiple groups.
Moderator Mayhem is a casual web-based video game designed by Engine, Randy Lubin, and Mike Masnick of Techdirt and targeted towards policymakers. It was published in May 2023. The game is about the challenges of moderating user-generated content on social media.
Meta Platforms serves some 3 billion users across its social media services Facebook, Messenger, WhatsApp, and Threads. Meta employs an estimated 60,000–80,000 people as of 2023. Facebook subcontracts an additional estimated 15,000 content moderators around the world. The majority of unionized workers at Meta in the United States are subcontractors working as security guards, janitors, bus drivers, and culinary staff. In 2023, content moderators in Germany and Kenya formed unions and a works council.
YouTube's moderation is based on a set of community guidelines aimed at reducing abuse of the site's features. The uploading of videos containing defamation, pornography, and material encouraging criminal conduct is forbidden by YouTube's "Community Guidelines". Generally prohibited material includes sexually explicit content, videos of animal abuse, shock videos, content uploaded without the copyright holder's consent, hate speech, spam, and predatory behavior. YouTube relies on its users to flag the content of videos as inappropriate, and a YouTube employee will view a flagged video to determine whether it violates the site's guidelines. Despite the guidelines, YouTube has faced criticism over aspects of its operations, including recommendation algorithms that perpetuate videos promoting conspiracy theories and falsehoods, hosting videos ostensibly targeted at children but containing violent or sexually suggestive content involving popular characters, videos of minors attracting pedophilic activity in their comment sections, and fluctuating policies on the types of content that are eligible to be monetized with advertising.