YouTube, a video sharing platform, has faced various criticisms over the years, particularly regarding content moderation, offensive content, and monetization. Criticism has focused on aspects of its operations, [1] including its recommendation algorithms perpetuating videos that promote conspiracy theories and falsehoods, [2] its hosting of videos ostensibly targeting children but containing violent or sexually suggestive content involving popular characters, [3] videos of minors attracting pedophilic activity in their comment sections, [4] and fluctuating policies on the types of content eligible to be monetized with advertising. [1]
YouTube has also been blocked by several countries. As of 2018, public access to YouTube was blocked by countries including China, North Korea, Iran, Turkmenistan, [5] Uzbekistan, [6] [7] Tajikistan, Eritrea, Sudan and South Sudan.
Controversial content has included material relating to Holocaust denial and the Hillsborough disaster, in which 96 football fans from Liverpool were crushed to death in 1989. [8] [9] In July 2008, the Culture and Media Committee of the House of Commons of the United Kingdom stated that it was "unimpressed" with YouTube's system for policing its videos, and argued that "proactive review of content should be standard practice for sites hosting user-generated content". YouTube responded by stating:
We have strict rules on what's allowed, and a system that enables anyone who sees inappropriate content to report it to our 24/7 review team and have it dealt with promptly. We educate our community on the rules and include a direct link from every YouTube page to make this process as easy as possible for our users. Given the volume of content uploaded on our site, we think this is by far the most effective way to make sure that the tiny minority of videos that break the rules come down quickly. [10] (July 2008)
In October 2010, U.S. Congressman Anthony Weiner urged YouTube to remove from its website videos of imam Anwar al-Awlaki. [11] YouTube pulled some of the videos in November 2010, stating they violated the site's guidelines. [12] In December 2010, YouTube added the ability to flag videos for containing terrorism content. [13]
In 2018, YouTube introduced a system that would automatically add information boxes to videos that its algorithms determined may present conspiracy theories and other fake news, filling the information box with content from Encyclopædia Britannica and Wikipedia as a means of informing users, in order to minimize misinformation propagation without impacting freedom of speech. [14] [15] The Wikimedia Foundation said in a statement that "neither Wikipedia nor the Wikimedia Foundation are part of a formal partnership with YouTube. We were not given advance notice of this announcement." [16]
In the wake of the Notre-Dame fire on April 15, 2019, several user-uploaded videos of the landmark fire were automatically flagged by YouTube's system with an Encyclopædia Britannica article on the false conspiracy theories around the September 11 attacks. Several users complained to YouTube about this inappropriate connection. YouTube officials apologized for this, stating that their algorithms had misidentified the fire videos and added the information block automatically, and that they were taking steps to remedy this. [17]
To limit the spread of misinformation and fake news, YouTube has rolled out a comprehensive policy on how it plans to deal with technically manipulated videos. [18]
On April 18, 2023, YouTube revealed changes in its handling of content associated with eating disorders. The platform's Community Guidelines now prohibit content that could encourage emulation by at-risk users, including content depicting behaviors such as severe calorie tracking and purging after eating. However, videos featuring positive behavior, such as in the context of recovery, are permitted on the platform under two conditions: the user must have a registered (logged-in) account and must be older than 18. The policy was created in collaboration with nonprofit organizations as well as the National Eating Disorder Association. Garth Graham, YouTube's Global Head of Healthcare, said in an interview with CNN that the policy change was aimed at ensuring that the video-sharing platform provides an avenue for "community recovery and resources" while ensuring continued viewer protection. [19]
YouTube contracts companies to hire content moderators, who view content flagged as potentially violating YouTube's content policies and determine whether it should be removed. In September 2020, a class-action suit was filed by a former content moderator who reported developing post-traumatic stress disorder (PTSD) after an 18-month period on the job. The former content moderator said that she was regularly made to exceed YouTube's stated limit of four hours per day of viewing graphic content. The lawsuit alleges that YouTube's contractors gave little to no training or support for moderators' mental health, made prospective employees sign NDAs before showing them any examples of content they would see while reviewing, and censored all mention of trauma from its internal forums. It also purports that requests for extremely graphic content to be blurred, reduced in size or made monochrome, per recommendations from the National Center for Missing and Exploited Children, were rejected by YouTube as not a high priority for the company. [20] [21] [22]
Five leading content creators whose channels were based on LGBTQ+ materials filed a federal lawsuit against YouTube in August 2019, alleging that YouTube's algorithms divert discovery away from their channels, impacting their revenue. The plaintiffs claimed that the algorithms discourage content with words like "lesbian" or "gay", which would be predominant in their channels' content, and that, because of YouTube's near-monopolization of online video services, it was abusing that position. [23] In early 2021, the lawsuit was dismissed based on the plaintiffs' inability to prove that YouTube acted on behalf of the government and because of Section 230. [24]
In June 2022, Media Matters, a media watchdog group, reported that homophobic and transphobic content calling LGBT people "predators" and "groomers" was becoming more common on YouTube. [25] The report also referred to common accusations in YouTube videos that LGBT people are mentally ill. [25] The report stated the content appeared to be in violation of YouTube's hate speech policy. [25]
In late 2020, animal welfare charity Lady Freethinker identified 2,053 videos on YouTube in which they stated animals were "deliberately harmed for entertainment or were shown to be under severe psychological distress, physical pain or dead." [26]
In 2021, Lady Freethinker filed a lawsuit accusing YouTube of a breach of contract in allowing a large number of videos on its site showing animal abuse and failing to remove them when notified. YouTube responded by stating that they had "expanded its policy on animal abuse videos" in 2021, and since the introduction of the new policy "removed hundreds of thousands of videos and terminated thousands of channels for violations." [27]
In 2022, Google defeated the Lady Freethinker lawsuit, with a judge ruling that YouTube was protected by Section 230 of the Communications Decency Act, which shields internet platforms from lawsuits based on content posted by their users. [28]
In 2023, YouTube stated that animal abuse "has no place on their platforms, and they are working to remove content (of that nature)". [29] [30] [31] [32] [33] [34]
YouTube has been criticized for using an algorithm that gives great prominence to videos that promote conspiracy theories, falsehoods and incendiary fringe discourse. [35] [36] [37] According to an investigation by The Wall Street Journal, "YouTube's recommendations often lead users to channels that feature conspiracy theories, partisan viewpoints and misleading videos, even when those users haven't shown interest in such content. When users show a political bias in what they choose to view, YouTube typically recommends videos that echo those biases, often with more-extreme viewpoints." [35] [38] When users search for political or scientific terms, YouTube's search algorithms often give prominence to hoaxes and conspiracy theories. [37] [39] After YouTube drew controversy for giving top billing to videos promoting falsehoods and conspiracy when people made breaking-news queries during the 2017 Las Vegas shooting, YouTube changed its algorithm to give greater prominence to mainstream media sources. [35] [40] [41] [42] In 2018, it was reported that YouTube was again promoting fringe content about breaking news, giving great prominence to conspiracy videos about Anthony Bourdain's death. [43]
In 2017, it was revealed that advertisements were being placed on extremist videos, including videos by rape apologists, anti-Semites, and hate preachers who received ad payouts. [44] After firms started to stop advertising on YouTube in the wake of this reporting, YouTube apologized and said that it would give firms greater control over where ads got placed. [44]
Alex Jones, known for far-right conspiracy theories, had built a massive audience on YouTube. [45] YouTube drew criticism in 2018 when it removed a video from Media Matters compiling offensive statements made by Jones, stating that it violated its policies on "harassment and bullying". [46] On August 6, 2018, however, YouTube removed Alex Jones' YouTube page following a content violation. [47]
University of North Carolina professor Zeynep Tufekci has referred to YouTube as "The Great Radicalizer", saying "YouTube may be one of the most powerful radicalizing instruments of the 21st century." [48] Jonathan Albright of the Tow Center for Digital Journalism at Columbia University described YouTube as a "conspiracy ecosystem". [37] [49]
In January 2019, YouTube said that it had introduced a new policy, starting in the United States, intended to stop recommending videos containing "content that could misinform users in harmful ways." YouTube gave flat earth theories, miracle cures, and 9/11 trutherism as examples. [50] Efforts within YouTube engineering to stop recommending borderline extremist videos that fell just short of forbidden hate speech, and to track their popularity, had originally been rejected because they could interfere with viewer engagement. [51]
In January 2019, the site announced it would be implementing measures directed towards "raising authoritative content and reducing borderline content and harmful misinformation." [52] That June, YouTube announced it would be banning Holocaust denial and neo-Nazi content. [52] YouTube has blocked the neo-Nazi propaganda film Europa: The Last Battle from being uploaded. [53]
Multiple research studies have investigated cases of misinformation on YouTube. In a July 2019 study based on ten YouTube searches using the Tor Browser related to climate and climate change, the majority of videos communicated views contrary to the scientific consensus on climate change. [54] A May 2023 study found that YouTube was monetizing and profiting from videos that included misinformation about climate change. [55] A 2019 BBC investigation of YouTube searches in ten different languages found that YouTube's algorithm promoted health misinformation, including fake cancer cures. [56] In Brazil, YouTube has been linked to pushing pseudoscientific misinformation on health matters, as well as elevating far-right fringe discourse and conspiracy theories. [57] In the Philippines, numerous channels disseminated misinformation related to the 2022 Philippine elections. [58] Additionally, research on the dissemination of Flat Earth beliefs on social media has shown that networks of YouTube channels form an echo chamber that polarizes audiences by appearing to confirm preexisting beliefs. [59]
Before 2019, YouTube took steps to remove specific videos or channels related to supremacist content that had violated its acceptable use policies but otherwise did not have site-wide policies against hate speech. [60]
In the wake of the March 2019 Christchurch mosque attacks, YouTube and other sites that allow user-submitted content, such as Facebook and Twitter, drew criticism for doing little to moderate and control the spread of hate speech, which was considered to be a factor in the rationale for the attacks. [61] [62] These platforms were pressured to remove such content, but in an interview with The New York Times, YouTube's chief product officer Neal Mohan said that unlike content such as ISIS videos, which take a particular format and are thus easy to detect through computer-aided algorithms, general hate speech was more difficult to recognize and handle and thus could not readily be removed without human interaction. [63]
In May 2019, YouTube joined an initiative led by France and New Zealand, together with other countries and tech companies, to develop tools for blocking online hate speech and to develop national-level regulations penalizing technology firms that failed to take steps to remove such speech; the United States declined to participate. [64] [65] Subsequently, on June 5, 2019, YouTube announced a major change to its terms of service, "specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status." YouTube identified specific examples of such videos as those that "promote or glorify Nazi ideology, which is inherently discriminatory". YouTube further stated it would "remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place." [60] [66]
In August 2019, the channel of the white nationalist website VDARE was banned. The ban was later reversed. [67] The channel was permanently banned in August 2020 for violating YouTube's policies against hate speech. [68]
In September 2018, YouTube limited some videos by Red Ice, a white supremacist multimedia company, after it posted a video claiming that white women were being "pushed" into interracial relationships. [69] In October 2019, YouTube banned Red Ice's main channel for hate speech violations. The channel had about 330,000 subscribers. Lana Lokteff and Red Ice promoted a backup channel in an attempt to circumvent the ban. [70] [71] A week later, the backup channel was also removed by YouTube. [72] [73]
In June 2020, YouTube was criticized for allowing white supremacist content on its platform for years after it announced it would be pledging $1 million to fight racial injustice. [74] Later that month, it banned several channels associated with white supremacy, including those of Stefan Molyneux, David Duke, and Richard B. Spencer, asserting that these channels violated its policies on hate speech. The ban occurred the same day that Reddit announced the ban of several hate speech sub-forums, including r/The_Donald. [75]
After misinformation spread via YouTube during the COVID-19 pandemic claiming that 5G communications technology was responsible for the spread of coronavirus disease 2019, which led to multiple 5G towers in the United Kingdom being attacked by arsonists, YouTube removed all videos linking 5G and the coronavirus in this manner. [76]
In September 2021, YouTube extended this policy to cover videos disseminating misinformation about any vaccine that had received approval from local health authorities or the World Health Organization, including long-approved vaccines against measles or hepatitis B. [77] [78] The platform proceeded to remove the accounts of anti-vaccine campaigners such as Robert F. Kennedy Jr. and Joseph Mercola. [78]
Google and YouTube implemented policies in October 2021 to deny monetization or revenue to advertisers or content creators that promoted climate change denial, which "includes content referring to climate change as a hoax or a scam, claims denying that long-term trends show the global climate is warming, and claims denying that greenhouse gas emissions or human activity contribute to climate change." [79] In January 2024, the Center for Countering Digital Hate reported that climate change deniers were instead pushing other forms of climate change denial that have not yet been banned by YouTube, including false claims that global warming is "beneficial or harmless", and which undermined climate solutions and climate science. [80] [81]
In July 2022, YouTube announced policies to combat misinformation surrounding abortion, such as videos with instructions to perform abortion methods that are considered unsafe and videos that contain misinformation about the safety of abortion. [82]
YouTube has extended the moderation of misinformation to non-medical areas. In the weeks following the 2020 United States presidential election, the site added policies to remove or label videos promoting election fraud claims; [83] [84] however, it reversed this policy in June 2023, citing that the reversal was necessary to "openly debate political ideas, even those that are controversial or based on disproven assumptions". [85] [86]
In the wake of the 2024 United States presidential election, YouTube reported that it had been working to remove content that promoted election interference, misled voters, or encouraged political violence. The platform also vowed to remove election misinformation generated by artificial intelligence. [87]
Leading into 2017, there was a significant increase in the number of videos related to children, driven both by the popularity of parents vlogging their families' activities and by content creators moving away from material that was often criticized or demonetized toward family-friendly content. In 2017, YouTube reported that time spent watching family vloggers had increased by 90%. [88] [89] However, with the increase in videos featuring children, the site began to face several controversies related to child safety. During the second quarter of 2017, the owners of the popular channel FamilyOFive, which featured them playing "pranks" on their children, were accused of child abuse. Their videos were eventually deleted, and two of their children were removed from their custody. [90] [91] [92] [93] A similar case occurred in 2019, when the owner of the channel Fantastic Adventures was accused of abusing her adopted children; her videos were later deleted. [94]
Later that year, YouTube came under criticism for showing inappropriate videos targeted at children and often featuring popular characters in violent, sexual or otherwise disturbing situations, many of which appeared on YouTube Kids and attracted millions of views. The term "Elsagate" was coined on the Internet and then used by various news outlets to refer to this controversy. [95] [96] [97] [98] On November 11, 2017, YouTube announced it was strengthening site security to protect children from unsuitable content. Later that month, the company started to mass delete videos and channels that made improper use of family-friendly characters. As part of a broader concern regarding child safety on YouTube, the wave of deletions also targeted channels that showed children taking part in inappropriate or dangerous activities under the guidance of adults. Most notably, the company removed Toy Freaks, a channel with over 8.5 million subscribers, that featured a father and his two daughters in odd and upsetting situations. [99] [100] [101] [102] [103] According to analytics specialist SocialBlade, it earned up to $11.2 million annually prior to its deletion in November 2017. [104]
Even for content that appears to be aimed at children and to contain only child-friendly material, YouTube's system allows uploaders to remain anonymous. These questions have been raised in the past, as YouTube has had to remove channels with children's content which, after becoming popular, suddenly included inappropriate content masked as children's content. [105] Additionally, some of the most-watched children's programming on YouTube comes from channels that have no identifiable owners, raising concerns about intent and purpose. One channel of concern was "Cocomelon", which provided numerous mass-produced animated videos aimed at children. Through 2019, it had drawn up to US$10 million a month in ad revenue and was one of the largest kid-friendly channels on YouTube before 2020. Ownership of Cocomelon was unclear outside of its ties to "Treasure Studio", itself an unknown entity, raising questions about the channel's purpose, [105] [106] [107] but in February 2020 Bloomberg News was able to confirm and interview the small team of American owners of "Cocomelon", who stated that their goal for the channel was simply to entertain children and that they wanted to keep to themselves to avoid attention from outside investors. [108] The anonymity of such channels raises concerns because of the lack of knowledge of what purpose they are trying to serve. [109] The difficulty of identifying who operates these channels "adds to the lack of accountability", according to Josh Golin of the Campaign for a Commercial-Free Childhood, and educational consultant Renée Chernow-O'Leary found the videos were designed to entertain with no intent to educate, all leading critics and parents to be concerned that their children were becoming too enraptured by the content from these channels. [105] Content creators that earnestly make child-friendly videos have found it difficult to compete with larger channels, as they are unable to produce content at the same rate and lack the same means of being promoted through YouTube's recommendation algorithms that the larger animated channel networks have shared. [109]
In January 2019, YouTube officially banned videos containing "challenges that encourage acts that have an inherent risk of severe physical harm" (such as the Tide Pod Challenge) and videos featuring pranks that "make victims believe they're in physical danger" or cause emotional distress in children. [110]
Also in November 2017, it was revealed in the media that many videos featuring children—often uploaded by the minors themselves, and showing innocent content such as the children playing with toys or performing gymnastics—were attracting comments from pedophiles [111] [112] with predators finding the videos through private YouTube playlists or typing in certain keywords in Russian. [112] Other child-centric videos originally uploaded to YouTube began propagating on the dark web, and uploaded or embedded onto forums known to be used by pedophiles. [113]
As a result of the controversy, which added to the concern about "Elsagate", several major advertisers whose ads had been running against such videos froze spending on YouTube. [98] [114] In December 2018, The Times found more than 100 grooming cases in which children were manipulated into sexually implicit behavior (such as taking off clothes, adopting overtly sexual poses and touching other children inappropriately) by strangers. [115] After a reporter flagged the videos in question, half of them were removed, and the rest were removed after The Times contacted YouTube's PR department. [115]
In February 2019, YouTube vlogger Matt Watson identified a "wormhole" that would cause the YouTube recommendation algorithm to draw users into this type of video content, and make all of that user's recommended content feature only these types of videos. [116] Most of these videos had comments from sexual predators noting timestamps of when the children were shown in compromising positions, or otherwise making indecent remarks. In some cases, other users had re-uploaded the videos in unlisted form but with incoming links from other videos, and then monetized them, propagating this network. [117] In the wake of the controversy, the service reported that it had deleted over 400 channels and tens of millions of comments, and reported the offending users to law enforcement and the National Center for Missing and Exploited Children. A spokesperson explained that "any content—including comments—that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube. There's more to be done, and we continue to work to improve and catch abuse more quickly." [118] [119] Despite these measures, AT&T, Disney, Dr. Oetker, Epic Games, and Nestlé all pulled their advertising from YouTube. [117] [120]
Subsequently, YouTube began to demonetize and block advertising on the types of videos that have drawn these predatory comments. The service explained that this was a temporary measure while they explore other methods to eliminate the problem. [121] YouTube also began to flag channels that predominantly feature children, and preemptively disable their comments sections. "Trusted partners" can request that comments be re-enabled, but the channel will then become responsible for moderating comments. These actions mainly target videos of toddlers, but videos of older children and teenagers may be protected as well if they contain actions that can be interpreted as sexual, such as gymnastics. YouTube stated it was also working on a better system to remove comments on other channels that matched the style of child predators. [122] [123]
A related attempt to algorithmically flag videos containing references to the string "CP" (an abbreviation of child pornography) resulted in some prominent false positives involving unrelated topics using the same abbreviation, including videos related to the mobile video game Pokémon Go (which uses "CP" as an abbreviation of the statistic "Combat Power") and Club Penguin. YouTube apologized for the errors and reinstated the affected videos. [124] Separately, online trolls have attempted to have videos flagged for takedown by commenting with statements similar to what the child predators had said; this activity became an issue during the PewDiePie vs T-Series rivalry in early 2019. YouTube stated that it does not take action on videos merely because of such comments, only on those it has flagged as likely to draw child predator activity. [125]
In June 2019, The New York Times cited researchers who found that users who watched erotic videos could be recommended seemingly innocuous videos of children. [126] As a result, Senator Josh Hawley announced plans to introduce federal legislation that would ban YouTube and other video sharing sites from including videos that predominantly feature minors as "recommended" videos, excluding those that were "professionally produced", such as videos of televised talent shows. [127] YouTube has suggested potential plans to remove all videos featuring children from the main YouTube site and transfer them to the YouTube Kids site, where it would have stronger controls over the recommendation system, as well as other major changes on the main YouTube site to the recommended feature and auto-play system. [128]
An August 2022 report by the Center for Countering Digital Hate, a British think tank, found that harassment against women was flourishing on YouTube. It noted that channels espousing a similar ideology to that of men's rights influencer Andrew Tate were using YouTube to grow their audience, despite Tate being banned from the platform. [129] In his 2022 book Like, Comment, Subscribe: Inside YouTube's Chaotic Rise to World Domination, Bloomberg reporter Mark Bergen said that many female content creators were dealing with harassment, bullying, and stalking. [129]