Algorithmic radicalization

Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading them to develop radicalized extremist political views. These algorithms record user interactions, from likes and dislikes to the amount of time spent on posts, to generate an endless stream of media aimed at keeping users engaged. Through echo chamber channels, the consumer is driven to be more polarized through preferences in media and self-confirmation. [1] [2] [3] [4]

Algorithmic radicalization remains a controversial phenomenon, as it is often not in the best interest of social media companies to remove echo chamber channels. [5] [6] The extent to which recommender algorithms actually drive radicalization is disputed; studies have reached contradictory conclusions about whether algorithms promote extremist content.

Social media echo chambers and filter bubbles

Social media platforms learn the interests and likes of each user and modify the experience in their feed accordingly to keep them engaged and scrolling, a phenomenon known as a filter bubble. [7] An echo chamber forms when users encounter beliefs that magnify or reinforce their own and band together with like-minded users in a closed system. [8] Echo chambers spread information without exposing users to opposing beliefs and can lead to confirmation bias. According to group polarization theory, an echo chamber can potentially drive users and groups toward more extreme, radicalized positions. [9] According to a study published in the Proceedings of the National Academy of Sciences, "Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives. Furthermore, when polarization is high, misinformation quickly proliferates." [10]
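The group polarization dynamic described above can be illustrated with a toy opinion-dynamics simulation in the style of the Hegselmann-Krause bounded-confidence model: each agent only "hears" opinions within its tolerance (the echo chamber) and moves to their average. The parameters and the 0-to-1 opinion axis here are purely illustrative and are not drawn from any cited study.

```python
# Toy bounded-confidence model: evenly spread opinions collapse into a
# few mutually isolated camps once agents only listen to nearby views.

def step(opinions, tolerance=0.1):
    """Each agent moves to the mean of all opinions within its tolerance."""
    new = []
    for o in opinions:
        peers = [p for p in opinions if abs(p - o) <= tolerance]  # the "echo"
        new.append(sum(peers) / len(peers))
    return new

opinions = [i / 20 for i in range(21)]   # 21 views spread evenly over 0..1
for _ in range(30):
    opinions = step(opinions)

clusters = sorted(set(round(o, 6) for o in opinions))
print(clusters)   # a few separated camps remain; no single consensus forms
```

Because agents more than one tolerance apart never interact, the population cannot converge to a single consensus; it fragments into internally uniform clusters, which is the closed-system reinforcement the echo chamber literature describes.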

By site

Facebook

Facebook's algorithm focuses on recommending content that makes the user want to interact. It ranks content by prioritizing popular posts by friends, viral content, and sometimes divisive content. Each feed is personalized to the user's specific interests, which can sometimes lead users into an echo chamber of troublesome content. [11] Users can find the list of interests the algorithm uses by going to the "Your ad preferences" page. According to a Pew Research Center study, 74% of Facebook users did not know that this list existed until they were directed to the page during the study. [12] Facebook also commonly assigns political labels to its users. In recent years, Facebook has begun using artificial intelligence to shape the content users see in their feeds and what is recommended to them. Documents known as the Facebook Files have revealed that its AI system prioritizes user engagement over everything else, and that controlling these AI systems has proven difficult. [13]

In an August 2019 internal memo leaked in 2021, Facebook admitted that "the mechanics of our platforms are not neutral", [14] [15] concluding that optimizing for engagement is necessary to reach maximum profits. Hate, misinformation, and politics have proven instrumental in driving that engagement. [16] As the memo put it, "The more incendiary the material, the more it keeps users engaged, the more it is boosted by the algorithm." [14] According to a 2018 study, "false rumors spread faster and wider than true information... They found falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster. This effect is more pronounced with political news than other categories." [17]
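The memo's claim that incendiary material gets an extra algorithmic boost can be sketched as a toy scoring rule. Everything here is hypothetical: items sit on an invented 0-to-1 "stance" axis (0.5 = neutral), and the `engagement` and `boost` terms are illustrative stand-ins, not anything from Facebook's actual ranking system.

```python
# Illustrative toy ranker: predicted engagement grows with how
# incendiary (far from neutral) an item is, and engaging items get a
# scoring bonus. Nothing here reflects any platform's real system.

def engagement(item):
    return abs(item - 0.5)          # hypothetical: extremity ~ engagement

def recommend(items, interest, k=5, boost=0.3):
    # Lower score ranks higher: closeness to the user's inferred
    # interest, minus a bonus for predicted engagement.
    return sorted(items, key=lambda x: abs(x - interest) - boost * engagement(x))[:k]

items = [i / 100 for i in range(101)]   # content along the 0..1 stance axis

# For a user already leaning toward 0.7, the engagement bonus skews the
# top recommendations slightly further from neutral than the user is.
shown = recommend(items, 0.7)
print(shown, sum(shown) / len(shown))
```

Each time the user engages with the slightly more extreme items shown, their inferred interest shifts outward, and the next round of recommendations shifts with it, which is the feedback loop the memo describes.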

YouTube

YouTube has been around since 2005 and has more than 2.5 billion monthly users. Its discovery systems use a user's personal activity (videos watched, favorited, and liked) to direct them to recommended content, and the algorithm is responsible for roughly 70% of the videos recommended to users, driving people to watch certain content. [18] According to one study, users have little power to keep unsolicited videos, including videos featuring hate speech and disturbing livestreams, out of their recommendations. [18]

YouTube has been identified as an influential platform for spreading radicalized content. Al-Qaeda and similar extremist groups have used YouTube for recruitment videos and engagement with international media outlets. In a research study published in American Behavioral Scientist, researchers investigated "whether it is possible to identify a set of attributes that may help explain part of the YouTube algorithm's decision-making process". [19] The study found that the presence of radical keywords in a video's title was a factor in YouTube's recommendations of extremist content. In the February 2023 case Gonzalez v. Google, the question before the U.S. Supreme Court was whether Google, the parent company of YouTube, is protected from lawsuits claiming that the site's algorithms aided terrorists by recommending ISIS videos to users. Section 230 generally protects online platforms from civil liability for content posted by their users. [20]

Multiple studies have found little to no evidence to suggest that YouTube's algorithms direct attention towards far-right content to those not already engaged with it. [21] [22] [23]

TikTok

TikTok is an app that recommends videos to each user's 'For You' page (FYP), making every user's page different. Owing to the nature of the algorithm behind the app, TikTok's FYP has been linked to showing more explicit and radical videos over time, based on a user's previous interactions on the app. [24] Since TikTok's inception, the app has been scrutinized for misinformation and hate speech, as those forms of media usually generate more interactions for the algorithm. [25]

Various extremist groups, including jihadist organizations, have utilized TikTok to disseminate propaganda, recruit followers, and incite violence. The platform's algorithm, which recommends content based on user engagement, can expose users to extremist content that aligns with their interests or interactions. [26]

In 2022, TikTok's head of U.S. security said in a statement that "81,518,334 videos were removed globally between April – June for violating our Community Guidelines or Terms of Service", part of an effort to cut back on hate speech, harassment, and misinformation. [27]

Studies have documented cases of individuals radicalized through content encountered on TikTok. In early 2023, for example, Austrian authorities thwarted a plot against an LGBTQ+ pride parade involving two teenagers and a 20-year-old who had been inspired by jihadist content on TikTok. The youngest suspect, 14 years old, had been exposed to videos by Islamist influencers glorifying jihad. These videos led him to engage further with similar content, eventually resulting in his involvement in planning an attack. [26]

Another case involved the arrest of several teenagers in Vienna, Austria, in 2024, who were planning to carry out a terrorist attack at a Taylor Swift concert. The investigation revealed that some of the suspects had been radicalized online, with TikTok being one of the platforms used to disseminate extremist content that influenced their beliefs and actions. [26]

Self-radicalization

An infographic from the United States Department of Homeland Security's "If You See Something, Say Something" campaign, a national initiative to raise awareness of homegrown terrorism and terrorism-related crime.

The U.S. Department of Justice defines a 'lone wolf' (self-radicalized) terrorist as "someone who acts alone in a terrorist attack without the help or encouragement of a government or a terrorist organization". [28] Lone-wolf terrorism has been on the rise through social media outlets and has been linked to algorithmic radicalization. [29] In online echo chambers, viewpoints typically seen as radical are accepted and quickly adopted by other extremists. [30] Forums, group chats, and social media reinforce these beliefs. [31]

References in media

The Social Dilemma

The Social Dilemma is a 2020 docudrama about how the algorithms behind social media enable addiction and can be used to manipulate people's views, emotions, and behavior to spread conspiracy theories and disinformation. The film repeatedly uses buzzwords such as 'echo chambers' and 'fake news' to argue that psychological manipulation on social media leads to political manipulation. In the film's dramatized storyline, the teenager Ben falls deeper into social media addiction as the algorithm determines that his account has a 62.3% chance of long-term engagement. More and more videos appear in Ben's recommended feed, and he becomes increasingly immersed in propaganda and conspiracy theories, growing more polarized with each video.

Proposed solutions

Weakening Section 230 protections

Section 230 of the Communications Decency Act states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". [32] Section 230 shields platforms from liability for third-party content, such as illegal activity by a user. [32] Critics argue that this protection reduces a company's incentive to remove harmful content or misinformation, and that the resulting loophole has allowed social media companies to maximize profits by pushing radical content without legal risk. [33] Proponents of Section 230 have criticized this claim, noting that before its passage, the court in Stratton Oakmont, Inc. v. Prodigy Services Co. had ruled that moderation in any capacity exposes content providers to liability as "publishers" of the content they choose to leave up. [34]

Lawmakers have drafted legislation that would weaken or remove Section 230 protections for algorithmically recommended content. House Democrats Anna Eshoo, Frank Pallone Jr., Mike Doyle, and Jan Schakowsky introduced the "Justice Against Malicious Algorithms Act" in October 2021 as H.R. 5596. The bill died in committee, [35] but it would have removed Section 230 protections for service providers whose personalized recommendation algorithms knowingly or recklessly deliver content that contributes to physical or severe emotional injury. [36]

See also

Alt-right pipeline
Anderson v. TikTok
Antisemitism on social media
BitChute
Center for Countering Digital Hate
Content moderation
Counter Extremism Project
Deplatforming
Echo chamber (media)
Far-right use of social media
Institute for Strategic Dialogue
Like button
Online hate speech
Online youth radicalization
Rage-baiting
Section 230
TikTok
Viral phenomenon
YouTube
YouTube moderation

References

  1. "What is a Social Media Echo Chamber? | Stan Richards School of Advertising". advertising.utexas.edu. November 18, 2020. Retrieved November 2, 2022.
  2. "The Websites Sustaining Britain's Far-Right Influencers". bellingcat. February 24, 2021. Retrieved March 10, 2021.
  3. Camargo, Chico Q. (January 21, 2020). "YouTube's algorithms might radicalise people – but the real problem is we've no idea how they work". The Conversation. Retrieved March 10, 2021.
  4. E&T editorial staff (May 27, 2020). "Facebook did not act on own evidence of algorithm-driven extremism". eandt.theiet.org. Retrieved March 10, 2021.
  5. "How Can Social Media Firms Tackle Hate Speech?". Knowledge at Wharton. Retrieved November 22, 2022.
  6. "Internet Association - We Are The Voice Of The Internet Economy. | Internet Association". December 17, 2021. Archived from the original on December 17, 2021. Retrieved November 22, 2022.
  7. Kaluža, Jernej (July 3, 2022). "Habitual Generation of Filter Bubbles: Why is Algorithmic Personalisation Problematic for the Democratic Public Sphere?". Javnost - the Public, Journal of the European Institute for Communication and Culture. 29 (3): 267–283. doi:10.1080/13183222.2021.2003052. ISSN   1318-3222.
  8. "What is a Social Media Echo Chamber? | Stan Richards School of Advertising". advertising.utexas.edu. November 18, 2020. Retrieved April 12, 2023.
  9. Cinelli, Matteo; De Francisci Morales, Gianmarco; Galeazzi, Alessandro; Quattrociocchi, Walter; Starnini, Michele (March 2, 2021). "The echo chamber effect on social media". Proceedings of the National Academy of Sciences of the United States of America. 118 (9): e2023301118. Bibcode:2021PNAS..11823301C. doi:10.1073/pnas.2023301118. ISSN 0027-8424. PMC 7936330. PMID 33622786.
  10. Cinelli, Matteo; De Francisci Morales, Gianmarco; Starnini, Michele; Galeazzi, Alessandro; Quattrociocchi, Walter (January 14, 2021). "The echo chamber effect on social media". Proceedings of the National Academy of Sciences of the United States of America. 118 (9): e2023301118. Bibcode:2021PNAS..11823301C. doi: 10.1073/pnas.2023301118 . ISSN   0027-8424. PMC   7936330 . PMID   33622786.
  11. Oremus, Will; Alcantara, Chris; Merrill, Jeremy; Galocha, Artur (October 26, 2021). "How Facebook shapes your feed". The Washington Post . Retrieved April 12, 2023.
  12. Atske, Sara (January 16, 2019). "Facebook Algorithms and Personal Data". Pew Research Center: Internet, Science & Tech. Retrieved April 12, 2023.
  13. Korinek, Anton (December 8, 2021). "Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files". Brookings. Retrieved April 12, 2023.
  14. "Disinformation, Radicalization, and Algorithmic Amplification: What Steps Can Congress Take?". Just Security. February 7, 2022. Retrieved November 2, 2022.
  15. Isaac, Mike (October 25, 2021). "Facebook Wrestles With the Features It Used to Define Social Networking". The New York Times. ISSN   0362-4331 . Retrieved November 2, 2022.
  16. Little, Olivia (March 26, 2021). "TikTok is prompting users to follow far-right extremist accounts". Media Matters for America. Retrieved November 2, 2022.
  17. "Study: False news spreads faster than the truth". MIT Sloan. Retrieved November 2, 2022.
  18. "Hated that video? YouTube's algorithm might push you another just like it". MIT Technology Review. Retrieved April 11, 2023.
  19. Murthy, Dhiraj (May 1, 2021). "Evaluating Platform Accountability: Terrorist Content on YouTube". American Behavioral Scientist. 65 (6): 800–824. doi:10.1177/0002764221989774. S2CID   233449061 via JSTOR.
  20. Root, Damon (April 2023). "Scotus Considers Section 230's Scope". Reason. 54 (11): 8. ISSN   0048-6906.
  21. Ledwich, Mark; Zaitsev, Anna (March 2, 2020). "Algorithmic extremism: Examining YouTube's rabbit hole of radicalization". First Monday. 25 (3). arXiv: 1912.11211 . doi: 10.5210/fm.v25i3.10419 . Retrieved November 8, 2024.
  22. Hosseinmardi, Homa; Ghasemian, Amir; Clauset, Aaron; Mobius, Markus; Rothschild, David M.; Watts, Duncan J. (August 10, 2021). "Examining the consumption of radical content on YouTube". Proceedings of the National Academy of Sciences. 118 (32): e2101967118. doi: 10.1073/pnas.2101967118 . PMC   8364190 . PMID   34341121.
  23. Chen, Annie Y.; Nyhan, Brendan; Reifler, Jason; Robertson, Ronald E.; Wilson, Christo (September 2023). "Subscriptions and external links help drive resentful users to alternative and extremist YouTube channels". Science Advances. 9 (35): eadd8080. doi:10.1126/sciadv.add8080. PMC   10468121 . PMID   37647396.
  24. "TikTok's algorithm leads users from transphobic videos to far-right rabbit holes". Media Matters for America. October 5, 2021. Retrieved November 22, 2022.
  25. Little, Olivia (April 2, 2021). "Seemingly harmless conspiracy theory accounts on TikTok are pushing far-right propaganda and TikTok is prompting users to follow them". Media Matters for America. Retrieved November 22, 2022.
  26. "TikTok Jihad: Terrorists Leverage Modern Tools to Recruit and Radicalize". The Soufan Center. August 9, 2024. Retrieved August 10, 2024.
  27. "Our continued fight against hate and harassment". Newsroom | TikTok. August 16, 2019. Retrieved November 22, 2022.
  28. "Lone Wolf Terrorism in America | Office of Justice Programs". www.ojp.gov. Retrieved November 2, 2022.
  29. Alfano, Mark; Carter, J. Adam; Cheong, Marc (2018). "Technological Seduction and Self-Radicalization". Journal of the American Philosophical Association. 4 (3): 298–322. doi:10.1017/apa.2018.27. ISSN   2053-4477. S2CID   150119516.
  30. Dubois, Elizabeth; Blank, Grant (May 4, 2018). "The echo chamber is overstated: the moderating effect of political interest and diverse media". Information, Communication & Society. 21 (5): 729–745. doi: 10.1080/1369118X.2018.1428656 . ISSN   1369-118X. S2CID   149369522.
  31. Sunstein, Cass R. (May 13, 2009). Going to Extremes: How Like Minds Unite and Divide. Oxford University Press. ISBN   978-0-19-979314-3.
  32. "47 U.S. Code § 230 - Protection for private blocking and screening of offensive material". LII / Legal Information Institute. Retrieved November 2, 2022.
  33. Smith, Michael D.; Alstyne, Marshall Van (August 12, 2021). "It's Time to Update Section 230". Harvard Business Review. ISSN   0017-8012 . Retrieved November 2, 2022.
  34. Masnick, Mike (June 23, 2020). "Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act" . Retrieved April 11, 2024.
  35. "H.R. 5596 (117th): Justice Against Malicious Algorithms Act of 2021". GovTrack . Retrieved April 11, 2024.
  36. Robertson, Adi (October 14, 2021). "Lawmakers want to strip legal protections from the Facebook News Feed". The Verge . Archived from the original on October 14, 2021. Retrieved October 14, 2021.