Rage-baiting

In internet slang, rage-baiting (also ragebaiting, rage-farming, or rage-seeding) is the manipulative tactic of eliciting outrage with the goal of increasing internet traffic, online engagement, and revenue, as well as attracting new subscribers, followers, or supporters. [1] [2] [3] This manipulation occurs through offensive or inflammatory headlines, memes, tropes, or comments that provoke users to respond in kind. [4] [5] [6] [7]

The related terms rage-seeding and rage-farming specifically describe the process by which content creators intentionally sow outrage to harvest (farm) online engagement, thereby amplifying their message. [3] [8] [9] Political actors have employed rage-baiting as a tactic against their opponents, while social media algorithms reward both positive and negative engagement, inadvertently encouraging such behavior. [2]

The broader concept of posting provocative content to encourage user interaction is known as engagement farming. [10] Rage bait was named Oxford University Press's Word of the Year for 2025. [11]

The term rage-farming (or rage-seeding) derives from the metaphor of "farming" rage—planting seeds that cause angry responses to grow. [12] It evolved from clickbait, a term used since c. 1999, which encompasses broader content designed to generate clicks and is not necessarily viewed negatively. [13] [14] The specific term rage bait has been documented since at least 2009 and represents a particularly manipulative form of clickbait that relies on deliberately offensive or inflammatory content. [4] [5] [6] [7] A 2016 article characterized rage-bait as "clickbait's evil twin." [4]

While rage-baiting shares surface similarities with Internet trolling, as both involve posting provocative content to elicit emotional responses, they differ fundamentally in purpose and structure. Trolling typically serves individual amusement or disruption without organized economic or ideological motives, whereas rage-baiting operates as a systematic strategy designed to maximize engagement metrics for financial gain or to advance specific political narratives through coordinated campaigns rather than isolated mischief. [15]

A 2016 Westside Seattle Herald article cited the Urban Dictionary definition: "a post on social media by a news organisation designed expressly to outrage as many people as possible in order to generate interaction." [5] [6] Political scientist Jared Wesley described rage-farming as "rhetoric designed to elicit the rage of opponents" while also attracting and maintaining a like-minded base of supporters. [8] [7] In his influential January 2022 tweet, Citizen Lab researcher John Scott-Railton explained that users are "being rage-farmed" when they respond to inflammatory posts with equally inflammatory quote tweets, as algorithms on Facebook, X, TikTok, Instagram, and YouTube reward such engagement by amplifying the original content. [2]

Research dating to 2012 established that eliciting outrage serves as a powerful tool in both media and political manipulation. [16] [17] A Journal of Politics study found that anger—more than anxiety—increases information-seeking behavior and drives users to click through to content, creating psychological incentives for angry rhetoric in political communication. [16] Rage-bait creators sometimes fabricate "controversial news stories out of thin air," producing what philosopher Harry Frankfurt characterized as bullshit—statements made with indifference to truth, crafted purely for strategic effect. [18] An example is a December 2018 advertisement falsely claiming that two-thirds of people wanted Santa Claus to be female or gender-neutral. [18]

As a form of media manipulation and Internet manipulation, rage-baiting can generate revenue through increased traffic, but also functions as an influence tactic on social media platforms. [13] A November 2016 analysis found that such content exploits targeted audiences' confirmation biases, with Facebook's algorithms creating filter bubbles that distribute inflammatory posts to receptive viewers. [19]

Mechanisms

Rage-baiting exploits well-documented psychological and economic mechanisms. Research has established that moral outrage serves as the primary driver, with analysis of over 563,000 tweets finding each moral-emotional word increases message diffusion by approximately 20%. [20] Negativity bias, which is the tendency to prioritize negative information, makes users particularly susceptible. Experiments have demonstrated that each additional negative word in headlines increases click-through rates by 2.3%. [21] Confirmation bias and echo chambers amplify these effects, as users preferentially engage with content confirming existing beliefs while algorithmic systems create filter bubbles exposing individuals primarily to ideologically aligned viewpoints. [22]
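As a rough illustration of what these per-word effect sizes imply, the sketch below applies them arithmetically. It assumes, purely for illustration, that the reported effects compound multiplicatively and independently; the cited studies report per-word estimates rather than a compounding model, and the function names here are invented.

```python
# Illustrative arithmetic only. The 1.20-per-moral-emotional-word and
# 1.023-per-negative-word factors come from the studies cited above;
# treating them as independent multiplicative effects is an assumption.

def relative_diffusion(moral_emotional_words: int) -> float:
    """Share/retweet rate relative to a message with none (~+20% per word)."""
    return 1.20 ** moral_emotional_words

def relative_ctr(negative_words: int) -> float:
    """Headline click-through rate relative to a neutral one (~+2.3% per word)."""
    return 1.023 ** negative_words

# Three moral-emotional words => ~1.73x diffusion;
# four negative headline words => ~1.10x click-through rate.
print(f"{relative_diffusion(3):.2f}x diffusion")  # 1.73x diffusion
print(f"{relative_ctr(4):.2f}x CTR")              # 1.10x CTR
```

Under these assumptions, even a handful of emotionally charged words yields a substantially wider reach, which is the incentive the research describes.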

Economically, the attention economy creates powerful financial incentives for rage-baiting. Platform revenue models based on advertising and engagement metrics reward content maximizing user interaction regardless of emotional valence or factual accuracy. [23] Research has quantified that increasing anger-inducing content by just 0.1 generates approximately six additional retweets, directly translating emotional manipulation into economic value. [24] Algorithmic amplification compounds these incentives, with studies showing that platform algorithms systematically boost divisive political content and disproportionately amplify low-credibility sources. [25]

Examples and impact

In politics

A 2006 Time article described how Internet trolls post incendiary comments to provoke arguments on even banal topics, such as declaring in racing forums that "NASCAR is about as much a sport as cheerleading" or praising open borders to immigration hardliner Lou Dobbs. [15]

Political scientist Jared Wesley of the University of Alberta stated in 2022 that rage farming was increasingly used by politicians to promote "conspiracy theories and misinformation". As politicians increase rage farming against their political and ideological opponents, they attract more followers online, some of whom may engage in offline violence, including verbal abuse and acts of intimidation. Wesley describes how those engaged in rage farming combine half-truths with "blatant lies". [26]

In an Atlantic article on Republican strategy, American writer Molly Jong-Fast described rage farming as "the product of a perfect storm of fuckery, an unholy mélange of algorithms and anxiety". [3]

A November 2018 National Review article decrying social-justice warriors was cited as an example of rage-baiting by Media Matters for America. [27] [17] The Review article responded to tweets criticizing the cartoon image ABC's Twitter account used to advertise A Charlie Brown Thanksgiving on November 21, 2018, in which Franklin, the Black friend, sat alone on one side of Charlie Brown's Thanksgiving dinner table. [27] Several unverified Twitter accounts, including one with zero followers, called the image racist. [17] Conservatives, frustrated by what they saw as overly sensitive, politically correct "snowflake" liberals, responded in anger. The Media Matters for America article noted the irony: the National Review piece, intended to illustrate how easily liberals were provoked to anger, instead succeeded in enraging conservatives. [17]

A 2020 review of the conservative Canadian online news magazine The Post Millennial, which was started in 2017, described it as far-right America's most recent rage-baiting outlet. [28]

Social media

Facebook has been "blamed for fanning sectarian hatred, steering users toward extremism and conspiracy theories, and incentivizing politicians to take more divisive stands," according to a 2021 Washington Post report. [29] Despite previous announcements about changes to its News Feed algorithms to reduce clickbait, revelations by Facebook whistleblower Frances Haugen and content from the 2021 Facebook leak, informally referred to as the Facebook Papers, provided evidence of the News Feed algorithm's role in amplifying divisive content. [29]

Investigations following Haugen's revelations demonstrated how algorithms farm outrage for profit by spreading divisiveness, conspiracy theories, and sectarian hatred that can allegedly contribute to real-world violence. [29] A highly criticized example occurred when Facebook, with over 25 million accounts in Myanmar, neglected to police rage-inducing hate speech posts targeting the Rohingya Muslim minority that allegedly facilitated the Rohingya genocide. [30] [31] [32] [9] [33] [34] In 2021, a US$173 billion class action lawsuit filed against Meta Platforms Inc. on behalf of Rohingya refugees claimed that Facebook's "algorithms amplified hate speech." [30]

In response to complaints about clickbait, Facebook introduced anti-clickbait algorithms in 2014 and 2016 to remove sites that frequently use headlines that "withhold, exaggerate or distort information." [35] The 2016 algorithms were trained to filter phrases frequently used in clickbait headlines, similar to email spam filters. [35] Publishers who continued using clickbait were punished through loss of referral traffic. [35]
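The reporting compares these filters to email spam filters but does not describe the model itself. Below is a minimal sketch of that general idea, a naive Bayes-style phrase scorer over headlines; the training examples, vocabulary, and decision rule are all invented for illustration and do not reflect Facebook's actual system.

```python
# Toy spam-filter-style headline classifier in the spirit of the approach
# described above. The tiny training set and all phrases are invented;
# a production system would train on far larger labeled corpora.
from collections import Counter
import math

CLICKBAIT = ["you won't believe what happened next",
             "this one trick doctors hate",
             "the shocking truth they don't want you to know"]
NORMAL = ["city council approves new budget",
          "local team wins regional championship",
          "weather service issues storm warning"]

def word_counts(headlines):
    return Counter(w for h in headlines for w in h.lower().split())

cb_counts, ok_counts = word_counts(CLICKBAIT), word_counts(NORMAL)
cb_total, ok_total = sum(cb_counts.values()), sum(ok_counts.values())
vocab = set(cb_counts) | set(ok_counts)

def clickbait_log_odds(headline: str) -> float:
    """Sum of per-word log-likelihood ratios with add-one smoothing;
    positive values lean clickbait, negative values lean normal."""
    score = 0.0
    for w in headline.lower().split():
        p_cb = (cb_counts[w] + 1) / (cb_total + len(vocab))
        p_ok = (ok_counts[w] + 1) / (ok_total + len(vocab))
        score += math.log(p_cb / p_ok)
    return score

print(clickbait_log_odds("you won't believe this shocking trick"))  # > 0
print(clickbait_log_odds("council approves storm budget"))          # < 0
```

Headlines scoring above a chosen threshold could then be demoted in distribution, which matches the reported penalty of lost referral traffic.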

Starting in 2017, Facebook engineers changed their ranking algorithm to score emoji reactions five times higher than "likes" because emojis extended user engagement. [36] Facebook's business model depended on keeping and increasing user engagement. [36] One researcher raised concerns that algorithms rewarding "controversial" posts, including those that incited outrage, could inadvertently result in more spam, abuse, and clickbait. [36]
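Only the five-to-one weighting of emoji reactions over likes is documented in this reporting; everything else in the following sketch (the signal names, the other weights, the linear form of the score) is a hypothetical illustration of why such a reweighting can favor outrage-inducing posts.

```python
# Toy feed-ranking sketch: the only documented number is the 5x weight on
# emoji reactions relative to likes (per the reporting above); all other
# signals and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    emoji_reactions: int  # love, haha, wow, sad, angry -- all weighted alike
    comments: int

def engagement_score(p: Post) -> float:
    """Linear engagement score. Note the scorer is valence-blind:
    an 'angry' reaction boosts a post exactly as much as 'love'."""
    return 1.0 * p.likes + 5.0 * p.emoji_reactions + 2.0 * p.comments

feed = [
    Post("Calm explainer", likes=900, emoji_reactions=50, comments=40),
    Post("Outrage bait", likes=200, emoji_reactions=400, comments=120),
]
for p in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):7.0f}  {p.title}")
# The outrage post (2440) outranks the calmer one (1230) despite far
# fewer likes, illustrating how reaction weighting can reward anger.
```

Because the score counts an angry reaction the same as any other emoji, posts engineered to provoke anger can rise in ranking without ever being widely liked.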

Since 2018, Facebook executives had been warned that their algorithms promoted divisiveness, but they refused to act. [37] Scott-Railton observed in a 2022 interview that it remained unclear whether the algorithmic amplification of inflammatory quote tweets in rage farming was deliberate and structural or accidental. [3] Because algorithms reward both positive and negative engagement, they create what he called a "genuine dilemma for everyone". Algorithms also allow politicians to bypass legacy media fact-checking, giving them access to targeted, uncritical audiences receptive to their messaging, even when it contains misinformation. [17]

By 2019, Facebook's data scientists confirmed that posts inciting the angry emoji were "disproportionately likely to include misinformation, toxicity and low-quality news." [36]

The 2020 Netflix docudrama The Social Dilemma analyzed how social media was intentionally designed for profit maximization through Internet manipulation, including spreading conspiracy theories and disinformation and promoting problematic social media use. [38] Topics covered included social media's role in political polarization in the United States, political radicalization, including online youth radicalization, the spread of fake news, and its use as a propaganda tool by political parties and governmental bodies. According to a former Google design ethicist featured in the film, social media networks have three main goals: maintaining and increasing engagement, growth, and advertisement income. [39]

A 2024 Rolling Stone article discussed the rise of "rage-bait" influencers on TikTok who create content designed to provoke anger and generate engagement. Influencers such as Winta Zesu and Louise Melcher produce staged, controversial videos that often go viral across multiple platforms, drawing in viewers who may not realize the content is fabricated. [40]

Facebook outside the United States

A 2021 Washington Post report revealed that Facebook did not adequately police its service outside the United States. [32] The company invested only 16% of its budget for fighting misinformation and hate speech in countries outside the United States where English is not the primary language, such as France, Italy, and India, while allocating 84% to the United States, which represents only 10% of Facebook's daily users. [9]

Since at least 2019, Facebook employees had been aware of how vulnerable countries like India were to "abuse by bad actors and authoritarian regimes" but did nothing to block accounts publishing hate speech and inciting violence. [9] A 434-page report submitted in 2018 to the Office of the United Nations High Commissioner for Human Rights by the Independent International Fact-Finding Mission on Myanmar investigated social media's role in disseminating hate speech and inciting violence during the anti-Muslim riots and the Rohingya genocide; Facebook was mentioned 289 times in the report. [33] Following publication of an earlier version that August, Facebook took the "rare step" of removing accounts with a combined 12 million followers that were implicated in the findings. [31]

In October 2021, Haugen testified before a United States Senate committee that Facebook had been inciting ethnic violence in Myanmar, which has over 25 million Facebook users, and in Ethiopia through algorithms that promoted posts inciting or glorifying violence. False claims about Muslims stockpiling weapons were not removed. [32]

The Digital Services Act, a European legislative proposal to strengthen rules on fighting disinformation and harmful content, was submitted by the European Commission to the European Parliament and Council of the European Union partially in response to concerns raised by the Facebook Files and Haugen's testimony. [34] In 2021, law firms Edelson PC and Fields PLLC lodged a US$173 billion class action lawsuit against Meta Platforms Inc. in the United States District Court for the Northern District of California on behalf of Rohingya refugees, claiming Facebook was negligent in not removing inflammatory posts that facilitated the Rohingya genocide. The lawsuit stated that Facebook's "algorithms amplified hate speech." [30]

Following its launch in Myanmar in 2011, Facebook "quickly became ubiquitous." [30] A report commissioned by Facebook led to the company's 2018 admission that it had failed to do "enough to prevent the incitement of violence and hate speech against the [...] Muslim minority in Myanmar." The independent report found that "Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence". [30]

Documented harms

Research has documented significant associations between rage-baiting exposure and adverse outcomes across multiple domains. Mental health studies have found moderate to strong correlations between problematic social media use and depression, anxiety, and stress, with heavy users showing a 70% increase in self-reported depressive symptoms. [41] [42] Field experiments on political polarization demonstrated that exposure to partisan animosity content leads to measurably colder feelings toward political outgroups, with participants showing over 2-point decreases on feeling thermometers after just 10 days. [43] Studies examining institutional trust found that even single exposures to social media criticism of public institutions can significantly undermine credibility, with integrity-based criticisms generating moral outrage that attracts viral engagement. [44]

Rage-baiting has been directly linked to misinformation amplification, with research across multiple platforms establishing that false content systematically evokes more outrage than accurate information. Analysis of over 126,000 news stories found that falsehoods are 70% more likely to be retweeted than truth and reach audiences six times faster. [45] Studies found that users share outrage-evoking misinformation without reading it first, suggesting emotional manipulation short-circuits critical evaluation. [46]

Countermeasures

Platforms, researchers, and educators have developed various evidence-based countermeasures against rage-baiting. Individual-level interventions include prebunking techniques based on inoculation theory, which have proven effective across cultures; field studies with over 5.4 million users demonstrated that brief videos teaching manipulation recognition can reduce misinformation susceptibility at costs as low as $0.05 per view. [47] Media literacy interventions have shown consistent positive effects in building resilience. Teaching users to verify sources by consulting other websites, a practice known as lateral reading, has significantly improved their ability to assess content trustworthiness. [48]

References

  1. Thompson 2013.
  2. Scott-Railton 2022.
  3. Jong-Fast 2022.
  4. Ashworth 2016.
  5. Jeans 2014.
  6. Hom 2015.
  7. Dastner 2021.
  8. Wesley 2022.
  9. Zakrzewski et al. 2021.
  10. Starr 2024.
  11. Oxford University Press 2025.
  12. Wesley 2023.
  13. Frampton 2015.
  14. Nygma 2009.
  15. Cox 2006.
  16. Ryan 2012.
  17. Rainie et al. 2022.
  18. ThisInterestsMe 2019.
  19. Ohlheiser 2016.
  20. Brady 2017.
  21. Robertson 2023.
  22. Cinelli 2021.
  23. Gillespie 2014.
  24. Chuai & Zhao 2022.
  25. Huszár 2022.
  26. Rusnell 2022.
  27. Timpf 2018.
  28. Holt 2020.
  29. Oremus et al. 2021.
  30. Milmo 2021.
  31. Mahtani 2018.
  32. Akinwotu 2021.
  33. OHCHR 2018.
  34. European Parliament 2021.
  35. Constine 2016.
  36. Merrill & Oremus 2021.
  37. Seetharaman & Horwitz 2020.
  38. Ehrlich 2020.
  39. Orlowski 2020.
  40. Jones 2024.
  41. Karim 2020.
  42. Dempsey 2022.
  43. Piccardi 2025.
  44. Lee 2025.
  45. Vosoughi, Roy & Aral 2018.
  46. McLoughlin 2024.
  47. Roozenbeek 2022.
  48. Breakstone 2021.
