Rage-baiting


Rage-baiting or rage-farming is internet slang for a manipulative tactic designed to elicit outrage with the goal of increasing internet traffic, online engagement, revenue, and support. [1] [2] Rage baiting or farming can be used as a tool to increase engagement and attract subscribers, followers, and supporters, which can be financially lucrative. [3] Rage baiting and rage farming manipulate users into responding in kind to offensive, inflammatory headlines, memes, tropes, or comments. [4] [5] [6] [7]


Rage-farming, which has been cited since at least January 2022, is an offshoot of rage-baiting in which the outrage of the person being provoked is farmed or manipulated into online engagement through rage-seeding that amplifies the message of the original content creator. [2] [8] [9] It has also been used as a political tactic at the expense of one's opponent.

Political scientist Jared Wesley of the University of Alberta stated in 2022 that the use of the tactic of rage farming was on the rise with right-wing politicians employing the technique by "promoting conspiracy theories and misinformation." As politicians increase rage farming against their political and ideological opponents, they attract more followers online, some of whom may engage in offline violence, including verbal violence and acts of intimidation. Wesley describes how those engaged in rage farming combine half-truths with "blatant lies". [10]

Rage farming is a compound of rage and farm. Rage-seeding, rage-bait, rage baiting, and outrage baiting are similar Internet slang neologisms referring to manipulative tactics that feed on readers' anxieties and fears. They are all forms of clickbait, a term in use since c. 1999, which is "more nuanced" and not necessarily seen as a negative tactic. [11] [12] The term rage bait, which has been cited since at least 2009, is a negative form of click-baiting as it relies on manipulating users to respond in kind to offensive, inflammatory "headlines", memes, tropes, or comments. [4] [5] [6] [7]

In his 2022 tweet, a senior researcher at Citizen Lab, John Scott-Railton, described how a person was "being rage-farmed" when they responded to an inflammatory post with an equally inflammatory quote tweet as quote tweets reward the original rage tweet. Algorithms on social media such as Facebook, Twitter, TikTok, Instagram, and YouTube were discovered to reward increased positive and negative engagement by directing traffic to posts and amplifying them. [1]

In an Atlantic article on Republican strategy, American writer Molly Jong-Fast described rage farming as "the product of a perfect storm of fuckery, an unholy mélange of algorithms and anxiety". [2]

Political scientist Jared Wesley wrote that rage farming was often "used to describe rhetoric designed to elicit the rage of opponents." [8] Rage-baiting is used to describe a tactic to attract, maintain, and increase a base of supporters and followers. [7]

Clickbait, in all its iterations, including rage-baiting and rage-farming, is a form of media manipulation, specifically Internet manipulation. While the goal of some clickbait is to generate revenue, it can also be used as an effective tactic to influence people on social media platforms such as Facebook, Twitter, Instagram, and YouTube. [11] According to a November 2016 analysis of Facebook, clickbait is intentionally designed to appeal to a targeted interest group's pre-existing confirmation biases. Facebook's algorithms create a filter bubble by sharing specific posts with a filtered audience. [13]

A Westside Seattle Herald article published in May 2016 cited the definition from the online Urban Dictionary: "a post on social media by a news organisation designed expressly to outrage as many people as possible in order to generate interaction." [5] [6] The Herald article described how increased user traffic online translates into more revenue for online platforms and websites from paid advertisements and sponsors. [6]

A May 25, 2016 article described ragebait as "clickbait's evil twin." [4]

A 2006 article in Time magazine described how Internet trolls post incendiary comments online with the sole purpose of provoking an argument, even on the most banal topics. Statements like "NASCAR is about as much a sport as cheerleading" posted in a car-racing forum, or messages of support for open borders addressed to Lou Dobbs, were cited as examples. [14]

Rage bait and outrage bait creators invent "controversial news stories out of thin air". [15] The example cited was a 15 December 2018 Irish digital media company ad falsely claiming that two thirds of people wanted Santa to be either female or gender neutral. [15]

Harry Seitz argued in a 2021 Medium article that ragebait is a form of internet trolling. The troll laughs "all the way to the bank" while irate readers comment and complain. [16]

As early as 2012, research suggested that in both media and politics, eliciting outrage is a powerful tool in media manipulation. [17] [18] In political media, both real and imagined outrage attract readers, making rage-evoking narratives very popular. [18]

Background

A 2012 Journal of Politics (JOP) article found that political actors were intentionally incorporating emotional content into their messaging to evoke anxiety and elicit interest in a topic. [17] The article questioned why this political tactic resulted in viewers feeling more anger than anxiety. The study found that anger increased information-seeking behaviour and often resulted in web users clicking on the political website to learn more. [17] The research said there were also psychological incentives to use angry rhetoric in political communication. [17] A 2018 Media Matters for America article citing the JOP study reiterated that "anger is a powerful tool in the worlds of both politics and media." [18] The political media industry knows that real or imagined outrage attracts readers, making narratives that evoke it very popular in political media. [18]

A November 2018 National Review article decrying social-justice warriors was cited as an example of rage-baiting by Media Matters for America. [19] [18] The Review article was a response to tweets criticizing the cartoon image used by ABC's Twitter account to advertise A Charlie Brown Thanksgiving on November 21, 2018, in which Franklin, Charlie Brown's Black friend, sat alone on one side of the Thanksgiving dinner table. [19] Several unverified Twitter accounts, including one with zero followers, called the image racist. [18] Conservatives, frustrated by these overly sensitive, politically correct, "snowflake" liberals, in turn responded in anger. The Media Matters for America article noted the irony that the National Review article, which intended to illustrate how easily liberals were provoked to anger, actually succeeded in enraging conservatives. [18]

Information technologies and digital media enable unprecedented capacities for online manipulation, [20] including click-baiting, rage baiting, and rage farming. In his January 7, 2022 tweet, John Scott-Railton described how a person was "being rage farmed" when they responded to an inflammatory post with an equally inflammatory quote tweet, since algorithms on Twitter, TikTok, YouTube, Facebook, and other social media platforms reward posts that attract engagement by amplifying them. [1]

A 2020 review of The Post Millennial, a conservative Canadian online news magazine started in 2017, called it far-right America's most recent rage-baiting outlet. [21]

Examples of rage farming

Social media

Rage farming and rage baiting are the most recent iterations of clickbait and other forms of Internet manipulation that use conspiracy theories and misinformation to fuel anger and engage users. Facebook has been "blamed for fanning sectarian hatred, steering users toward extremism and conspiracy theories, and incentivizing politicians to take more divisive stands," according to a 2021 Washington Post report. [22] In spite of previous reports on changes to its News Feed algorithms to reduce clickbait, revelations by Facebook whistleblower Frances Haugen and content from the 2021 Facebook leak, informally referred to as the Facebook Papers, provide evidence of the role the company's News Feed algorithm had played. [22]

Media and governmental investigations in the wake of revelations from Facebook whistleblower Frances Haugen and the 2021 Facebook leak provide insight into the role various algorithms play in farming outrage for profit by spreading divisiveness, conspiracy theories, and sectarian hatred that can allegedly contribute to real-world violence. [22] A highly criticized example was when Facebook, with over 25 million accounts in Myanmar, neglected to police rage-inducing hate speech posts targeting the Rohingya Muslim minority that allegedly facilitated the Rohingya genocide. [23] [24] [25] [9] [26] [27] In 2021, a US$173 billion class action lawsuit filed against Meta Platforms Inc (the new name of Facebook) on behalf of Rohingya refugees claimed that Facebook's "algorithms amplified hate speech." [23]

In response to complaints about clickbait on Facebook's News Feed and News Feed ranking algorithm, in 2014 and again in 2016, the company introduced an anti-clickbait algorithm to remove sites from their News Feed that frequently use headlines that "withhold, exaggerate or distort information." [28]

A February 2019 article that was promoted on Facebook described how outrage bait made people angry "on purpose". [15] Digital media companies and social media actors incite outrage to increase engagement ("clicks, comments, likes and shares"), which generates "more advertising revenue". [15] If content does not increase engagement, the "timeline algorithm" limits the number of users that the uninteresting content can reach. [15] According to the article, when Facebook geared up its war against clickbait, its algorithm changed, making it harder for creators and sites to use clickbait. The article said that a new engagement strategy, whether rage bait or outrage bait, was introduced to replace clickbait. [15]

The 2016 algorithms were allegedly trained to filter phrases that were frequently used in clickbait headlines similar to filters that remove email spam. [28] Publishers who continue to use clickbait were allegedly punished through loss of referral traffic. [28]
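Such phrase-based filtering works conceptually like a keyword spam filter. The following is a minimal illustrative sketch only: the phrase list and threshold are invented for illustration, since the details of Facebook's 2016 classifier have not been published.

```python
# Illustrative sketch of phrase-based clickbait detection, analogous to
# keyword email-spam filtering. The phrases and threshold are invented;
# Facebook's actual classifier is not public.
CLICKBAIT_PHRASES = [
    "you won't believe",
    "what happened next",
    "will shock you",
    "number 7 will",
]

def looks_like_clickbait(headline: str, threshold: int = 1) -> bool:
    """Flag a headline that contains enough known clickbait phrases."""
    text = headline.lower()
    hits = sum(phrase in text for phrase in CLICKBAIT_PHRASES)
    return hits >= threshold
```

Under the scheme described above, publishers whose headlines were repeatedly flagged by such a filter would then be demoted in ranking, losing referral traffic.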

Starting in 2017, Facebook engineers changed their ranking algorithm to score emoji reactions five times higher than mere "likes" because emojis extended user engagement, according to a 26 October 2021 Washington Post article. Facebook's business model depended on keeping and increasing user engagement. [29] One of Facebook's researchers raised concerns that the algorithms that rewarded "controversial" posts including those that incited outrage, could inadvertently result in more spam, abuse, and clickbait. [29]
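The reported weighting can be expressed as a toy scoring function. This is only a sketch of the single weighting described in the Post article; Facebook's real News Feed ranking model uses many more signals and is unpublished.

```python
def engagement_score(likes: int, emoji_reactions: int) -> int:
    """Toy ranking score in which emoji reactions (love, haha, angry, etc.)
    are weighted five times higher than plain likes, per the reported
    2017 change. Real News Feed ranking is far more complex."""
    return likes * 1 + emoji_reactions * 5

# Under this weighting, a post with 10 likes and 10 angry reactions (score 60)
# outscores a post with 40 likes and no reactions (score 40).
```

This illustrates the researcher's concern quoted above: any post that reliably provokes reactions, including outrage, outranks one that merely collects likes.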

Since 2018, Facebook executives had been warned in a slide presentation that their algorithms promoted divisiveness, but they refused to act. [30] In a 2022 interview, Scott-Railton observed that the algorithmic amplification of inflammatory quote tweets in rage farming, which loop back upon themselves, may be planned and structural or accidental. [2] Algorithms reward positive and negative engagement alike, which creates a "genuine dilemma for everyone". Algorithms also allow politicians to bypass legacy media outlets that fact-check, giving them access to a targeted, uncritical audience that is very receptive to their messaging, even when it is misinformation. [18]

By 2019, Facebook's data scientists confirmed that posts that incited the angry emoji were "disproportionately likely to include misinformation, toxicity and low-quality news." [29]

The 2020 Netflix docudrama The Social Dilemma analyzed how social media is intentionally designed for profit maximization through Internet manipulation, which can include spreading conspiracy theories and disinformation and promoting problematic social media use. [31] Topics covered in the film include the role of social media in political polarization in the United States, political radicalization, including online youth radicalization, the spread of fake news, and its use as a propaganda tool by political parties and governmental bodies. According to a former Google design ethicist, social media networks have three main goals: to maintain and increase engagement, growth, and advertising income. [32]

Facebook outside the United States

A 2021 report by The Washington Post revealed that Facebook did not adequately police its service outside the United States. [25] The company invested only 16% of its budget for fighting misinformation and hate speech in countries outside the United States, such as France, Italy, and India, where English is not the native language. In contrast, it allocated 84% to the United States, which represents only 10% of Facebook's daily users. [9]

Since at least 2019, Facebook employees were aware of how vulnerable countries like India were to "abuse by bad actors and authoritarian regimes", but did nothing to block accounts that published hate speech and incited violence. [9] The 434-page report submitted to the Office of the United Nations High Commissioner for Human Rights on the findings of the Independent International Fact-Finding Mission on Myanmar investigated the role of social media in disseminating hate speech and inciting violence in the anti-Muslim riots and the Rohingya genocide. Facebook was mentioned 289 times in the report, as there are millions of Facebook accounts in that country. [26] Following the publication of an earlier version of the report in August, Facebook took the "rare step" of removing accounts, representing 12 million followers, implicated in the report's findings. [24]

In October 2021, Haugen testified at a United States Senate committee that Facebook had been inciting ethnic violence in Myanmar which has over 25 million Facebook users and in Ethiopia through its algorithms that promoted posts inciting or glorifying violence. False claims about Muslims stockpiling weapons were not removed. [25]

The Digital Services Act is a European legislative proposal to strengthen rules on fighting disinformation and harmful content, submitted by the European Commission to the European Parliament and the Council of the European Union partially in response to concerns raised by the Facebook Files and the revelations in Haugen's testimony in the European Parliament. [27] In 2021, a US$173 billion class action lawsuit was lodged by the law firms Edelson PC and Fields PLLC against Meta Platforms Inc, formerly known as Facebook, in the United States District Court for the Northern District of California on behalf of Rohingya refugees, claiming that Facebook was negligent in not removing inflammatory posts that facilitated the Rohingya genocide in Myanmar. The lawsuit said that Facebook's "algorithms amplified hate speech." [23]

Following its launch in Myanmar in 2011, Facebook "quickly became ubiquitous." [23] A report commissioned by Facebook led to the company's 2018 admission that it had failed to do "enough to prevent the incitement of violence and hate speech against the [...] Muslim minority in Myanmar." The independent report found that "Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence". [23]

See also

Related Research Articles

Troll (slang)

In slang, a troll is a person who posts deliberately offensive or provocative messages online or who performs similar behaviors in real life. The methods and motivations of trolls can range from benign to sadistic. These messages can be inflammatory, insincere, digressive, extraneous, or off-topic, and may have the intent of provoking others into displaying emotional responses, or manipulating others' perception, thus acting as a bully or a provocateur. The behavior is typically for the troll's amusement, or to achieve a specific result such as disrupting a rival's online activities or purposefully causing confusion or harm to other people. Trolling behaviors involve tactical aggression to incite emotional responses, which can adversely affect the target's well-being.

Human rights in Myanmar

Human rights in Myanmar under its military regime have long been regarded as among the worst in the world. In 2022, Freedom House rated Myanmar's human rights at 9 out of 100.

Facebook

Facebook is a social media and social networking service owned by the American technology conglomerate Meta. Created in 2004 by Mark Zuckerberg with four other Harvard College students and roommates, Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes, its name derives from the face book directories often given to American university students. Membership was initially limited to Harvard students, gradually expanding to other North American universities. Since 2006, Facebook has allowed anyone aged 13 and over to register, except in a handful of nations where the age limit is 14. As of December 2022, Facebook claimed almost 3 billion monthly active users. As of October 2023, Facebook ranked as the third most visited website in the world, with 22.56% of its traffic coming from the United States. It was the most downloaded mobile app of the 2010s.

Social network advertising, also known as social media targeting, is a group of terms used to describe forms of online advertising and digital marketing focusing on social networking services. A significant aspect of this type of advertising is that advertisers can take advantage of users' demographic information, psychographics, and other data points to target their ads.

Facebook is a social networking service that has been gradually replacing traditional media channels since 2010. Facebook has limited moderation of the content posted to its site. Because the site indiscriminately displays material publicly posted by users, Facebook can, in effect, threaten oppressive governments. Facebook can simultaneously propagate fake news, hate speech, and misinformation, thereby undermining the credibility of online platforms and social media.

Ashin Wirathu is a Burmese Buddhist monk, and the leader of the 969 Movement in Myanmar. He has incited the persecution of Muslims in Myanmar through his speeches. Facebook banned his page on the charge of allegedly spreading religious hatred towards other communities, after repeated warnings to not post religiously inflammatory content.

Clickbait

Clickbait is a text or a thumbnail link that is designed to attract attention and to entice users to follow ("click") that link and read, view, or listen to the linked piece of online content, being typically deceptive, sensationalized, or otherwise misleading. A "teaser" aims to exploit the "curiosity gap", providing just enough information to make readers of news websites curious, but not enough to satisfy their curiosity without clicking through to the linked content. Clickbait headlines often add an element of dishonesty, using enticements that do not accurately reflect the content being delivered. The "-bait" suffix makes an analogy with fishing, where a hook is disguised by an enticement (bait), presenting the impression to the fish that it is a desirable thing to swallow.

ClickHole is a satirical website that parodies clickbait websites such as BuzzFeed and Upworthy. It was launched on June 12, 2014, in conjunction with The Onion's decision to stop its print edition and shift its focus exclusively to the internet. According to ClickHole's senior editor, Jermaine Affonso, the website is "The Onion's response to click-bait content" and serves as "a parody of online media". Critics noted that, on a deeper level, ClickHole illustrates the shallow nature of social media content and media sites' desperation to share such content.

Minds (social network)

Minds is an open-source and distributed social network. Users can earn cryptocurrency for using Minds, and tokens can be used to boost their posts or crowdfund other users. Minds has been described as more privacy-focused than mainstream social media networks.

Outrage porn is any type of media or narrative designed to use outrage to provoke strong emotional reactions for the purpose of expanding audiences or increasing engagement. The term outrage porn was coined in 2009 by The New York Times political cartoonist and essayist Tim Kreider.

A social bot, also described as a social AI or social algorithm, is a software agent that communicates autonomously on social media. The messages it distributes can be simple and operate in groups and various configurations with partial human control (hybrid) via algorithm. Social bots can also use artificial intelligence and machine learning to express messages in more natural human dialogue.

Internet manipulation is the co-optation of online digital technologies, including algorithms, social bots, and automated scripts, for commercial, social, military, or political purposes. Internet and social media manipulation are the prime vehicles for spreading disinformation due to the importance of digital platforms for media consumption and everyday communication. When employed for political purposes, internet manipulation may be used to steer public opinion, polarise citizens, circulate conspiracy theories, and silence political dissidents. Internet manipulation can also be done for profit, for instance, to harm corporate or political adversaries and improve brand reputation. Internet manipulation is sometimes also used to describe the selective enforcement of Internet censorship or selective violations of net neutrality.

Rohingya genocide

The Rohingya genocide is a series of ongoing persecutions and killings of the Muslim Rohingya people by the military of Myanmar. The genocide has consisted of two phases to date: the first was a military crackdown that occurred from October 2016 to January 2017, and the second has been occurring since August 2017. The crisis forced over a million Rohingya to flee to other countries. Most fled to Bangladesh, resulting in the creation of the world's largest refugee camp, while others escaped to India, Thailand, Malaysia, and other parts of South and Southeast Asia, where they continue to face persecution. Many other countries consider these events ethnic cleansing.

Feed (Facebook)

Facebook's Feed, formerly known as the News Feed, is a web feed feature for the social network. The feed is the primary system through which users are exposed to content posted on the network. Feed highlights information that includes profile changes, upcoming events, and birthdays, among other updates. Using a proprietary method, Facebook selects a handful of updates to show users every time they visit their feed, out of an average of 2,000 updates they can potentially receive. Over two billion people use Facebook every month, making the network's Feed the most viewed and most influential aspect of the news industry. The feature, introduced in 2006, was renamed "Feed" in 2022.

Occupy Democrats

Occupy Democrats is an American left-wing media outlet built around a Facebook page and corresponding website. Established in 2012, it publishes false information, hyperpartisan content, and clickbait. Posts originating from the Occupy Democrats Facebook page are among the most widely shared political content on Facebook.

Online hate speech is a type of speech that takes place online with the purpose of attacking a person or a group based on their race, religion, ethnic origin, sexual orientation, disability, and/or gender. Online hate speech is not easily defined, but can be recognized by the degrading or dehumanizing function it serves.

Social media use in politics refers to the use of online social media platforms in political processes and activities. Political processes and activities include all activities that pertain to the governance of a country or area, including political organization, global politics, political corruption, political parties, and political values. The media's primary duty is to present information and alert the public when events occur. This information may affect what people think and the actions they take. The media can also place pressure on the government to act by signaling a need for intervention or showing that citizens want change.

Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading to them developing radicalized extremist political views. Algorithms record user interactions, from likes/dislikes to amount of time spent on posts, to generate endless media aimed to keep users engaged. Through echo chamber channels, the consumer is driven to be more polarized through preferences in media and self-confirmation.

In 2021, an internal document leak from the company then known as Facebook showed it was aware of harmful societal effects from its platforms, yet persisted in prioritizing profit over addressing these harms. The leak, released by whistleblower Frances Haugen, resulted in reporting from The Wall Street Journal in September, as The Facebook Files series, as well as the Facebook Papers, by a consortium of news outlets the next month.

Facebook content management controversies

Facebook or Meta Platforms has been criticized for its management of various content on posts, photos and entire groups and profiles. This includes but is not limited to allowing violent content, including content related to war crimes, and not limiting the spread of fake news and COVID-19 misinformation on their platform, as well as allowing incitement of violence against multiple groups.

References

  1. Scott-Railton 2022.
  2. Jong-Fast 2022.
  3. Thompson 2013.
  4. Ashworth 2016.
  5. Jeans 2014.
  6. Hom 2015.
  7. Dastner 2021.
  8. Wesley 2022.
  9. Zakrzewski et al. 2021.
  10. Rusnell 2022.
  11. Frampton 2015.
  12. Nygma 2009.
  13. Ohlheiser 2016.
  14. Cox 2006.
  15. ThisInterestsMe 2019.
  16. Seitz 2021.
  17. Ryan 2012.
  18. Rainie et al. 2022.
  19. Timpf 2018.
  20. Susser, Roessler & Nissenbaum 2019.
  21. Holt 2020.
  22. Oremus et al. 2021.
  23. Milmo 2021.
  24. Mahtani 2018.
  25. Akinwotu 2021.
  26. OHCHR 2018.
  27. European Parliament 2021.
  28. Constine 2016.
  29. Merrill & Oremus 2021.
  30. Seetharaman & Horwitz 2020.
  31. Ehrlich 2020.
  32. Orlowski 2020.

Sources