Far-right political groups use mainstream social media platforms, including Facebook, Instagram, TikTok, X (formerly Twitter), and YouTube, for communication, propaganda, and mobilization. [1] By leveraging viral trends, entertaining content, and direct interaction, these groups aim to spread their political messages, recruit followers, and foster a sense of community. Such activities form part of broader political processes involving the organization and dissemination of political values and ideologies.
The internet has facilitated new channels of communication that significantly impact the spread of news and the dynamics of political discourse. The interactive nature of social media allows far-right groups to reach wider and younger audiences, often using subtle messaging and popular social media tactics. Social media has become an important medium for how news and political information are consumed and shared, influencing public perception and civic engagement. [2]
Far-right groups on platforms like TikTok engage with the youth through relatable and often non-political content to subtly promote their ideologies. This approach can affect political participation and election outcomes by shaping opinions and encouraging political involvement. [3] Additionally, social media usage in political campaigns has become increasingly significant due to its communal and interactive nature, as users engage in discussions, share endorsements, and participate in collective actions such as voting encouragement.
Social media platforms are known for enabling anyone with an internet connection to create content and actively participate in political discourse. [4] They enhance access to political information. [5] However, many users primarily consume content passively, with content creation concentrated among a small group of active users. According to a Eurobarometer survey by the European Parliament, 79% of young Europeans aged 15 to 24 follow influencers or content creators on social media, highlighting the increasing use of these platforms for news consumption in this age group. [6]
Far-right influencers use strategies from influencer culture to spread reactionary messages and monetize their politics. They engage in viral stunts and create real-world commotion to gain online visibility, fostering a sense of shared intimacy with their followers. Additionally, they employ provocative tactics such as trolling and humor to build community and disguise hate speech, while also appearing authentic and relatable to maintain audience support. [7]

Far-right groups exploit the technological affordances of social media platforms to maximize the reach and impact of their messages. They rely on replicability to share and alter content across different platforms, often decontextualizing messages to fit their narrative. Scalability is achieved through strategic use of algorithms and hashtags, allowing for broader audience engagement and visibility. Additionally, connectivity is enhanced by forming online communities that foster in-group solidarity and facilitate the spread of extremist ideologies, bypassing traditional media gatekeepers and leveraging direct communication with followers. [8]
Far-right groups have been exploiting Facebook's algorithmic tendencies to create ideological echo chambers, in which conservatives and liberals largely consume different political news, [9] which may contribute to increased political polarization, although research on this point remains inconclusive. [9] Research has shown that changes to Facebook's algorithm significantly alter what users see and how they interact on the platform, with conservatives engaging more with political news and consuming more content flagged as untrustworthy or inaccurate. This asymmetry facilitates the spread of far-right misinformation, as politically aligned content is prioritized, encouraging conservative users to like, share, and comment more frequently on such posts. [10]

In addition to algorithmic manipulation, far-right militias and extremist groups have established strong presences on Facebook, using the platform to organize, recruit, and spread their ideology. [11] They create private groups and pages that foster a sense of community and solidarity among members, often bypassing platform moderation policies. These groups frequently engage in activities designed to provoke conflict and gain visibility, such as trolling and viral stunts, and use Facebook's connectivity features to coordinate real-world actions and protests. Despite Meta's efforts to moderate content, far-right groups continue to leverage Facebook's features to maintain and grow their influence online. [11]
Far-right groups have adeptly utilized Instagram to recruit young followers and spread extremist ideologies. Instagram's visual nature and algorithmic design make it susceptible to these activities. [12] Far-right influencers often post aesthetically pleasing images interwoven with subtle far-right symbols and messages. For instance, women influencers play a key role by blending personal lifestyle content with right-wing hashtags and symbols such as the Black Sun, which carry deeper ideological meanings for those aware of their significance. [13] Instagram's algorithmic recommendations gradually expose users to more extremist content, fostering a sense of insider knowledge and belonging within far-right communities. This method creates filter bubbles and echo chambers in which users repeatedly encounter content that reinforces their beliefs. [13] [14] For example, right-wing groups exploit hashtags such as #heimatverliebt (love of homeland) to attract followers and gradually introduce them to extremist ideologies. Instagram's inadequate, weakly enforced content moderation has allowed groups such as "The British Hand" and the "National Partisan Movement" to recruit young followers with minimal interference. These groups blend mainstream appeal with extremist ideology, using Instagram's visual and social engagement tools to build a community and propagate their messages. [14]
Political entities, such as Germany's far-right party Alternative for Germany (AfD), have also used Instagram's ad features to promote divisive and hateful content. These ads often blame immigrants for societal issues, leveraging emotionally charged imagery, sometimes manipulated by AI, to incite fear and garner support. Despite Meta's policies against hate speech and divisive content, such ads have reached significant audiences, highlighting the challenges in moderating politically charged content on such a large platform. [15] By manipulating platform algorithms and exploiting visual appeal, far-right groups on Instagram have effectively created a recruitment pipeline that subtly guides young users from mainstream content to extremist ideologies, operating in plain sight and often evading content moderation efforts. [13] [14] [15]
Far-right groups have increasingly used TikTok to spread their ideologies, recruit members, and influence political processes, especially targeting young voters. [16] TikTok's user-friendly video tools and personalized content algorithms make it an effective platform for disseminating propaganda. These groups often disguise extremist messages as benign or humorous content, which lowers resistance among younger audiences. [16] Investigations reveal that parties such as Germany's Alternative for Germany (AfD) and Romania's Alliance for the Union of Romanians (AUR) manipulate engagement metrics by purchasing fake followers and likes, enhancing the perceived popularity of their content. This tactic has significantly impacted youth votes in recent European elections. [17] Additionally, the platform has been a conduit for spreading conspiracy theories and misinformation, aligning with pro-Russian narratives and extremist ideologies across various countries. [3] Despite TikTok's assertions of robust policies against harmful content, the platform remains a significant vector for far-right activities.
Under Elon Musk's leadership, X (formerly Twitter) has transformed significantly, particularly regarding its openness to far-right and extremist content. Musk, who purchased Twitter in 2022, has positioned himself as a champion of "free speech," subsequently scaling back the platform's moderation efforts. This shift has led to a noticeable increase in right-wing and extremist content, including antisemitism and misinformation. [18] [19]

A notable instance reflecting Musk's influence on the platform was the announcement of Ron DeSantis' 2024 presidential campaign via Twitter Spaces. This event underscored the platform's strategic pivot towards engaging conservative and far-right audiences. [18] Musk's tenure has been characterized by several controversial decisions, such as reinstating accounts previously banned for spreading misinformation and extremist rhetoric. This leniency has fueled the proliferation of far-right content. [19]

Media Matters' investigations have repeatedly highlighted the presence and impact of extremist content on X. A report from Media Matters revealed that advertisements from major corporations were appearing alongside posts with pro-Nazi and white supremacist content. This led to several large advertisers pulling their ads from the platform, emphasizing the ongoing challenge of content moderation. [20] Following this report, Musk announced a lawsuit against Media Matters, arguing that the report exaggerated the prevalence of extremist content. Texas Attorney General Ken Paxton also launched an investigation into Media Matters, aligning with Musk's stance and further politicizing the issue. [20]

Overall, the changes under Musk's leadership have made X a more hospitable environment for far-right groups, amplifying their reach and influence in political and social spheres. [19]
On websites that allow users to create content, content moderation is the process of distinguishing contributions that are irrelevant, obscene, illegal, harmful, or insulting from useful or informative contributions; it is sometimes used for censorship or the suppression of opposing viewpoints. The purpose of content moderation is to remove problematic content or apply a warning label to it, or to allow users to block and filter content themselves.
The Center for Countering Digital Hate (CCDH), formerly Brixton Endeavors, is a British-American non-profit organization with offices in London and Washington, D.C., whose stated purpose is to stop the spread of online hate speech and disinformation. It campaigns to deplatform people it believes promote hate or misinformation, and to restrict media organisations such as The Daily Wire from advertising. CCDH is a member of the Stop Hate For Profit coalition.
Social network advertising, also known as social media targeting, is a group of terms used to describe forms of online advertising and digital marketing that focus on social networking services. A significant aspect of this type of advertising is that advertisers can take advantage of users' demographic information, psychographics, and other data points to target their ads.
Social media marketing is the use of social media platforms and websites to promote a product or service. Although the terms e-marketing and digital marketing are still dominant in academia, social media marketing is becoming more popular for both practitioners and researchers.
Social media in the fashion industry refers to the use of social media platforms by fashion designers and users to promote and participate in trends. Over the past several decades, the development of social media has increased along with its usage by consumers. The COVID-19 pandemic sharply increased reliance on the virtual sphere for the industry and consumers alike. Social media has created new channels of advertising for fashion houses to reach their target markets. Since its surge in 2009, luxury fashion brands have used social media to build interactions between the brand and its customers to increase awareness and engagement. The emergence of influencers on social media has created a new way of advertising and maintaining customer relationships in the fashion industry. Numerous social media platforms are used to promote fashion trends, with Instagram and TikTok being the most popular among Generations Y and Z. The overall impact of social media on the fashion industry includes the creation of online communities, direct communication between industry leaders and consumers, and criticism of the ideals promoted by the industry through social media.
Viral phenomena or viral sensations are objects or patterns that are able to replicate themselves or convert other objects into copies of themselves when these objects are exposed to them. Analogous to the way in which viruses propagate, the term viral pertains to a video, image, or written content spreading to numerous online users within a short time period. This concept has become a common way to describe how thoughts, information, and trends move into and through a human population.
The Islamic State is a militant group and a former unrecognised proto-state. The group has made sophisticated use of social media as a tool for spreading its message and for international recruitment. The Islamic State is widely known for posting disturbing content, such as beheading videos, on the internet. This propaganda is disseminated through websites and many social media platforms such as Twitter, Facebook, Telegram, and YouTube. By utilizing social media, the organization has garnered a strong following and successfully recruited tens of thousands of followers from around the world. In response to its successful use of social media, many websites and social media platforms have banned accounts and removed content promoting the Islamic State from their platforms.
Online youth radicalization is the process by which a young individual or group comes to adopt increasingly extreme political, social, or religious ideals and aspirations that reject or undermine the status quo and contemporary ideas and expressions of a state, in which they may or may not reside. Online youth radicalization can be either violent or non-violent.
Online hate speech is a type of speech that takes place online with the purpose of attacking a person or a group based on their race, religion, ethnic origin, sexual orientation, disability, and/or gender. Online hate speech is not easily defined, but can be recognized by the degrading or dehumanizing function it serves.
Deplatforming, also called no-platforming, is a form of Internet censorship of an individual or group by preventing them from posting on the platforms they use to share their information and ideas. This typically involves suspension, outright bans, or reducing spread.
TikTok, known in mainland China and Hong Kong as Douyin, is a short-form video-hosting service owned by Chinese internet company ByteDance. It hosts user-submitted videos, which may range in duration from three seconds to 60 minutes. It can be accessed through a mobile app or its website.
Social media use in politics refers to the use of online social media platforms in political processes and activities. Political processes and activities include all activities that pertain to the governance of a country or area. This includes political organization, global politics, political corruption, political parties, and political values. The media's primary duty is to present information and alert the public when events occur. This information may affect what people think and the actions they take. The media can also place pressure on governments to act by signaling a need for intervention or showing that citizens want change.
Social media was used extensively in the 2020 United States presidential election. Both incumbent president Donald Trump and Democratic Party nominee Joe Biden's campaigns employed digital-first advertising strategies, prioritizing digital advertising over print advertising in the wake of the pandemic. Trump had previously utilized his Twitter account to reach his voters and make announcements, both during and after the 2016 election. The Democratic Party nominee Joe Biden also made use of social media networks to express his views and opinions on important events such as the Trump administration's response to the COVID-19 pandemic, the protests following the murder of George Floyd, and the controversial appointment of Amy Coney Barrett to the Supreme Court.
Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading to them developing radicalized extremist political views. Algorithms record user interactions, from likes/dislikes to amount of time spent on posts, to generate endless media aimed to keep users engaged. Through echo chamber channels, the consumer is driven to be more polarized through preferences in media and self-confirmation.
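The feedback loop described above can be illustrated with a toy simulation. This is a deliberately simplified sketch under invented assumptions: the engagement model, the "one step further" offset, and the preference-drift rate are all hypothetical parameters chosen for demonstration, not a description of any real platform's recommender system.

```python
# Toy sketch of an engagement-driven recommendation feedback loop.
# All numbers here are invented for illustration; this models no real platform.

def predicted_engagement(user_pref: float, item_extremity: float) -> float:
    """Stylized assumption: engagement peaks for content slightly more
    extreme than the user's current preference."""
    sweet_spot = user_pref + 0.05  # hypothetical "one step further" pull
    return 1.0 - abs(item_extremity - sweet_spot)

def recommend(user_pref: float, catalog: list[float]) -> float:
    """Pick the catalog item with the highest predicted engagement."""
    return max(catalog, key=lambda item: predicted_engagement(user_pref, item))

def simulate(steps: int = 50, start_pref: float = 0.1) -> float:
    """Run the loop: recommend, consume, drift, repeat."""
    # Catalog of items scored from 0.0 (mainstream) to 1.0 (most extreme).
    catalog = [i / 100 for i in range(101)]
    pref = start_pref
    for _ in range(steps):
        item = recommend(pref, catalog)
        # The user's preference drifts toward consumed content
        # (the echo-chamber / self-confirmation effect described above).
        pref = 0.9 * pref + 0.1 * item
    return pref

if __name__ == "__main__":
    print(f"preference after 50 steps: {simulate():.2f}")
```

Because each recommendation sits slightly beyond the user's current position and each consumed item pulls the preference toward it, the simulated preference ratchets steadily toward more extreme content even though every individual step is small.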
Libs of TikTok is a handle for various far-right and anti-LGBT social-media accounts operated by Chaya Raichik, a former real estate agent. Raichik uses the accounts to repost content created by left-wing and LGBT people on TikTok, and on other social-media platforms, often with hostile, mocking, or derogatory commentary. The accounts promote hate speech and transphobia, and spread false claims, especially relating to medical care of transgender children. The Twitter account, also known by the handle @LibsofTikTok, has over 3.5 million followers as of September 2024 and has become influential among American conservatives and the political right. Libs of TikTok's social-media accounts have received several temporary suspensions and a permanent suspension from TikTok.
The alt-right pipeline is a proposed conceptual model regarding internet radicalization toward the alt-right movement. It describes a phenomenon in which consuming provocative right-wing political content, such as antifeminist or anti-SJW ideas, gradually increases exposure to the alt-right or similar far-right politics. It posits that this interaction takes place due to the interconnected nature of political commentators and online communities, allowing members of one audience or community to discover more extreme groups. This process is most commonly associated with and has been documented on the video platform YouTube, and is largely facilitated by the way algorithms on various social media platforms recommend content similar to what users engage with, which can quickly lead users down rabbit holes. The effects of YouTube's algorithmic bias in radicalizing users have been replicated by one study, although two other studies found little or no evidence of a radicalization process.
Ronald Merle McNutt was a 33-year-old American man and US Army Reserve veteran from New Albany, Mississippi, who died by suicide, shooting himself on a Facebook livestream. Recordings of the livestream went viral on various social media platforms due to their shock value.
In internet slang, rage-baiting is the manipulative tactic of eliciting outrage with the goal of increasing internet traffic, online engagement, revenue, and support. Rage baiting or farming can be used as a tool to increase engagement and attract subscribers, followers, and supporters, which can be financially lucrative. Rage baiting and rage farming manipulate users into responding in kind to offensive, inflammatory headlines, memes, tropes, or comments.
The online video platform TikTok has had a worldwide social, political, and cultural impact since its global launch in September 2016. The platform has rapidly grown its userbase since its launch and surpassed 2 billion downloads in October 2020. It became the world's most popular website, ahead of Google, for the year 2021. TikTok's diverse content ecosystem includes popular niches such as music, fitness, beauty, education, and gaming, which cater to a wide range of audiences.
Antisemitism on social media can manifest in various forms such as emojis, GIFs, memes, comments, and reactions to content. Studies have categorized antisemitic discourse into different types: hate speech, calls for violence, dehumanization, conspiracy theories and Holocaust denial.