Internet manipulation

Internet manipulation is the co-optation of online digital technologies, including algorithms, social bots, and automated scripts, for commercial, social, military, or political purposes. [1] Internet and social media manipulation are the prime vehicles for spreading disinformation due to the importance of digital platforms for media consumption and everyday communication. [2] When employed for political purposes, internet manipulation may be used to steer public opinion, [3] polarise citizens, [4] circulate conspiracy theories, [5] and silence political dissidents. Internet manipulation can also be done for profit, for instance, to harm corporate or political adversaries and improve brand reputation. [6] The term is sometimes also used to describe the selective enforcement of Internet censorship [7] [8] or selective violations of net neutrality. [9]

Issues

Algorithms, echo chambers and polarization

Due to the overabundance of online content, social networking platforms and search engines have leveraged algorithms to tailor and personalize users' feeds based on their individual preferences. However, these algorithms also restrict exposure to differing viewpoints and content, leading to the creation of echo chambers or filter bubbles. [5] [22]
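The feedback loop behind this effect can be illustrated with a toy example. The sketch below is purely hypothetical (it is not any platform's actual ranking code, and the data model is invented for illustration): a feed that scores candidate items by topic overlap with the user's past engagement will, applied repeatedly, surface ever more of the familiar and bury the unfamiliar.

```python
# Hypothetical sketch of preference-based feed ranking (not any real
# platform's algorithm): items matching the user's engagement history
# score higher, so repeated ranking narrows the range of viewpoints shown.

def rank_feed(items, engagement_history):
    """Score each candidate item by topic overlap with past engagement."""
    topic_counts = {}
    for past_item in engagement_history:
        for topic in past_item["topics"]:
            topic_counts[topic] = topic_counts.get(topic, 0) + 1

    def score(item):
        return sum(topic_counts.get(t, 0) for t in item["topics"])

    return sorted(items, key=score, reverse=True)

history = [{"topics": ["politics", "economy"]}, {"topics": ["politics"]}]
candidates = [
    {"id": 1, "topics": ["sports"]},
    {"id": 2, "topics": ["politics"]},
    {"id": 3, "topics": ["politics", "economy"]},
]
feed = rank_feed(candidates, history)
# Items resembling past engagement surface first; unfamiliar topics sink.
```

Each time the user engages with the top of such a feed, the topic counts skew further, which is the self-reinforcing dynamic the filter-bubble literature describes.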

With the help of algorithms, filter bubbles influence users' choices and perception of reality by giving the impression that a particular point of view or representation is widely shared. Following the United Kingdom's 2016 referendum on membership of the European Union and the 2016 United States presidential election, this gained attention as many individuals confessed surprise at results that seemed very distant from their expectations. The range of pluralism is influenced by the personalized individualization of services and the way it diminishes choice. [23] Five manipulative verbal influences have been identified in media texts: self-expression, semantic speech strategies, persuasive strategies, swipe films, and information manipulation. The vocabulary toolkit for speech manipulation includes euphemism, mood vocabulary, situational adjectives, slogans, and verbal metaphors. [24]

Research on echo chambers by Flaxman, Goel, and Rao, [25] Pariser, [26] and Grömping [27] suggests that use of social media and search engines tends to increase the ideological distance among individuals.

Comparisons between online and offline segregation have indicated that segregation tends to be higher in face-to-face interactions with neighbors, co-workers, or family members, [28] and reviews of existing research have indicated that the available empirical evidence does not support the most pessimistic views about polarization. [29] A 2015 study suggested that individuals' own choices, more than algorithmic filtering, limit exposure to a diverse range of content. [30] While algorithms may not be causing polarization, they could amplify it, representing a significant component of the new information landscape. [31]

Research and use by intelligence and military agencies

Some of the leaked JTRIG operation methods/techniques

The Joint Threat Research Intelligence Group (JTRIG), a unit of the Government Communications Headquarters (GCHQ), the British intelligence agency, [32] was revealed as part of the global surveillance disclosures in documents leaked by the former National Security Agency contractor Edward Snowden. [33] Its mission scope includes using "dirty tricks" to "destroy, deny, degrade [and] disrupt" enemies. [33] [34] Core tactics include injecting false material onto the Internet in order to destroy the reputation of targets, and manipulating online discourse and activism. Methods used include posting material to the Internet and falsely attributing it to someone else, pretending to be a victim of the individual whose reputation is intended to be destroyed, and posting "negative information" on various forums. [35]

Known as "Effects" operations, the work of JTRIG had become a "major part" of GCHQ's operations by 2010. [33] The unit's online propaganda efforts (named "Online Covert Action"[citation needed]) utilize "mass messaging" and the "pushing [of] stories" via Twitter, Flickr, Facebook and YouTube. [33] JTRIG has also run online "false flag" operations against targets, altered photographs on social media sites, and emailed and texted colleagues and neighbours with "unsavory information" about targeted individuals. [33] In June 2015, NSA files published by Glenn Greenwald revealed new details about JTRIG's work covertly manipulating online communities. [36] The disclosures also revealed the technique of "credential harvesting", in which journalists could be used to disseminate information and to identify non-British journalists who, once manipulated, could pass information to the intended target of a secret campaign, perhaps providing access during an interview. [33] It is unknown whether the journalists would be aware that they were being manipulated. [33]

Furthermore, Russia is frequently accused of financing "trolls" to post pro-Russian opinions across the Internet. [37] The Internet Research Agency has become known for employing hundreds of Russians to post propaganda online under fake identities in order to create the illusion of massive support. [38] In 2016, Russia was accused of sophisticated propaganda campaigns to spread fake news, with the goal of punishing Democrat Hillary Clinton and helping Republican Donald Trump during the 2016 presidential election, as well as undermining faith in American democracy. [39] [40] [41]

In a 2017 report, [42] Facebook publicly stated that its site had been exploited by governments to manipulate public opinion in other countries, including during the presidential elections in the US and France. [17] [43] [44] It identified three main components of an information operations campaign: targeted data collection, content creation, and false amplification. These include stealing and exposing information that is not public; spreading stories, false or real, to third parties through fake accounts; and coordinating fake accounts to manipulate political discussion, for example by amplifying some voices while suppressing others. [45] [46]

In politics

In 2016, Andrés Sepúlveda disclosed that he had manipulated public opinion to rig elections in Latin America. According to him, with a budget of $600,000 he led a team of hackers that stole campaign strategies, manipulated social media to create false waves of enthusiasm and derision, and installed spyware in opposition offices to help Enrique Peña Nieto, a right-of-center candidate, win the election. [47] [48]

In the run-up to India's 2014 elections, both the Bharatiya Janata Party (BJP) and the Congress party were accused of hiring "political trolls" to talk favourably about them on blogs and social media. [37]

The Chinese government is also believed to run a so-called "50-cent army" (a reference to how much they are said to be paid) and the "Internet Water Army" to reinforce favourable opinion towards it and the Chinese Communist Party (CCP) as well as to suppress dissent. [37] [49]

In December 2014, the Ukrainian information ministry was launched to counter Russian propaganda. One of its first tasks was the creation of social media accounts (also known as the i-Army) that amassed friends while posing as residents of eastern Ukraine. [37] [50]

In October 2018, Twitter suspended a number of bot accounts that appeared to be spreading pro-Saudi tweets about the disappearance of the Saudi dissident journalist Jamal Khashoggi. [51]

A report by Mediapart claimed that the UAE, through a secret-services agent named Mohammed, used the Switzerland-based firm Alp Services to run manipulation campaigns against Emirati opponents. Alp Services' head, Mario Brero, used fictitious accounts that published fake articles under pseudonyms to attack Qatar and Muslim Brotherhood networks in Europe. The UAE assigned Alp to publish at least 100 articles per year critical of Qatar. [52]

In business and marketing

Trolling and other applications

Hackers, hired professionals and private citizens have all been reported to engage in internet manipulation using software, including Internet bots such as social bots, votebots and clickbots. [53] In April 2009, Internet trolls from 4chan voted Christopher Poole, the site's founder, the world's most influential person of 2008, with 16,794,368 votes in an open Internet poll conducted by Time magazine. [54] The results were questioned even before the poll completed, as automated voting programs and manual ballot stuffing were used to influence the vote. [55] [56] [57] 4chan's interference with the vote seemed increasingly likely when it was found that reading the first letter of the first 21 candidates in the poll spelled out a phrase containing two 4chan memes: "Marblecake. Also, The Game". [58]
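One reason such automated voting is detectable at all is that scripts behave unlike humans. The sketch below is a hypothetical illustration (not Time's actual safeguards, and the threshold is an invented parameter) of one simple signal: votebots tend to act at near-constant intervals, while human voting times are irregular.

```python
# Hypothetical votebot-detection heuristic (illustrative only): flag a
# voter whose inter-vote intervals are suspiciously regular, since
# automated scripts often fire on a near-fixed schedule.

from statistics import pstdev

def looks_automated(vote_timestamps, max_jitter=0.5):
    """Return True if the gaps between votes are nearly constant."""
    intervals = [b - a for a, b in zip(vote_timestamps, vote_timestamps[1:])]
    if len(intervals) < 3:
        return False  # too few votes to judge
    return pstdev(intervals) < max_jitter  # low spread => metronomic

bot_votes = [0.0, 2.0, 4.0, 6.0, 8.0]        # one vote every 2 seconds
human_votes = [0.0, 7.3, 11.1, 30.2, 42.9]   # irregular timing
```

Real anti-abuse systems combine many such signals (IP reputation, account age, browser fingerprints); timing regularity alone is easy for a sophisticated bot to evade by adding random delays.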

Jokesters and politically oriented hacktivists may share sophisticated knowledge of how to manipulate the Web and social media. [59]

Countermeasures

Wired noted that nation-state rules such as compulsory registration and threats of punishment are not adequate measures to combat the problem of online bots. [60]

To guard against prior ratings influencing perception, several websites, such as Reddit, have taken steps such as hiding the vote count for a specified time. [16]
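The mechanics of this countermeasure are simple. The following is a minimal sketch under an assumed post model (the field names and one-hour window are illustrative, not Reddit's implementation): the score is simply withheld until a fixed time after posting, so early votes cannot anchor later readers' judgments.

```python
# Minimal sketch of score-hiding (assumed data model, not Reddit's code):
# a post's net score is withheld for a fixed window after creation.

import time

HIDE_WINDOW_SECONDS = 3600  # e.g. hide scores for the first hour

def displayed_score(post, now=None):
    """Return the net score, or None while the score is still hidden."""
    now = now if now is not None else time.time()
    if now - post["created_at"] < HIDE_WINDOW_SECONDS:
        return None  # UI renders this as "score hidden"
    return post["upvotes"] - post["downvotes"]

post = {"created_at": 0, "upvotes": 10, "downvotes": 3}
displayed_score(post, now=100)    # -> None (still inside the window)
displayed_score(post, now=7200)   # -> 7
```

Votes are still recorded during the window; only their display is deferred, which preserves ranking data while blunting the bandwagon effect.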

Other potential measures under discussion include flagging posts as likely satire or false. [61] For instance, in December 2016 Facebook announced that disputed articles would be marked with the help of users and outside fact-checkers. [62] The company is seeking ways to identify 'information operations' and fake accounts, and suspended 30,000 accounts before the presidential election in France in a strike against information operations. [17]

Tim Berners-Lee, inventor of the World Wide Web, considers putting a few companies in charge of deciding what is or is not true a risky proposition, and states that openness can make the web more truthful. As an example he points to Wikipedia, which, while not perfect, allows anyone to edit; the key to its success is not just the technology but also the governance of the site, namely its countless volunteers and its established ways of determining what is or is not true. [63]

Furthermore, various kinds of software may be used to combat the problem, such as fact-checking software, or voluntary browser extensions that record the websites a user reads and, once some consensus has formed that a story is false, deliver corrections to those who read it.[original research?]
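The browser-extension idea can be sketched in a few lines. Everything below is an assumption made for illustration (the class, its data model, and the notion of a "consensus verdict" feed are hypothetical, not an existing extension's API): the extension logs visited URLs locally, and when a URL later receives a "false" verdict, it surfaces a correction to that user.

```python
# Hypothetical sketch of the correction-delivering extension described
# above. The data model and verdict source are invented for illustration.

class CorrectionNotifier:
    def __init__(self):
        self.history = []   # URLs the user has visited (stored locally)
        self.debunked = {}  # url -> consensus verdict text

    def record_visit(self, url):
        """Called as the user browses."""
        self.history.append(url)

    def register_verdict(self, url, verdict):
        """Called when a fact-checking consensus marks a story false."""
        self.debunked[url] = verdict

    def pending_corrections(self):
        """Corrections owed to this user: debunked stories they read."""
        return [(u, self.debunked[u]) for u in self.history
                if u in self.debunked]

notifier = CorrectionNotifier()
notifier.record_visit("https://example.org/fake-story")
notifier.record_visit("https://example.org/real-story")
notifier.register_verdict("https://example.org/fake-story",
                          "rated false by fact-checkers")
# pending_corrections() now lists only the debunked story the user read.
```

Keeping the history on the user's own machine, as sketched here, is one way such a tool could deliver corrections without centrally collecting browsing data.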

In addition, Daniel Suarez asks society to value critical, analytic thinking, and suggests education reforms such as the introduction of formal logic as a school discipline and training in media literacy and objective evaluation. [61]

Government responses

According to a study by the Oxford Internet Institute, at least 43 countries around the globe have proposed or implemented regulations specifically designed to tackle different aspects of influence campaigns, including fake news, social media abuse, and election interference. [64]

Germany

In Germany, during the period preceding the elections of September 2017, all major political parties save the AfD publicly announced that they would not use social bots in their campaigns, and committed to strongly condemning such use of online bots.

Moves towards regulation of social media have also been made: in early 2017, three German states (Hessen, Bavaria, and Saxony-Anhalt) proposed a law under which social media users could face prosecution for violating a platform's terms and conditions. For example, the use of a pseudonym on Facebook, or the creation of a fake account, would be punishable by up to one year's imprisonment. [65]

Italy

In early 2018, the Italian communications authority AGCOM published a set of guidelines on its website ahead of the elections in March of that year. The six main topics are: [66]

  1. Equal treatment of political subjects
  2. Transparency of political propaganda
  3. Illicit content and activities whose dissemination is forbidden (i.e. polls)
  4. Social media accounts of public administrations
  5. Prohibition of political propaganda on election day and the day before
  6. Recommendations for stronger fact-checking services

France

In November 2018, a law against the manipulation of information was passed in France. The law stipulates that during campaign periods: [67]

  • Digital platforms must disclose the amount paid for ads and the names of their authors. Past a certain traffic threshold, platforms are required to have a representative present in France, and must publish the algorithms used.
  • An interim judge may issue a legal injunction to swiftly halt the spread of fake news. 'Fake news' must satisfy the following: (a) it must be manifest; (b) it must be disseminated on a massive scale; and (c) it must lead to a disturbance of the peace or compromise the outcome of an election.

Malaysia

In April 2018, the Malaysian parliament passed the Anti-Fake News Act, which defined fake news as 'news, information, data and reports which is or are wholly or partly false.' [68] It applied to citizens and to those working at digital publications, with imprisonment of up to six years possible. However, the law was repealed after heavy criticism in August 2018. [69]

Kenya

In May 2018, President Uhuru Kenyatta signed into law the Computer and Cybercrimes Bill, which criminalised cybercrimes including cyberbullying and cyberespionage. A person who "intentionally publishes false, misleading or fictitious data or misinforms with intent that the data shall be considered or acted upon as authentic" is subject to fines and up to two years' imprisonment. [70]

Research

German chancellor Angela Merkel has called on the Bundestag to deal with the possibilities of political manipulation by social bots and fake news. [71]

Sources

This article incorporates text from a free content work, licensed under CC BY-SA 3.0 IGO (license statement/permission). Text taken from World Trends in Freedom of Expression and Media Development Global Report 2017/2018, 202, University of Oxford, UNESCO.


References

  1. Woolley, Samuel; Howard, Philip N. (2019). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press. ISBN   978-0190931414.
  2. Diaz Ruiz, Carlos (2023-10-30). "Disinformation on digital media platforms: A market-shaping approach". New Media & Society. doi: 10.1177/14614448231207644 . ISSN   1461-4448. S2CID   264816011.
  3. Marchal, Nahema; Neudert, Lisa-Maria (2019). "Polarisation and the use of technology in political campaigns and communication" (PDF). European Parliamentary Research Service.
  4. Kreiss, Daniel; McGregor, Shannon C (2023-04-11). "A review and provocation: On polarization and platforms". New Media & Society. 26: 556–579. doi: 10.1177/14614448231161880 . ISSN   1461-4448. S2CID   258125103.
  5. Diaz Ruiz, Carlos; Nilsson, Tomas (2023). "Disinformation and Echo Chambers: How Disinformation Circulates on Social Media Through Identity-Driven Controversies". Journal of Public Policy & Marketing. 42 (1): 18–35. doi:10.1177/07439156221103852. ISSN   0743-9156. S2CID   248934562.
  6. Di Domenico, Giandomenico; Ding, Yu (2023-10-23). "Between Brand attacks and broader narratives: how direct and indirect misinformation erode consumer trust". Current Opinion in Psychology. 54: 101716. doi: 10.1016/j.copsyc.2023.101716 . ISSN   2352-250X. PMID   37952396. S2CID   264474368.
  7. Castells, Manuel (2015-06-04). Networks of Outrage and Hope: Social Movements in the Internet Age. John Wiley & Sons. ISBN   9780745695792 . Retrieved 4 February 2017.
  8. "Condemnation over Egypt's internet shutdown". Financial Times. Retrieved 4 February 2017.
  9. "Net neutrality wins in Europe – a victory for the internet as we know it". ZME Science. 31 August 2016. Retrieved 4 February 2017.
  10. Thompson, Paul (2004). Trevisani, Dawn A.; Sisti, Alex F. (eds.). Cognitive hacking and intelligence and security informatics (PDF). Defense and Security. Enabling Technologies for Simulation Science VIII. Vol. 5423. Orlando, Florida, United States. pp. 142–151. Bibcode:2004SPIE.5423..142T. doi:10.1117/12.554454. S2CID   18907972. Archived from the original (PDF) on 5 February 2017. Retrieved 4 February 2017.
  11. Cybenko, G.; Giani, A.; Thompson, P. (2002). "Cognitive hacking: a battle for the mind". Computer. 35 (8): 50–56. doi:10.1109/mc.2002.1023788 . Retrieved 2023-11-02.
  12. Bastick, Zach (2021). "Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation". Computers in Human Behavior. 116 (106633): 106633. doi: 10.1016/j.chb.2020.106633 .
  13. Berger, Jonah; Milkman, Katherine L (April 2012). "What Makes Online Content Viral?" (PDF). Journal of Marketing Research. 49 (2): 192–205. doi:10.1509/jmr.10.0353. S2CID   29504532.
  14. Hoff, Carsten Klotz von (6 April 2012). "Manipulation 2.0 – Meinungsmache via Facebook" (in German). Der Freitag. Retrieved 4 February 2017.
  15. Golda, Christopher P. (2015). Informational Social Influence and the Internet: Manipulation in a Consumptive Society . Retrieved 4 February 2017.
  16. "Moderators: New subreddit feature – comment scores may be hidden for a defined time period after posting • /r/modnews". reddit. 29 April 2013. Retrieved 4 February 2017.
  17. Solon, Olivia (27 April 2017). "Facebook admits: governments exploited us to spread propaganda". The Guardian. Retrieved 30 April 2017.
  18. "Die Scheinwelt von Facebook & Co. (German-language documentary by the ZDF)" (in German). Retrieved 4 February 2017.
  19. "Ich habe nur gezeigt, dass es die Bombe gibt". Das Magazin. 3 December 2016. Retrieved 30 April 2017.
  20. Beuth, Patrick (6 December 2016). "US-Wahl: Big Data allein entscheidet keine Wahl". Die Zeit. Retrieved 30 April 2017.
  21. "The Data That Turned the World Upside Down". Motherboard. 2017-01-28. Retrieved 30 April 2017.
  22. Sacasas, L. M. (2020). "The Analog City and the Digital City". The New Atlantis (61): 3–18. ISSN   1543-1215. JSTOR   26898497.
  23. World Trends in Freedom of Expression and Media Development Global Report 2017/2018. UNESCO. 2018. p. 202.
  24. Kalinina, Anna V.; Yusupova, Elena E.; Voevoda, Elena V. (2019-05-18). "Means of Influence on Public Opinion in Political Context: Speech Manipulation in the Media". Media Watch. 10 (2). doi:10.15655/mw/2019/v10i2/49625. ISSN   2249-8818. S2CID   182112133.
  25. Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly 80 (S1): 298–320.
  26. Pariser, Eli. 2011. The filter bubble: What the Internet is hiding from you. Penguin UK. Available at https://books.google.co.uk/?hl=en&lr=&oi=fnd&pg=PT3&dq=eli+pariser+filter&ots=g3PrCprRV2&sig=_FI8GISLrm3WNoMKMlqSTJNOFw Accessed 20 May 2017.
  27. Grömping, Max (2014). "Echo Chambers". Asia Pacific Media Educator. 24: 39–59. doi:10.1177/1326365X14539185. S2CID   154399136.
  28. Gentzkow, Matthew, and Jesse M. Shapiro. 2011. Ideological segregation online and offline. The Quarterly Journal of Economics 126 (4): 1799–1839.
  29. Zuiderveen Borgesius, Frederik J., Damian Trilling, Judith Moeller, Balázs Bodó, Claes H. de Vreese, and Natali Helberger. 2016. Should We Worry about Filter Bubbles? Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2758126. Accessed 20 May 2017
  30. Bakshy, Eytan; Messing, Solomon; Adamic, Lada A. (2015-06-05). "Exposure to ideologically diverse news and opinion on Facebook". Science. 348 (6239): 1130–1132. Bibcode:2015Sci...348.1130B. doi: 10.1126/science.aaa1160 . ISSN   0036-8075. PMID   25953820. S2CID   206632821.
  31. Hargittai. 2015. Why doesn't Science publish important methods info prominently? Crooked Timber. Available at http://crookedtimber.org/2015/05/07/why-doesnt-science-publish-important-methods-info-prominently/. Accessed 20 May 2017.
  32. "Snowden leaks: GCHQ 'attacked Anonymous' hackers". BBC News. BBC. 5 February 2014. Retrieved 7 February 2014.
  33. "Snowden Docs: British Spies Used Sex and 'Dirty Tricks'". NBC News. 7 February 2014. Retrieved 7 February 2014.
  34. Greenwald, Glenn (2014-02-24). "How Covert Agents Infiltrate the Internet to Manipulate, Deceive, and Destroy Reputations". The Intercept. Contains the DISRUPTION Operational Playbook slide presentation by GCHQ.
  35. Greenwald, Glenn (2014-02-24). "How Covert Agents Infiltrate the Internet to Manipulate, Deceive, and Destroy Reputations". The Intercept. Retrieved 4 February 2017.
  36. Greenwald, Glenn; Fishman, Andrew (2015-06-22). "Controversial GCHQ Unit Engaged in Domestic Law Enforcement, Online Propaganda, Psychology Research". The Intercept.
  37. Shearlaw, Maeve (2 April 2015). "From Britain to Beijing: how governments manipulate the internet". The Guardian. Retrieved 4 February 2017.
  38. Chen, Adrian (2 June 2015). "The Agency". The New York Times. Retrieved 30 April 2017.
  39. Watts, Clint; Weisburd, Andrew (6 August 2016). "Trolls for Trump – How Russia Dominates Your Twitter Feed to Promote Lies (And, Trump, Too)". The Daily Beast. Retrieved 24 November 2016.
  40. "Russian propaganda effort likely behind flood of fake news that preceded election". PBS NewsHour. Associated Press. 25 November 2016. Retrieved 26 November 2016.
  41. "Russian propaganda campaign reportedly spread 'fake news' during US election". Nine News. Agence France-Presse. 26 November 2016. Retrieved 26 November 2016.
  42. "Information Operations and Facebook" (PDF). 27 April 2017. Archived from the original (PDF) on 8 January 2022. Retrieved 30 April 2017 via Il Sole 24 Ore.
  43. Reinbold, Fabian (2017-04-28). "Konzern dokumentiert erstmals Probleme: Geheimdienste nutzen Facebook zur Desinformation" [Company documents problems for the first time: intelligence agencies use Facebook for disinformation] (in German). SPIEGEL ONLINE. Retrieved 30 April 2017.
  44. "Report: Facebook will nicht mehr für Propaganda missbraucht werden" (in German). WIRED Germany. 28 April 2017. Retrieved 30 April 2017.
  45. "Facebook targets coordinated campaigns spreading fake news". CNET. Retrieved 30 April 2017.
  46. "Facebook, for the first time, acknowledges election manipulation". CBS News. 28 April 2017. Retrieved 30 April 2017.
  47. "How to Hack an Election". Bloomberg.com. Bloomberg. Retrieved 22 January 2017.
  48. "Man claims he rigged elections in most Latin American countries over 8 years". The Independent. 2 April 2016. Retrieved 22 January 2017.
  49. MacKinnon, Rebecca (2012). Consent of the networked: the world-wide struggle for Internet freedom. New York: Basic Books. ISBN   978-0-465-02442-1.
  50. "Ukraine's new online army in media war with Russia". BBC. Retrieved 4 February 2017.
  51. "Twitter pulls down bot network that pushed pro-Saudi talking points about disappeared journalist". NBC News. 19 October 2018.
  52. "Leaked data shows extent of UAE's meddling in France". MediaPart. 4 March 2023. Retrieved 4 March 2023.
  53. Gorwa, Robert; Guilbeault, Douglas (2018-08-10). "Unpacking the Social Media Bot: A Typology to Guide Research and Policy". Policy & Internet. arXiv:1801.06863. doi:10.1002/poi3.184. S2CID   51877148.
  54. "The World's Most Influential Person Is..." TIME. April 27, 2009. Archived from the original on April 28, 2009. Retrieved September 2, 2009.
  55. Heater, Brian (April 27, 2009). "4Chan Followers Hack Time's 'Influential' Poll". PC Magazine. Archived from the original on April 30, 2009. Retrieved April 27, 2009.
  56. Schonfeld, Erick (April 21, 2009). "4Chan Takes Over The Time 100". Washington Post. Retrieved April 27, 2009.
  57. "moot wins, Time Inc. loses". Musicmachinery.com. April 27, 2009. Archived from the original on May 3, 2009. Retrieved September 2, 2009.
  58. Reddit Top Links. "Marble Cake Also the Game [PIC]". Buzzfeed.com. Archived from the original on April 15, 2009. Retrieved September 2, 2009.
  59. Maslin, Janet (31 May 2012). "'We Are Anonymous' by Parmy Olson". The New York Times. Retrieved 4 February 2017.
  60. "Debatte um "Social Bots": Blinder Aktionismus gegen die eigene Hilflosigkeit" (in German). WIRED Germany. 23 January 2017. Retrieved 4 February 2017.
  61. "How technology is changing the way we think – Daniel Suarez, Jan Kalbitzer & Frank Rieger". YouTube. 7 December 2016. Retrieved 30 April 2017.
  62. Jamieson, Amber; Solon, Olivia (15 December 2016). "Facebook to begin flagging fake news in response to mounting criticism". The Guardian. Retrieved 4 February 2017.
  63. Finley, Klint (2017-04-04). "Tim Berners-Lee, Inventor of the Web, Plots a Radical Overhaul of His Creation". Wired. Retrieved 4 April 2017.
  64. Bradshaw, Samantha; Neudert, Lisa-Maria; Howard, Philip N. (2018). "Government Responses to Malicious Use of Social Media". NATO StratCom COE. ISBN   978-9934-564-31-4.
  65. Reuter, Markus (17 January 2017). "Hausfriedensbruch 4.0: Zutritt für Fake News und Bots strengstens verboten". Netzpolitik. Retrieved 24 October 2019.
  66. Bellezza, Marco; Frigerio, Filippo (6 February 2018). "ITALY: First Attempt to (Self)Regulate the Online Political Propaganda".
  67. "Against information manipulation". Gouvernement.fr. Retrieved 24 October 2019.
  68. Menon, Praveen (2 April 2018). "Malaysia outlaws 'fake news'; sets jail of up to six years". Reuters. Retrieved 24 October 2019.
  69. Yeung, Jessie (17 August 2018). "Malaysia repeals controversial fake news law". CNN. Retrieved 24 October 2019.
  70. Schwartz, Arielle (16 May 2018). "Kenya signs bill criminalising fake news". Mail & Guardian. Retrieved 24 October 2019.
  71. "Bundestagsdebatte: Merkel schimpft über Internet-Trolle". Sueddeutsche.de (in German). Süddeutsche Zeitung. 1 November 2016. Retrieved 4 February 2017.