Disinformation is misleading content deliberately spread to deceive people, [1] [2] or to secure economic or political gain and which may cause public harm. [3] Disinformation is an orchestrated adversarial activity in which actors employ strategic deceptions and media manipulation tactics to advance political, military, or commercial goals. [4] Disinformation is implemented through attacks that "weaponize multiple rhetorical strategies and forms of knowing—including not only falsehoods but also truths, half-truths, and value judgements—to exploit and amplify culture wars and other identity-driven controversies." [5]
In contrast, misinformation refers to inaccuracies that stem from inadvertent error. [6] Misinformation can be used to create disinformation when known misinformation is purposefully and intentionally disseminated. [7] "Fake news" has sometimes been categorized as a type of disinformation, but scholars have advised against using the two terms interchangeably, and against using "fake news" at all in academic writing, since politicians have weaponized it to describe any unfavorable news coverage or information. [8]
The English word disinformation comes from the application of the Latin prefix dis- to information, yielding the meaning "reversal or removal of information". The rarely used word had appeared with this usage in print at least as far back as 1887. [11] [12] [13] [14]
Some consider it a loan translation of the Russian дезинформация, transliterated as dezinformatsiya, [15] [1] [2] apparently derived from the title of a KGB black propaganda department. [16] [1] [17] [15] Soviet planners in the 1950s defined disinformation as "dissemination (in the press, on the radio, etc.) of false reports intended to mislead public opinion." [18]
Disinformation first made an appearance in dictionaries in 1985, specifically, Webster's New College Dictionary and the American Heritage Dictionary. [19] In 1986, the term disinformation was not defined in Webster's New World Thesaurus or New Encyclopædia Britannica. [15] After the Soviet term became widely known in the 1980s, native speakers of English broadened the term as "any government communication (either overt or covert) containing intentionally false and misleading material, often combined selectively with true information, which seeks to mislead and manipulate either elites or a mass audience." [2]
By 1990, use of the term disinformation had fully established itself in the English language within the lexicon of politics. [20] By 2001, the term disinformation had come to be known as simply a more civil phrase for saying someone was lying. [21] Stanley B. Cunningham wrote in his 2002 book The Idea of Propaganda that disinformation had become pervasively used as a synonym for propaganda. [22]
The Shorenstein Center at Harvard University defines disinformation research as an academic field that studies "the spread and impacts of misinformation, disinformation, and media manipulation," including "how it spreads through online and offline channels, and why people are susceptible to believing bad information, and successful strategies for mitigating its impact". [23] According to a 2023 research article published in New Media & Society, [4] disinformation circulates on social media through deception campaigns implemented in multiple ways including: astroturfing, conspiracy theories, clickbait, culture wars, echo chambers, hoaxes, fake news, propaganda, pseudoscience, and rumors.
Activities that operationalize disinformation campaigns online [4]

| Term | Description |
|---|---|
| Astroturfing | A centrally coordinated campaign that mimics grassroots activism by making participants pretend to be ordinary citizens |
| Conspiracy theories | Rebuttals of official accounts that propose alternative explanations in which individuals or groups act in secret |
| Clickbait | The deliberate use of misleading headlines and thumbnails to increase online traffic for profit or popularity |
| Culture wars | A phenomenon in which multiple groups of people, who hold entrenched values, attempt to steer public policy contentiously |
| Doxxing | A form of online harassment that breaches privacy boundaries by releasing information intended to cause physical or online harm to a target |
| Echo chamber | An epistemic environment in which participants encounter beliefs and opinions that coincide with their own |
| Hoax | News in which false facts are presented as legitimate |
| Fake news | Genre: the deliberate creation of pseudo-journalism. Label: the instrumentalization of the term to delegitimize news media |
| Greenwashing | Deceptive communication that makes people believe a company is environmentally responsible when it is not |
| Propaganda | Organized mass communication, on a hidden agenda, with a mission to conform belief and action by circumventing individual reasoning |
| Pseudoscience | Accounts that claim the explanatory power of science and borrow its language and legitimacy, but diverge substantially from its quality criteria |
| Rumors | Unsubstantiated news stories that circulate while not corroborated or validated |
| Trolling | Networked groups of digital influencers that operate 'click armies' designed to mobilize public sentiment |
| Urban legends | Moral tales featuring durable stories of intruders incurring boundary transgressions and their dire consequences |

Note: This is an adaptation of Table 2 from Disinformation on Digital Media Platforms: A Market Shaping Approach, by Carlos Diaz Ruiz, used under CC BY 4.0 / Adapted from the original.
To distinguish between similar terms, including misinformation and malinformation, scholars broadly agree on the following definitions: (1) disinformation is the strategic dissemination of false information with the intention to cause public harm; [24] (2) misinformation is the unintentional spread of false information; and (3) malinformation is factual information disseminated with the intention to cause harm. [25] [26] Collectively, these terms are abbreviated 'DMMI'. [27]
In 2019, Camille François devised the "ABC" framework for understanding different modalities of online disinformation: manipulative Actors, deceptive Behavior, and harmful Content.
In 2020, the Brookings Institution proposed amending this framework to include Distribution, defined by the "technical protocols that enable, constrain, and shape user behavior in a virtual space". [29] Similarly, the Carnegie Endowment for International Peace proposed adding Degree ("distribution of the content ... and the audiences it reaches") and Effect ("how much of a threat a given case poses"). [30]
Whether and to what degree disinformation and propaganda overlap is subject to debate. Some (like the U.S. Department of State) define propaganda as the use of non-rational arguments to either advance or undermine a political ideal, and treat disinformation as an alternative name for undermining propaganda, [31] while others consider them separate concepts altogether. [32] One popular distinction holds that disinformation also describes politically motivated messaging designed explicitly to engender public cynicism, uncertainty, apathy, distrust, and paranoia, all of which disincentivize citizen engagement and mobilization for social or political change. [18]
Disinformation is the label often given to foreign information manipulation and interference (FIMI). [33] [34] Studies on disinformation are often concerned with the content of activity whereas the broader concept of FIMI is more concerned with the "behaviour of an actor" that is described through the military doctrine concept of tactics, techniques, and procedures (TTPs). [33]
Disinformation is primarily carried out by government intelligence agencies, but has also been used by non-governmental organizations and businesses. [35] Front groups are a form of disinformation, as they mislead the public about their true objectives and who their controllers are. [36] Most recently, disinformation has been deliberately spread through social media in the form of "fake news", disinformation masked as legitimate news articles and meant to mislead readers or viewers. [37] Disinformation may include distribution of forged documents, manuscripts, and photographs, or spreading dangerous rumours and fabricated intelligence. Use of these tactics can lead to blowback, however, causing unintended consequences such as defamation lawsuits or damage to the dis-informer's reputation. [36]
Russian disinformation campaigns have occurred in many countries. [41] [42] [43] [44] For example, disinformation campaigns led by Yevgeny Prigozhin have been reported in several African countries. [45] [46] Russia, however, denies that it uses disinformation to influence public opinion. [47]
Russian campaigns often aim to disrupt domestic politics within Europe and the United States in an attempt to weaken the West, reflecting Russia's long-standing commitment to fighting back against "Western imperialism" and shifting the balance of world power toward Russia and her allies. According to the Voice of America, Russia seeks to promote American isolationism, border security concerns and racial tensions within the United States through its disinformation campaigns. [48] [49] [50]

The United States Intelligence Community appropriated use of the term disinformation in the 1950s from the Russian dezinformatsiya, and began to use similar strategies [61] [62] during the Cold War and in conflict with other nations. [17] The New York Times reported in 2000 that during the CIA's effort to substitute Mohammed Reza Pahlavi for then-Prime Minister of Iran Mohammad Mossadegh, the CIA placed fictitious stories in the local newspaper. [17] Reuters documented how, subsequent to the 1979 Soviet Union invasion of Afghanistan during the Soviet–Afghan War, the CIA put false articles in newspapers of Islamic-majority countries, inaccurately stating that Soviet embassies had "invasion day celebrations". [17] Reuters noted a former U.S. intelligence officer said they would attempt to gain the confidence of reporters and use them as secret agents, to affect a nation's politics by way of their local media. [17]
In October 1986, the term gained increased currency in the U.S. when it was revealed that two months previously, the Reagan Administration had engaged in a disinformation campaign against then-leader of Libya, Muammar Gaddafi. [63] White House representative Larry Speakes said reports of a planned attack on Libya as first broken by The Wall Street Journal on August 25, 1986, were "authoritative", and other newspapers including The Washington Post then wrote articles saying this was factual. [63] U.S. State Department representative Bernard Kalb resigned from his position in protest over the disinformation campaign, and said: "Faith in the word of America is the pulse beat of our democracy." [63]
The executive branch of the Reagan administration kept watch on disinformation campaigns through three yearly publications by the Department of State: Active Measures: A Report on the Substance and Process of Anti-U.S. Disinformation and Propaganda Campaigns (1986); Report on Active Measures and Propaganda, 1986–87 (1987); and Report on Active Measures and Propaganda, 1987–88 (1989). [61]
According to a report by Reuters, the United States ran a propaganda campaign to spread disinformation about the Sinovac Chinese COVID-19 vaccine, including using fake social media accounts to spread the disinformation that the Sinovac vaccine contained pork-derived ingredients and was therefore haram under Islamic law. [64] Reuters said the ChinaAngVirus disinformation campaign was designed to "counter what it perceived as China's growing influence in the Philippines" and was prompted by the "[fear] that China's COVID diplomacy and propaganda could draw other Southeast Asian countries, such as Cambodia and Malaysia, closer to Beijing". [64] The campaign was also described as "payback for Beijing's efforts to blame Washington for the pandemic". [65] The campaign primarily targeted people in the Philippines and used a social media hashtag for "China is the virus" in Tagalog. [64] The campaign ran from 2020 to mid-2021. [64] The primary contractor for the U.S. military on the project was General Dynamics IT, which received $493 million for its role. [64]
Pope Francis condemned disinformation in a 2016 interview, after being made the subject of a fake news website during the 2016 U.S. election cycle which falsely claimed that he supported Donald Trump. [66] [67] [68] He said the worst thing the news media could do was spread disinformation. He said the act was a sin, [69] [70] comparing those who spread disinformation to individuals who engage in coprophilia. [71] [72]
In a contribution to the 2014 book Military Ethics and Emerging Technologies, writers David Danks and Joseph H. Danks discuss the ethical implications of using disinformation as a tactic during information warfare. [73] They note there has been a significant degree of philosophical debate over the issue as related to the ethics of war and use of the technique. [73] The writers describe a position whereby the use of disinformation is occasionally allowed, but not in all situations. [73] Typically the ethical test is whether the disinformation was carried out in good faith and is acceptable according to the rules of war. [73] By this test, the World War II tactic of placing fake inflatable tanks in visible locations on Pacific islands, to create the false impression of a larger military force, would be considered ethically permissible. [73] Conversely, disguising a munitions plant as a healthcare facility in order to avoid attack would fall outside the bounds of acceptable use of disinformation during war. [73]
Research related to disinformation studies is increasing as an applied area of inquiry. [74] [75] Advocates have called for disinformation to be formally classified as a cybersecurity threat, citing its proliferation on social networking sites. [76] Among social media websites, Facebook and Twitter showed the most activity in terms of active disinformation campaigns. Reported techniques included the use of bots to amplify hate speech, the illegal harvesting of data, and paid trolls to harass and threaten journalists. [77]
Whereas disinformation research focuses primarily on how actors orchestrate deceptions on social media, primarily via fake news, new research investigates how people take what started as deceptions and circulate them as their personal views. [5] As a result, research shows that disinformation can be conceptualized as a program that encourages engagement in oppositional fantasies (i.e., culture wars), through which disinformation circulates as rhetorical ammunition for never-ending arguments. [5] As disinformation entangles with culture wars, identity-driven controversies constitute a vehicle through which disinformation disseminates on social media. This means that disinformation thrives, not despite raucous grudges but because of them. The reason is that controversies provide fertile ground for never-ending debates that solidify points of view. [5]
Scholars have pointed out that disinformation is not only a foreign threat, as domestic purveyors of disinformation are also leveraging traditional media outlets such as newspapers, radio stations, and television news media to disseminate false information. [78] Current research suggests right-wing online political activists in the United States may be more likely to use disinformation as a strategy and tactic. [79] Governments have responded with a wide range of policies to address concerns about the potential threats that disinformation poses to democracy; however, there is little agreement in elite policy discourse or academic literature as to what it means for disinformation to threaten democracy, or how different policies might help to counter its negative implications. [80]
There is a broad consensus amongst scholars that there is a high degree of disinformation, misinformation, and propaganda online; however, it is unclear what effect such disinformation has on political attitudes in the public and, therefore, on political outcomes. [81] This conventional wisdom has come mostly from investigative journalists, with a particular rise during the 2016 U.S. election: some of the earliest work came from Craig Silverman at Buzzfeed News. [82] Cass Sunstein supported this in #Republic, arguing that the internet would become rife with echo chambers and informational cascades of misinformation leading to a highly polarized and ill-informed society. [83]
Research after the 2016 election found: (1) for 14 percent of Americans social media was their "most important" source of election news; (2) known false news stories "favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times"; (3) the average American adult saw fake news stories, "with just over half of those who recalled seeing them believing them"; and (4) people are more likely to "believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks." [84] Correspondingly, whilst there is wide agreement that the digital spread and uptake of disinformation during the 2016 election was massive and very likely facilitated by foreign agents, there is an ongoing debate on whether all this had any actual effect on the election. For example, a double-blind randomized controlled experiment by researchers from the London School of Economics (LSE) found that exposure to online fake news about either Trump or Clinton had no significant effect on intentions to vote for those candidates. Researchers who examined the influence of Russian disinformation on Twitter during the 2016 US presidential campaign found that exposure to disinformation was (1) concentrated among a tiny group of users, (2) primarily among Republicans, and (3) eclipsed by exposure to legitimate political news media and politicians. Finally, they find "no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior." [85] As such, despite its mass dissemination during the 2016 presidential election, online fake news or disinformation probably did not cost Hillary Clinton the votes needed to secure the presidency. [86]
Research on this topic remains inconclusive; for example, misinformation appears not to significantly change the political knowledge of those exposed to it. [87] There seems to be a higher level of diversity of news sources that users are exposed to on Facebook and Twitter than conventional wisdom would dictate, as well as a higher frequency of cross-spectrum discussion. [88] [89] Other evidence has found that disinformation campaigns rarely succeed in altering the foreign policies of the targeted states. [90]
Research is also challenging because disinformation is meant to be difficult to detect and some social media companies have discouraged outside research efforts. [91] For example, researchers found disinformation made "existing detection algorithms from traditional news media ineffective or not applicable...[because disinformation] is intentionally written to mislead readers...[and] users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy." [91] Facebook, the largest social media company, has been criticized by analytical journalists and scholars for preventing outside research of disinformation. [92] [93] [94] [95]
Researchers have criticized the framing of disinformation as being limited to technology platforms, removed from its wider political context and inaccurately implying that the media landscape was otherwise well-functioning. [96] "The field possesses a simplistic understanding of the effects of media technologies; overemphasizes platforms and underemphasizes politics; focuses too much on the United States and Anglocentric analysis; has a shallow understanding of political culture and culture in general; lacks analysis of race, class, gender, and sexuality as well as status, inequality, social structure, and power; has a thin understanding of journalistic processes; and, has progressed more through the exigencies of grant funding than the development of theory and empirical findings." [97]
Alternative perspectives have been proposed.
The research literature on how disinformation spreads is growing. [81] Studies show that disinformation spread in social media can be classified into two broad stages: seeding and echoing. [5] "Seeding" is when malicious actors strategically insert deceptions, like fake news, into a social media ecosystem, and "echoing" is when the audience disseminates disinformation argumentatively as their own opinions, often by incorporating disinformation into a confrontational fantasy.
Studies have identified four main methods of seeding disinformation online. [81]
Disinformation is amplified online by malpractice in online advertising, especially the machine-to-machine interactions of real-time bidding systems. [112] Online advertising technologies have been used to amplify disinformation due to the financial incentives and monetization of user-generated content and fake news. [100] Lax oversight of the online advertising market can also be exploited to amplify disinformation, including through the use of dark money for political advertising. [113]
Media manipulation refers to orchestrated campaigns in which actors exploit the distinctive features of broadcast mass communication or digital media platforms to mislead, misinform, or create a narrative that advances their interests and agendas.
Fact-checking is the process of verifying the factual accuracy of questioned reporting and statements. Fact-checking can be conducted before or after the text or content is published or otherwise disseminated. Internal fact-checking is such checking done in-house by the publisher to prevent inaccurate content from being published; when the text is analyzed by a third party, the process is called external fact-checking.
Misinformation is incorrect or misleading information. Misinformation can exist without specific malicious intent; disinformation is distinct in that it is deliberately deceptive and propagated. Misinformation can include inaccurate, incomplete, misleading, or false information as well as selective or half-truths. In January 2024, the World Economic Forum identified misinformation and disinformation, propagated by both internal and external interests to "widen societal and political divides", as the most severe global risks within the next two years.
Philip N. Howard is a sociologist and communication researcher who studies the impact of information technologies on democracy and social inequality. He studies how new information technologies are used in both civic engagement and social control in countries around the world. He is Professor of Internet Studies at the Oxford Internet Institute and Balliol College at the University of Oxford. He was Director of the Oxford Internet Institute from March 2018 to March 26, 2021. He is the author of ten books, including New Media Campaigns and The Managed Citizen, The Digital Origins of Dictatorship and Democracy, and Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up. His latest book is Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives.
State-sponsored Internet propaganda is Internet manipulation and propaganda that is sponsored by a state. States have used the Internet, particularly social media, to influence elections, sow distrust in institutions, spread rumors, and spread disinformation, typically using bots to create and spread content. Propaganda is used internally to control populations, and externally to influence other societies.
The propaganda of the Russian Federation promotes views, perceptions or agendas of the government. The media include state-run outlets and online technologies, and may involve using "Soviet-style 'active measures' as an element of modern Russian 'political warfare'". Notably, contemporary Russian propaganda promotes the cult of personality of Vladimir Putin and positive views of Soviet history. Russia has established a number of organizations, such as the Presidential Commission of the Russian Federation to Counter Attempts to Falsify History to the Detriment of Russia's Interests, the Russian web brigades, and others that engage in political propaganda to promote the views of the Russian government.
Post-truth politics, also described as post-factual politics or post-reality politics, amidst varying academic and dictionary definitions of the term, refers to a recent historical period where political culture is marked by public anxiety about what claims can be publicly accepted facts.
Fake news websites are websites on the Internet that deliberately publish fake news—hoaxes, propaganda, and disinformation purporting to be real news—often using social media to drive web traffic and amplify their effect. Unlike news satire, these websites deliberately seek to be perceived as legitimate and taken at face value, often for financial or political gain. Fake news websites monetize their content by exploiting the vulnerabilities of programmatic ad trading, which is a type of online advertising in which ads are traded through machine-to-machine auction in a real-time bidding system.
Fake news or information disorder is false or misleading information claiming the aesthetics and legitimacy of news. Fake news often has the aim of damaging the reputation of a person or entity, or making money through advertising revenue. Although false news has always been spread throughout history, the term fake news was first used in the 1890s when sensational reports in newspapers were common. Nevertheless, the term does not have a fixed definition and has been applied broadly to any type of false information presented as news. It has also been used by high-profile people to apply to any news unfavorable to them. Further, disinformation involves spreading false information with harmful intent and is sometimes generated and propagated by hostile foreign actors, particularly during elections. In some definitions, fake news includes satirical articles misinterpreted as genuine, and articles that employ sensationalist or clickbait headlines that are not supported in the text. Because of this diversity of types of false news, researchers are beginning to favour information disorder as a more neutral and informative term.
Internet manipulation is the co-optation of online digital technologies, including algorithms, social bots, and automated scripts, for commercial, social, military, or political purposes. Internet and social media manipulation are the prime vehicles for spreading disinformation due to the importance of digital platforms for media consumption and everyday communication. When employed for political purposes, internet manipulation may be used to steer public opinion, polarise citizens, circulate conspiracy theories, and silence political dissidents. Internet manipulation can also be done for profit, for instance, to harm corporate or political adversaries and improve brand reputation. Internet manipulation is sometimes also used to describe the selective enforcement of Internet censorship or selective violations of net neutrality.
Social media use in politics refers to the use of online social media platforms in political processes and activities. Political processes and activities include all activities that pertain to the governance of a country or area, including political organization, global politics, political corruption, political parties, and political values. The media's primary duty is to present the public with information and alert them when events occur. This information may affect what people think and the actions they take. The media can also place pressure on the government to act by signaling a need for intervention or showing that citizens want change.
Fake news and similar false information is fostered and spread across India through word of mouth, traditional media and more recently through digital forms of communication such as edited videos, websites, blogs, memes, unverified advertisements and social media propagated rumours. Fake news spread through social media in the country has become a serious problem, with the potential of it resulting in mob violence, as was the case where at least 20 people were killed in 2018 as a result of misinformation circulated on social media.
The firehose of falsehood, also known as firehosing, is a propaganda technique in which a large number of messages are broadcast rapidly, repetitively, and continuously over multiple channels without regard for truth or consistency. An outgrowth of Soviet propaganda techniques, the firehose of falsehood is a contemporary model for Russian propaganda under Russian President Vladimir Putin.
The Grayzone is an American news website and blog characterized as fringe and far-left by numerous sources. It was founded and edited by American journalist Max Blumenthal. The website was initially founded as The Grayzone Project and was affiliated with AlterNet until early 2018.
Joan Donovan is an American social science researcher, sociologist, and academic noted for her research on disinformation. She is the founder of the nonprofit, The Critical Internet Studies Institute (CISI). Since 2023, she is an assistant professor at the College of Communication at Boston University.
Fake news in the Philippines refers to the general and widespread misinformation or disinformation in the country by various actors. It has been problematic in the Philippines where social media and alike plays a key role in influencing topics and information ranging from politics, health, belief, religion, current events, aid, lifestyle, elections and others. Recently, it has evolved to be a rampant issue against the COVID-19 pandemic in the Philippines and the 2022 Philippine general election.
Information laundering or disinformation laundering is the surfacing of news, false or otherwise, from unverified sources into the mainstream. By advancing disinformation to make it accepted as ostensibly legitimate information, information laundering resembles money laundering—the transforming of illicit funds into ostensibly legitimate funds.
Disinformation attacks are strategic deception campaigns involving media manipulation and internet manipulation, to disseminate misleading information, aiming to confuse, paralyze, and polarize an audience. Disinformation can be considered an attack when it occurs as an adversarial narrative campaign that weaponizes multiple rhetorical strategies and forms of knowing—including not only falsehoods but also truths, half-truths, and value-laden judgements—to exploit and amplify identity-driven controversies. Disinformation attacks use media manipulation to target broadcast media like state-sponsored TV channels and radios. Due to the increasing use of internet manipulation on social media, they can be considered a cyber threat. Digital tools such as bots, algorithms, and AI technology, along with human agents including influencers, spread and amplify disinformation to micro-target populations on online platforms like Instagram, Twitter, Google, Facebook, and YouTube.