Dead Internet theory

The dead Internet theory is an online conspiracy theory asserting that the Internet now consists mainly of bot activity and automatically generated content curated and manipulated by algorithms, minimizing organic human activity in order to manipulate the population. [1] [2] [3] [4] [5] Proponents of the theory believe these bots were created intentionally to game algorithms and boost search results, ultimately in order to manipulate consumers. [5] [6] Some proponents accuse government agencies of using bots to manipulate public perception. [2] [5] The "death" of the Internet is generally dated to around 2016 or 2017. [2] [7] [8]

The dead Internet theory has gained traction because many of the phenomena it draws on, such as increased bot traffic, are observable and quantifiable, but the literature does not support the full theory. [2] [4] [9] Caroline Busta, founder of the media platform New Models, was quoted in an article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", while acknowledging legitimate criticisms involving bot traffic and the integrity of the Internet and saying that she agrees with its "overarching idea". [2] In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta. [5]

Origins and development

The dead Internet theory's exact origin is difficult to pinpoint, but it most likely emerged from 4chan or Wizardchan as a theoretical concept in the late 2010s or early 2020s. [2] In 2021, a thread titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on the forum Agora Road's Macintosh Cafe, marking the term's spread beyond these initial imageboards, [2] [10] though discussion and debate of the theory may have circulated earlier in online forums, technology conferences, and academic circles. [2] The theory was inspired by concerns about the Internet's increasing complexity, its dependence on fragile infrastructure, its vulnerability to cyberattacks, and, most importantly, the exponential increase in the capabilities and use of artificial intelligence. [11]

The theory gained traction in discussions among technology enthusiasts, researchers, and futurists who sought to explore the risks of society's reliance on the Internet. The conspiracy theory has entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels. [2] It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago", [2] which has been widely cited by other articles on the topic. [12] [10]

Claims

The dead Internet theory has two main components: that bots have displaced human activity on the Internet, and that actors are employing these bots to manipulate the human population. [13] The first component, that bots create much of the content on the Internet and perhaps contribute more than organic human users do, has been a longstanding concern; the original post by "IlluminatiPirate" cited the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine. [2] [14] [13] The second component builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally employing these bots to manipulate the human population for a variety of reasons. [2] [13] In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory focusing on the United States government being responsible, stating: "The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population." [2] [5]

Evidence

Large language models

Generative pre-trained transformers (GPTs) are a class of large language models (LLMs) that employ artificial neural networks to produce human-like content. [15] [16] The first of these was developed by OpenAI. [17] These models have created significant controversy. For example, Timothy Shoup of the Copenhagen Institute for Futures Studies has said, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable", predicting that in such a scenario 99% to 99.9% of online content might be AI-generated by 2025 to 2030. [18] These predictions have been used as evidence for the dead internet theory. [12] In 2024, Google reported that its search results were being inundated with websites that "feel like they were created for search engines instead of people". [19] In correspondence with Gizmodo, a Google spokesperson acknowledged the role of generative AI in the rapid proliferation of such content and said that it could displace more valuable human-made alternatives. [20]
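Because the theory's claims hinge on how cheaply human-like text can now be produced, a minimal sketch may help. The example below uses the openly available GPT-2 model via the Hugging Face transformers library; the tool choice and prompt are my own illustration and are not prescribed by any of the cited sources.

```python
# Illustrative sketch only: generating human-sounding text with an openly
# available GPT-style model via the Hugging Face "transformers" library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open GPT model

outputs = generator(
    "I hate texting, I just want to",  # prompt echoing the bot tweets discussed below
    max_new_tokens=30,
    num_return_sequences=3,
    do_sample=True,                    # sampling is needed for several distinct outputs
)
for out in outputs:
    print(out["generated_text"])
```

Scripting such a loop over many prompts and posting the results automatically is, in essence, the mechanism that proponents of the theory point to.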

ChatGPT

ChatGPT is an AI chatbot whose 2022 release to the general public led journalists to call the dead internet theory potentially more realistic than before. [7] [21] Before ChatGPT's release, the dead internet theory had mostly emphasized government organizations, corporations, and tech-literate individuals; ChatGPT's availability put the power of generative AI in the hands of average internet users. [7] [21] This raised concerns that the Internet would become filled with AI-generated content that would drown out organic human content. [7] [21] [22]

2016 Imperva bot traffic report

In 2016, the security firm Imperva released a report on bot traffic and found that bots were responsible for 52% of web traffic, the first time they had surpassed human traffic. [23] This report has been used as evidence in reports on the dead Internet theory. [2]
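Imperva's methodology is proprietary, but the basic idea of measuring bot share can be illustrated with a deliberately naive sketch of my own: classify requests in a web server's access log by their self-identified User-Agent string. Real measurements also rely on behavioural and network signals, so this is an assumption-laden toy, not Imperva's method.

```python
# Naive, illustrative bot-share estimate from web server access logs.
# Assumption: "combined" log format, where the User-Agent is the last quoted field.
import re

BOT_PATTERN = re.compile(r"bot|crawl|spider|slurp|curl|python-requests", re.IGNORECASE)

def bot_share(log_lines):
    """Return the fraction of requests whose User-Agent looks automated."""
    total = flagged = 0
    for line in log_lines:
        fields = re.findall(r'"([^"]*)"', line)
        if not fields:
            continue
        total += 1
        if BOT_PATTERN.search(fields[-1]):
            flagged += 1
    return flagged / total if total else 0.0

sample = [
    '1.2.3.4 - - [01/Jan/2016:00:00:01 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows NT 10.0)"',
    '5.6.7.8 - - [01/Jan/2016:00:00:02 +0000] "GET /feed HTTP/1.1" 200 128 "-" "Googlebot/2.1"',
]
print(bot_share(sample))  # 0.5 -- half of this tiny sample self-identifies as a bot
```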

Reddit

An image posted on many subreddits in protest during the Reddit blackout.

In the past, the social media site Reddit allowed free access to its API and data, which enabled users to employ third-party moderation apps and to train AI on human interaction. [22] Controversially, Reddit later moved to charge for access to its user dataset, though companies training AI are likely to continue using this data for future models. As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts. [22] Professor Toby Walsh of the University of New South Wales said in an interview with Business Insider that training the next generation of AI on content created by previous generations could degrade the quality of that content. [22] University of South Florida professor John Licato compared this situation of AI-generated web content flooding Reddit to the dead Internet theory. [22]
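Walsh's concern about recursive training can be illustrated with a toy statistical experiment (my sketch, not drawn from the cited sources): repeatedly fit a simple model to samples produced by the previous generation's model, and watch the fitted distribution drift away from the original "human" data as estimation noise compounds.

```python
# Toy illustration of recursive training: each "generation" is fit only to
# samples produced by the previous generation. Over many generations the
# fitted Gaussian drifts and its spread tends to shrink, losing the diversity
# of the original data -- a simplified analogue of quality degradation.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0                       # stand-in for the original "human" data
for generation in range(1, 101):
    samples = [random.gauss(mu, sigma) for _ in range(20)]            # model output
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)   # retrain on it
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```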

Twitter

"I hate texting" tweets

Several Twitter accounts began posting tweets that opened with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand" or "i hate texting just come live with me". [2] These posts received tens of thousands of likes, and many suspected they were made by bot accounts. Proponents of the dead internet theory have cited these accounts as an example. [2] [10]

Elon Musk's acquisition of Twitter

The proportion of user accounts run by bots became a major issue during Elon Musk's acquisition of Twitter. [25] [26] [27] [28] During this process, Musk disputed Twitter's claim that fewer than 5% of its monetizable daily active users (mDAU) were bots. [25] [29] During the dispute, Musk commissioned the company Cyabra to estimate what percentage of Twitter accounts were bots; one study estimated 13.7% and a second estimated 11%. [25] These bot accounts are thought to be responsible for a disproportionate amount of the content generated. Believers in the dead internet theory have pointed to this incident as evidence. [30]

YouTube "The Inversion"

There is a market online for fake YouTube views to boost a video's credibility and reach broader audiences. [31] At one point, fake views were so prevalent that some engineers were concerned YouTube's algorithm for detecting them would begin to treat the fake views as default and start misclassifying real ones. [31] [2] YouTube engineers coined the term "the Inversion" to describe this phenomenon. [31] [14] YouTube bots and the fear of "the Inversion" were cited as support for the dead internet theory in a thread on the internet forum Agora Road's Macintosh Cafe. [2]
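The worry can be made concrete with a toy anomaly detector (my illustration; YouTube's real system is not public): a detector that learns what "typical" viewing looks like from incoming traffic flags bot-like sessions while humans dominate, but once fake views dominate, the learned baseline shifts and genuine views are the ones that look anomalous.

```python
# Toy sketch of the "Inversion": an anomaly detector calibrated on whatever
# traffic is most common. When bot sessions become the majority, real viewers
# start to be flagged instead of the bots.
import statistics

def learn_baseline(session_lengths):
    """Learn a 'typical session' profile (mean/stdev of watch time, seconds)."""
    return statistics.mean(session_lengths), statistics.stdev(session_lengths)

def flag_anomalies(session_lengths, mean, std, k=1.0):
    """Flag sessions far from the learned baseline (loose toy threshold)."""
    return [s for s in session_lengths if abs(s - mean) > k * std]

humans = [180, 240, 95, 300, 210, 150, 275, 60]            # varied, longer sessions
bots = [5, 6, 5, 7, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5, 6, 5]

mean, std = learn_baseline(humans + bots[:2])   # mostly-human traffic
print(flag_anomalies(bots[:2], mean, std))      # -> [5, 6]: bot views look anomalous

mean, std = learn_baseline(bots + humans[:2])   # mostly-bot traffic
print(flag_anomalies(humans[:2], mean, std))    # -> [180, 240]: real views look anomalous
```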

Discussion on Twitter

The dead internet theory has been discussed among users of the social media platform X (formerly Twitter). Users have noted that bot activity has affected their experience. [2]

Coverage on YouTube

Numerous YouTube channels and online communities, including the Linus Tech Tips forums, have covered the dead Internet theory, which has helped to advance the idea into mainstream discourse. [2]


References

  1. Walter, Y. (February 5, 2024). "Artificial influencers and the dead internet theory". AI & Society. doi:10.1007/s00146-023-01857-0. Archived from the original on February 8, 2024. Retrieved February 8, 2024.
  2. Tiffany, Kaitlyn (August 31, 2021). "Maybe You Missed It, but the Internet 'Died' Five Years Ago". The Atlantic. Archived from the original on March 6, 2023. Retrieved March 6, 2023.
  3. Dao, Bridgit (2023). The Metaweb: The Next Level of the Internet. CRC Press. ISBN 9781000960495. Archived from the original on March 19, 2024. Retrieved March 1, 2024.
  4. Vladisavljević, Radovan; Stojković, Predrag; Marković, Svetlana; Krstić, Tamara (2023). "New challenges of formulating a company's marketing strategy based on social network analysis". In Premović, Jelena (ed.). Challenges of modern economy and society through the prism of green economy and sustainable development. Educational and business center for development of human resources, management, and sustainable development. pp. 374–380. ISBN 978-86-81506-23-3.
  5. Mariani, Robert (2023). "The Dead Internet to Come". The New Atlantis. 73: 34–42. Archived from the original on January 23, 2024. Retrieved January 23, 2024.
  6. Gonzales III, Vic (June 28, 2023). "The Internet is Dead: The Truth Behind the Dead Internet Theory". Capiz News. Archived from the original on July 4, 2023. Retrieved July 4, 2023.
  7. Hennessy, James (December 18, 2022). "Did A.I. just become a better storyteller than you?". The Story. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  8. "Une théorie du complot affirme qu'internet est « mort » depuis 2016" [A conspiracy theory claims the internet has been "dead" since 2016]. Ouest France (in French). September 6, 2021. Archived from the original on March 6, 2023. Retrieved March 6, 2023.
  9. Codreanu, Claudiu (2023). Policy Paper Nr. 35/2023: Departe de utopii și distopii. Impactul AI asupra securității cibernetice [Far from utopias and dystopias: The impact of AI on cybersecurity] (PDF) (in Romanian). Institutul Diplomatic Român. Retrieved April 7, 2024.
  10. Gopani, Avi (September 6, 2021). "Conspiracy Theorists Says The Internet Has Been Dead Since 2016". Analytics India Magazine. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  11. Dow, Warren (January 9, 2023). "The Dead Internet Theory". Digs. Archived from the original on May 18, 2023. Retrieved May 18, 2023.
  12. Naraharisetty, Rohitha (October 31, 2022). "What the 'Dead Internet Theory' Predicted About the Future of Digital Life". The Swaddle. Archived from the original on March 6, 2023. Retrieved March 6, 2023.
  13. Felton, James (February 1, 2024). "Dead Internet Theory: According To Conspiracy Theorists, The Internet Died In 2016". IFLScience. Retrieved April 7, 2024.
  14. Read, Max (December 26, 2018). "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually". New York: Intelligencer. Archived from the original on June 19, 2023. Retrieved June 19, 2023.
  15. "Generative AI: a game-changer society needs to be ready for". World Economic Forum. January 9, 2023. Archived from the original on April 25, 2023. Retrieved June 16, 2023.
  16. "The A to Z of Artificial Intelligence". Time. April 13, 2023. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  17. "Improving language understanding with unsupervised learning". openai.com. Archived from the original on March 18, 2023. Retrieved March 18, 2023.
  18. Hvitved, Sofie (February 24, 2022). "What if 98% of the Metaverse is made by AI?". Copenhagen Institute for Futures Studies. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  19. Tucker, Elizabeth (March 5, 2024). "New ways we're tackling spammy, low-quality content on Search". Archived from the original on March 9, 2024. Retrieved March 9, 2024.
  20. Serrano, Jody (March 5, 2024). "Google Says It's Purging All the AI Trash Littering Its Search Results". Gizmodo. Archived from the original on March 9, 2024. Retrieved March 9, 2024.
  21. Beres, Damon (January 27, 2023). "Death by a Thousand Personality Quizzes". The Atlantic. Archived from the original on June 21, 2023. Retrieved June 20, 2023.
  22. Agarwal, Shubham (August 8, 2023). "AI is ruining the internet". Business Insider. Archived from the original on September 28, 2023. Retrieved September 30, 2023.
  23. LaFrance, Adrienne (January 31, 2017). "The Internet Is Mostly Bots". The Atlantic. Archived from the original on June 17, 2023. Retrieved June 17, 2023.
  24. Grantham-Philips, Wyatte (June 16, 2023). "The Reddit blackout, explained: Why thousands of subreddits are protesting third-party app charges". Associated Press. Archived from the original on June 21, 2023. Retrieved June 21, 2023.
  25. Duffy, Clare; Fung, Brian (October 10, 2022). "Elon Musk commissioned this bot analysis in his fight with Twitter. Now it shows what he could face if he takes over the platform". CNN Business. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  26. O'Brien, Matt (October 31, 2022). "Musk now gets chance to defeat Twitter's many fake accounts". AP News. Archived from the original on May 5, 2023. Retrieved June 16, 2023.
  27. "As Twitter's new owner, Musk gets his chance to defeat bots". CBS News. October 31, 2022. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  28. Syme, Pete (June 13, 2023). "Elon Musk's war against Twitter bots isn't going very well. Next, you'll have to pay to DM those who don't follow you". Business Insider. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  29. Picchi, Aimee (May 17, 2022). "What are Twitter bots, and why is Elon Musk obsessed with them?". CBS News. Archived from the original on June 16, 2023. Retrieved June 16, 2023.
  30. Hughes, Neil C. (August 26, 2023). "Echoes of the dead internet theory: AI's silent takeover". Cybernews. Archived from the original on November 10, 2023. Retrieved November 10, 2023.
  31. Keller, Michael H. (August 11, 2018). "The Flourishing Business of Fake YouTube Views". The New York Times. Archived from the original on June 19, 2023. Retrieved June 19, 2023.