The dead Internet theory is an online conspiracy theory asserting that, due to a coordinated and intentional effort, the Internet now consists mainly of bot activity and automatically generated content, manipulated by algorithmic curation to control the population and minimize organic human activity. [1] [2] [3] [4] [5] Proponents believe these social bots were created intentionally to game algorithms and boost search results, ultimately to manipulate consumers. [6] [7] Some proponents accuse government agencies of using bots to manipulate public perception. [2] [6] The date given for this "death" is generally around 2016 or 2017. [2] [8] [9] The theory has gained traction because some of the phenomena it points to are quantifiable, such as increased bot traffic, but the published literature on the subject does not support the full theory. [2] [4] [10]
The exact origin of the dead Internet theory is difficult to pinpoint. In 2021, a post titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on Agora Road's Macintosh Cafe, an esoteric forum board, by a user named "IlluminatiPirate", [11] who claimed to be building on previous posts from the same board and from Wizardchan; [2] this marked the term's spread beyond those initial imageboards. [2] [12] The conspiracy theory has entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels. [2] It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago", [2] which has been widely cited by other articles on the topic. [13] [12]
The dead Internet theory has two main components: that organic human activity on the web has been displaced by bots and algorithmically curated search results, and that state actors are doing this in a coordinated effort to manipulate the human population. [3] [14] [15] The first component, that bots create much of the content on the Internet and perhaps contribute more than organic human content, has been a longstanding concern, with the original post by "IlluminatiPirate" citing the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine. [2] [16] [14] The theory further holds that Google and other search engines censor the Web by limiting what is indexed and presented in search results, filtering out content deemed undesirable. [3] While Google may suggest that there are millions of search results for a query, the results actually available to a user do not reflect that number. [3] The problem is exacerbated by link rot, which occurs when content at a website becomes unavailable and all links to it on other sites break. [3] This has led to the suggestion that Google is a Potemkin village and that the searchable Web is much smaller than we are led to believe. [3] The dead Internet theory holds that this is part of a conspiracy to limit users to curated, and potentially artificial, content online.
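Link rot itself is straightforward to observe empirically. The following is a minimal sketch of a link checker, assuming only a hand-picked list of URLs to test; a real survey would need retries, rate limiting, and robots.txt handling.

```python
# Minimal link-rot checker using only the Python standard library.
import urllib.request
import urllib.error

def is_dead(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL no longer resolves to reachable content."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status >= 400
    except (urllib.error.URLError, OSError):
        # Covers HTTP 4xx/5xx errors, DNS failures, refused connections, timeouts.
        return True

# Hypothetical sample; any list of previously cited URLs would do.
for url in ["https://example.com/", "https://example.com/gone"]:
    print(url, "->", "dead" if is_dead(url) else "alive")
```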
The second half of the dead Internet theory builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally limiting users to curated, and potentially artificial, AI-generated content in order to manipulate the human population for a variety of reasons. [2] [14] [15] [3] In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory holding the United States government responsible, stating: "The U.S. government is engaging in an artificial intelligence-powered gaslighting of the entire world population." [2] [6]
Caroline Busta, founder of the media platform New Models, was quoted in an article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", even if there are legitimate criticisms involving bot traffic and the integrity of the Internet, though she said she agrees with the theory's "overarching idea". [2] In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta. [6] The dead Internet theory is sometimes invoked to describe the observable increase in content generated via large language models (LLMs) such as ChatGPT appearing in popular Internet spaces, without reference to the full theory. [1] [17]
Generative pre-trained transformers (GPTs) are a class of large language models (LLMs) that employ artificial neural networks to produce human-like content. [18] [19] The first of these to be well known was developed by OpenAI. [20] These models have created significant controversy. For example, Timothy Shoup of the Copenhagen Institute for Futures Studies said in 2022, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable". [21] He predicted that in such a scenario, 99% to 99.9% of content online might be AI-generated by 2025 to 2030. [21] These predictions have been used as evidence for the dead internet theory. [13]
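To illustrate the underlying mechanism, the sketch below generates a short machine-written continuation with the openly available GPT-2 model via the Hugging Face transformers library; the prompt and sampling settings are arbitrary examples, not drawn from the sources above.

```python
# Sketch: generating human-like text with GPT-2 (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The internet today is",
    max_new_tokens=40,  # length of the generated continuation
    do_sample=True,     # sample tokens instead of greedy decoding
    temperature=0.9,    # higher temperature -> more varied output
)
print(result[0]["generated_text"])
```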
In 2024, Google reported that its search results were being inundated with websites that "feel like they were created for search engines instead of people". [22] In correspondence with Gizmodo, a Google spokesperson acknowledged the role of generative AI in the rapid proliferation of such content and said that it could displace more valuable human-made alternatives. [23] Bots using LLMs are anticipated to increase the volume of spam and run the risk of creating a situation in which bots interacting with each other produce "self-replicating prompts", resulting in loops that only human users could disrupt. [5]
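The "self-replicating prompt" scenario can be pictured as a feedback loop in which each bot's output becomes the next bot's input. The toy simulation below makes that structure explicit; respond() is a hypothetical stand-in for a real LLM call.

```python
# Toy model of two bots feeding on each other's output with no human input.
def respond(prompt: str) -> str:
    # Hypothetical placeholder for an LLM API call.
    return f"Interesting point about: {prompt}"

message = "a human-written post"
for turn in range(4):
    bot = "Bot A" if turn % 2 == 0 else "Bot B"
    message = respond(message)
    print(f"{bot}: {message}")
# Each iteration nests the previous reply; nothing in the loop
# requires, or admits, a human participant.
```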
ChatGPT is an AI chatbot whose late-2022 release to the general public led journalists to call the dead Internet theory potentially more realistic than before. [8] [24] Before ChatGPT's release, the theory had mostly focused on government organizations, corporations, and tech-literate individuals as the actors capable of producing such content; ChatGPT put large language models in the hands of the average Internet user. [8] [24] This technology caused concern that the Internet would become filled with AI-created content that would drown out organic human content. [8] [24] [25] [5]
In 2016, the security firm Imperva released a report on bot traffic and found that bots were responsible for 52% of web traffic. [26] [27] This report has been used as evidence in reports on the dead Internet theory. [2]
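Reports like Imperva's classify requests as human or automated using many signals; a crude, purely illustrative version of the idea is a user-agent heuristic over server logs, sketched below (the marker strings and log sample are assumptions, not Imperva's methodology).

```python
# Crude bot-share estimate from user-agent strings; illustrative only.
BOT_MARKERS = ("bot", "crawler", "spider", "curl", "python-requests")

def looks_like_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

# Hypothetical log sample.
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "python-requests/2.31.0",
]
bot_share = sum(looks_like_bot(ua) for ua in user_agents) / len(user_agents)
print(f"Estimated bot share: {bot_share:.0%}")  # -> 67% for this sample
```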
In 2024, AI-generated images on Facebook, referred to as AI "slop", began going viral. [28] [29] Subjects of these images included various iterations of Jesus "meshed in various forms" with shrimp, flight attendants, and Black children posed next to artwork they had supposedly created. Many of these posts drew hundreds or even thousands of AI-generated comments saying "Amen". [30] [31] These images have been cited as an example of why the Internet feels "dead". [32]
Facebook includes an option to provide AI-generated responses to group posts. Such responses appear if a user explicitly tags @MetaAI in a post, or if the post includes a question and no other users have responded to it within an hour. [33]
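As described, the triggering rule reduces to a simple condition. The sketch below restates it in code; this is an assumed reading of the reported behavior, not Meta's actual implementation.

```python
# Assumed reading of the reported trigger rule, not Meta's implementation.
from datetime import datetime, timedelta

def should_ai_reply(post_text: str, posted_at: datetime,
                    reply_count: int, now: datetime) -> bool:
    explicitly_tagged = "@MetaAI" in post_text
    unanswered_question = (
        "?" in post_text                        # post includes a question
        and reply_count == 0                    # no other user has responded
        and now - posted_at >= timedelta(hours=1)
    )
    return explicitly_tagged or unanswered_question
```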
In the past, Reddit allowed free access to its API and data, which let users employ third-party moderation apps and train AI on human interaction. [25] Controversially, Reddit then moved to charge for access to its user dataset, which companies building AI are likely to keep using as training data for future models. As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts. [25] Professor Toby Walsh of the University of New South Wales told Business Insider that training the next generation of AI on content created by previous generations could cause its quality to suffer. [25] University of South Florida professor John Licato compared this flooding of Reddit with AI-generated web content to the dead Internet theory. [25]
Since 2020, several Twitter accounts have posted tweets opening with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand" or "i hate texting just come live with me". [2] These posts received tens of thousands of likes, many suspected to come from bot accounts. Proponents of the dead Internet theory have cited these accounts as an example. [2] [12]
The proportion of Twitter accounts run by bots became a major issue during Elon Musk's acquisition of the company. [35] [36] [37] [38] Musk disputed Twitter's claim that fewer than 5% of its monetizable daily active users (mDAU) were bots. [35] [39] He commissioned the firm Cyabra to estimate what percentage of Twitter accounts were bots; one study estimated 13.7% and another 11%. [35] CounterAction, another firm commissioned by Musk, estimated that 5.3% of accounts were bots. [40] Some bot accounts provide services, such as one noted bot that reports stock prices on request, while others troll, spread misinformation, or try to scam users. [39] Believers in the dead Internet theory have pointed to this incident as evidence. [41]
In 2024, TikTok began discussing offering virtual influencers to advertising agencies. [15] In a 2024 article in Fast Company, journalist Michael Grothaus linked this and other AI-generated content on social media to the dead Internet theory. [15] In the article, he referred to such content as "AI-slime". [15]
On YouTube, there is a market for fake views to boost a video's credibility and reach broader audiences. [42] At one point, fake views were so prevalent that some engineers worried YouTube's detection algorithm would begin to treat fake views as the default and start misclassifying real ones, [42] [2] a scenario YouTube engineers termed "the Inversion". [42] [16] YouTube bots and the fear of "the Inversion" were cited as support for the dead Internet theory in a thread on the Internet forum Melonland. [2]
SocialAI, an app released on September 18, 2024, was designed expressly for chatting with AI bots, with no human interaction. [43] Its creator, Michael Sayman, is a former product lead at Google who also worked at Facebook, Roblox, and Twitter. [43] An article on the Ars Technica website linked SocialAI to the dead Internet theory. [43] [44]
The dead Internet theory has been discussed among users of the social media platform Twitter, where users have noted that bot activity has affected their experience. [2] Numerous YouTube channels and online communities, including the Linus Tech Tips forums and the Joe Rogan subreddit, have covered the theory, helping to carry it into mainstream discourse. [2] There have also been discussions and memes about the topic on TikTok as AI-generated content has become more mainstream.[attribution needed]
See also: Chatbot, Sam Altman, Future of Life Institute, /pol/, OpenAI, Social bot, Deepfake, Artificial intelligence in Wikimedia projects, Timeline of computing (2020–present), Synthetic media, Deepfake pornography, GPT-3, GPT-2, DALL-E, Midjourney, Character.ai, ChatGPT, GPT-4, Enshittification, Grok.