The dead Internet theory is an online conspiracy theory that asserts that the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation, minimizing organic human activity in order to manipulate the population. [1] [2] [3] [4] [5] Proponents of the theory believe these bots were created intentionally to help manipulate algorithms and boost search results, ultimately to manipulate consumers. [5] [6] Some proponents accuse government agencies of using bots to manipulate public perception. [2] [5] The date given for this "death" is generally around 2016 or 2017. [2] [7] [8]
The dead Internet theory has gained traction because many of the observed phenomena, such as increased bot traffic, are quantifiable, but the literature does not support the full theory. [2] [4] [9] Caroline Busta, founder of the media platform New Models, was quoted in an article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", even if there are legitimate criticisms involving bot traffic and the integrity of the Internet, though she said she does agree with its "overarching idea". [2] In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta. [5] The dead Internet theory is sometimes used to refer to the observable increase in content generated by large language models (LLMs) such as ChatGPT appearing in popular Internet spaces, without reference to the full theory. [1] [10]
The dead Internet theory's exact origin is difficult to pinpoint, but it most likely emerged from 4chan or Wizardchan as a theoretical concept in the late 2010s or early 2020s. [2] In 2021, a thread titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on the forum Agora Road's Macintosh Cafe, marking the term's spread beyond these initial imageboards. [2] [11] Discussion and debate of the theory may have been prevalent in online forums, technology conferences, and academic circles even earlier. [2] It was inspired by concerns about the Internet's increasing complexity, dependence on fragile infrastructure, potential cyberattack vulnerabilities and, most importantly, the exponential increase in artificial intelligence capabilities and use. [12]
The theory gained traction in discussions among technology enthusiasts, researchers, and futurists who sought to explore the risks associated with society's reliance on the Internet. The conspiracy theory has entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels. [2] It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago". [2] This article has been widely cited by other articles on the topic. [11] [13]
In recent years, the term has been used to describe the phenomenon of bot-generated content displacing human-generated content, without discussion of the full theory. [1] [10]
The dead Internet theory has two main components: that bots have displaced human activity on the Internet, and that actors are employing these bots to manipulate the human population. [14] The first component, that bots create much of the content on the Internet and perhaps contribute more than organic human content, has been a concern for some time; the original post by "IlluminatiPirate" cited the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine. [2] [15] [14] The second component builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally employing these bots to manipulate the human population for a variety of reasons. [2] [14] In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory focusing on the United States government being responsible for this, stating: "The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population." [2] [5]
Generative pre-trained transformers (GPTs) are a class of large language models (LLMs) that employ artificial neural networks to produce human-like content. [16] [17] The first of these was developed by OpenAI. [18] These models have generated significant controversy. For example, Timothy Shoup of the Copenhagen Institute for Futures Studies has said, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable". [19] He predicted that in such a scenario, 99% to 99.9% of online content might be AI-generated by 2025 to 2030. [19] These predictions have been used as evidence for the dead Internet theory. [13] In 2024, Google reported that its search results were being inundated with websites that "feel like they were created for search engines instead of people." [20] In correspondence with Gizmodo, a Google spokesperson acknowledged the role of generative AI in the rapid proliferation of such content, and said that it could displace more valuable human-made alternatives. [21]
ChatGPT is an AI chatbot whose 2022 release to the general public led journalists to call the dead Internet theory potentially more realistic than before. [7] [22] Before ChatGPT's release, the dead Internet theory mostly emphasized government organizations, corporations, and tech-literate individuals; ChatGPT's release put the power of generative AI in the hands of average Internet users. [7] [22] This raised concern that the Internet would become filled with AI-created content that would drown out organic human content. [7] [22] [23]
In 2016, the security firm Imperva released a report on bot traffic, finding that bots were responsible for 52% of web traffic. [24] This report has been cited in coverage of the dead Internet theory. [2]
In 2024, AI-generated images on Facebook began going viral. Subjects of these AI-generated images included various iterations of Jesus covered in shrimp or posing with flight attendants, and images of black children next to artwork they had supposedly created. [25] This has been used as an example of why the Internet feels "dead." [26]
In the past, the social media site Reddit allowed free access to its API and data, which allowed users to employ third-party moderation apps and to train AI in human interaction. [23] Controversially, Reddit moved to charge for access to its user dataset, and companies training AI are expected to continue using this data to train future models. As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts. [23] Professor Toby Walsh of the University of New South Wales said in an interview with Business Insider that training the next generation of AI on content created by previous generations could cause the content to suffer. [23] University of South Florida professor John Licato compared this situation of AI-generated web content flooding Reddit to the dead Internet theory. [23]
Several Twitter accounts began posting tweets starting with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand" or "i hate texting just come live with me". [2] These posts received tens of thousands of likes, and many suspected they were bot accounts. Proponents of the dead Internet theory have cited these accounts as an example. [2] [11]
The proportion of user accounts run by bots became a major issue during Elon Musk's acquisition of Twitter. [28] [29] [30] [31] During this process, Musk disputed Twitter's claim that fewer than 5% of its monetizable daily active users (mDAU) were bots. [28] [32] During the dispute, Musk commissioned the company Cyabra to estimate what percentage of Twitter accounts were bots; one study estimated 13.7% and a second estimated 11%. [28] These bot accounts are thought to be responsible for a disproportionate amount of the content generated. Believers in the dead Internet theory have pointed to this incident as evidence. [33]
There is a market online for fake YouTube views to boost a video's credibility and reach broader audiences. [34] At one point, fake views were so prevalent that some engineers were concerned YouTube's algorithm for detecting them would begin to treat the fake views as default and start misclassifying real ones. [34] [2] YouTube engineers coined the term "the Inversion" to describe this phenomenon. [34] [15] YouTube bots and the fear of "the Inversion" were cited as support for the dead internet theory in a thread on the internet forum Agora Road's Macintosh Cafe. [2]
The dead internet theory has been discussed among users of the social media platform X (formerly Twitter). Users have noted that bot activity has affected their experience. [2]
Numerous YouTube channels and online communities, including the Linus Tech Tips forums, have covered the dead Internet theory, which has helped to advance the idea into mainstream discourse. [2]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software enabling machines to perceive their environment and to use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
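To illustrate the contrast with generative systems, the following sketch shows the kind of simple, rule-based chatbot that has existed for decades: it matches keywords against canned replies rather than generating language. This is a toy example written for this article, not code from any cited source, and the keywords and responses are invented.

    # A minimal rule-based chatbot: keyword matching against canned responses.
    # Illustrative only; all rules and replies are invented for this example.
    RULES = {
        "hello": "Hello! How can I help you?",
        "bot":   "Some accounts you talk to online may indeed be automated.",
        "bye":   "Goodbye!",
    }

    def reply(message):
        """Return the first canned response whose keyword appears in the message."""
        lowered = message.lower()
        for keyword, response in RULES.items():
            if keyword in lowered:
                return response
        return "I'm not sure I understand."

    if __name__ == "__main__":
        while True:
            user = input("> ")
            print(reply(user))
            if "bye" in user.lower():
                break

Unlike a generative chatbot, such a program can only ever produce the replies written into it by hand, which is why earlier chatbots were far less able to pass as human conversational partners.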
An Internet bot, web robot, robot or simply bot, is a software application that runs automated tasks (scripts) on the Internet, usually with the intent to imitate human activity, such as messaging, on a large scale. An Internet bot plays the client role in a client–server model whereas the server role is usually played by web servers. Internet bots are able to perform simple and repetitive tasks much faster than a person could ever do. The most extensive use of bots is for web crawling, in which an automated script fetches, analyzes and files information from web servers. More than half of all web traffic is generated by bots.
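As an illustration of the crawling task described above, a minimal crawler-style bot can be sketched in a few lines of Python using only the standard library. This is a simplified example written for this article (real crawlers also honor robots.txt, throttle requests, and track visited pages), and the target URL example.com is a placeholder.

    # A minimal sketch of a web-crawling bot: fetch a page, extract its links.
    # Uses only the Python standard library; illustrative, not a production crawler.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects the href attribute of every anchor tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def fetch_links(url):
        """Fetch a page and return the links it contains -- the step a crawling
        bot repeats for every page it analyzes and files away."""
        with urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        parser = LinkCollector()
        parser.feed(html)
        return parser.links

    if __name__ == "__main__":
        for link in fetch_links("https://example.com"):
            print(link)

Because this loop is trivial to automate and repeat, a single machine running such a script can generate far more requests than any human visitor, which is how bots come to account for a large share of web traffic.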
Samuel Harris Altman is an American entrepreneur and investor best known as the CEO of OpenAI since 2019. He has also been CEO of AltC Acquisition Corp since 2021. Altman is considered one of the leading figures of the AI boom. He dropped out of Stanford University after two years and founded Loopt, a mobile social networking service, raising more than $30 million in venture capital. In 2011, Altman joined Y Combinator, a startup accelerator, and was its president from 2014 to 2019.
/pol/, short for Politically Incorrect, is an anonymous political discussion imageboard on 4chan. As of 2022, it is the most active board on the site. It has had a substantial impact on Internet culture. It has acted as a platform for far-right extremism; the board is notable for its widespread racist, white supremacist, antisemitic, anti-Muslim, misogynist, and anti-LGBT content. /pol/ has been linked to various acts of real-world extremist violence. It has been described as one of the "[centers] of 4chan mobilization", a title also ascribed to /b/.
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015, researching artificial intelligence with the goal of developing "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As one of the leading organizations of the AI boom, it has developed several large language models, advanced image generation models, and, previously, open-source models. Its release of ChatGPT has been credited with starting the AI boom.
A social bot, also described as a social AI or social algorithm, is a software agent that communicates autonomously on social media. The messages it distributes can be simple, and social bots can operate in groups and various configurations with partial human control (hybrid) via algorithms. Social bots can also use artificial intelligence and machine learning to express messages in more natural human dialogue.
Deepfakes are synthetic media that have been digitally manipulated to replace one person's likeness convincingly with that of another. The term can also refer to computer-generated images of human subjects that do not exist in real life. While the act of creating fake content is not new, deepfakes leverage tools and techniques from machine learning and artificial intelligence, including facial recognition algorithms and artificial neural networks such as variational autoencoders (VAEs) and generative adversarial networks (GANs). In turn, the field of image forensics develops techniques to detect manipulated images.
Artificial intelligence is used in Wikipedia and other Wikimedia projects for the purpose of developing those projects. Human and bot interaction in Wikimedia projects is routine and iterative.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media, but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos inserting the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing pornographic material by applying deepfake technology to the faces of the actors. The use of deepfake porn has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.
Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.
Generative Pre-trained Transformer 2 (GPT-2) is a large language model by OpenAI and the second in their foundational series of GPT models. GPT-2 was pre-trained on a dataset of 8 million web pages. It was partially released in February 2019, followed by full release of the 1.5-billion-parameter model on November 5, 2019.
DALL·E, DALL·E 2, and DALL·E 3 are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions, called "prompts."
Midjourney is a generative artificial intelligence program and service created and hosted by the San Francisco–based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called prompts, similar to OpenAI's DALL-E and Stability AI's Stable Diffusion. It is one of the technologies of the AI boom.
ChatGPT is a chatbot developed by OpenAI and launched on November 30, 2022. Based on large language models (LLMs), it enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive user prompts and replies are considered at each conversation stage as context.
In the field of artificial intelligence (AI), a hallucination or artificial hallucination is a response generated by AI which contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there’s a key difference: AI hallucination is associated with unjustified responses or beliefs rather than perceptual experiences.
Generative artificial intelligence is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.
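The learn-patterns-then-generate idea can be shown with a deliberately simple toy model: a character-level Markov chain that records which character tends to follow each short context in its training text and then samples new text with similar local statistics. This sketch is written for this article, is not representative of the neural networks used by modern generative AI, and its training string is made up.

    # A toy generative model: a character-level Markov chain.
    # It learns local character patterns from training text and samples
    # new text with similar statistics. Illustrative only.
    import random
    from collections import defaultdict

    def train(text, order=2):
        """Map each context of `order` characters to the characters that follow it."""
        model = defaultdict(list)
        for i in range(len(text) - order):
            context = text[i:i + order]
            model[context].append(text[i + order])
        return model

    def generate(model, length=60, order=2):
        """Sample new text by repeatedly choosing a plausible next character."""
        context = random.choice(list(model.keys()))
        out = context
        for _ in range(length):
            choices = model.get(out[-order:])
            if not choices:
                break
            out += random.choice(choices)
        return out

    if __name__ == "__main__":
        corpus = "the internet is full of bots and the bots are full of the internet "
        print(generate(train(corpus)))

Modern generative AI replaces this handful of counted contexts with neural networks trained on vast datasets, but the underlying principle is the same: model the patterns of the training data, then emit new data that statistically resembles it.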
Grok is a generative artificial intelligence chatbot developed by xAI, based on a large language model (LLM). It was developed as an initiative by Elon Musk in direct response to the rise of ChatGPT from OpenAI, which Musk co-founded. The chatbot is advertised as "having a sense of humor" and as having direct access to Twitter (X). It is currently in beta testing for those with the premium version of X.