"AI slop", often simply "slop", is a term for low-quality media, including writing and images, made using generative artificial intelligence technology. [4] [5] [1] Coined in the 2020s, the term has a derogatory connotation akin to "spam". [4]
It has been variously defined as "digital clutter", [6] "filler content produced by AI tools that prioritize speed and quantity over substance and quality", [6] and "shoddy or unwanted AI content in social media, art, books and, increasingly, in search results". [7]
Jonathan Gilmore, Professor of Philosophy at the City University of New York, describes the "incredibly banal, realistic style" of AI slop as being "very easy to process". [8]
As large language models (LLMs) and image diffusion models accelerated the creation of high-volume but low-quality written content and images, discussion arose over the appropriate term for this material. Terms proposed included "AI garbage", "AI pollution", and "AI-generated dross". [5] Early uses of the term "slop" as a descriptor for low-grade AI material apparently came in reaction to the release of AI image generators in 2022. [7] Its early use has been noted among 4chan, Hacker News and YouTube commentators as a form of in-group slang. [7]
The British computer programmer Simon Willison is credited with being an early champion of the term "slop" in the mainstream, [1] [7] having promoted it from May 2024 on his personal blog. [9] However, he has said the term was in use long before he began advocating for it. [7]
The term gained increased popularity in the second quarter of 2024, in part because of Google's use of its Gemini AI model to generate responses to search queries, [7] and was widely used in media headlines by the fourth quarter of 2024. [1] [4]
Research found that training LLMs on slop causes model collapse: a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations. The effect is especially pronounced for tasks demanding high levels of creativity. [10]
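One common proxy for the lexical diversity mentioned above is a distinct-n ratio: the fraction of unique n-grams across a set of model outputs. The sketch below is illustrative only and is not taken from the cited study; the sample texts and the specific metric choice are assumptions for demonstration.

```python
def distinct_n(texts, n=1):
    """Fraction of unique n-grams across a list of generated texts.

    A value near 1.0 means almost every n-gram is unique (high lexical
    diversity); lower values indicate repeated phrasing.
    """
    ngrams = []
    for text in texts:
        tokens = text.lower().split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Hypothetical outputs from an early and a later training generation:
# under model collapse, later generations repeat the same phrasings,
# so the distinct-n ratio drops.
generation_0 = ["the fox leaps over mossy stones",
                "a heron waits by the silver river"]
generation_3 = ["the quick brown fox jumps",
                "the quick brown fox jumps again"]

print(distinct_n(generation_0, n=2))  # higher: varied phrasing
print(distinct_n(generation_3, n=2))  # lower: repeated bigrams
```

Studies of model collapse typically track metrics like this (alongside syntactic and semantic measures) across successive generations of models trained on their predecessors' outputs.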
AI image and video slop proliferated on social media in part because it generated revenue for its creators on Facebook and TikTok. This incentivized individuals in developing countries to create images appealing to audiences in the United States, which attract higher advertising rates. [11] [12] [13]
The journalist Jason Koebler speculated that the bizarre nature of some of the content may be due to the creators using Hindi, Urdu, and Vietnamese prompts (languages which are underrepresented in the model's training data), or using erratic speech-to-text methods to translate their intentions into English. [11]
Speaking to New York magazine, a Kenyan creator of slop images described giving ChatGPT prompts such as "WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK", and then feeding those created prompts into a text-to-image AI service such as Midjourney. [4]
In August 2024, The Atlantic noted that AI slop was becoming associated with the political right in the United States, who were using it for shitposting and engagement farming on social media, the technology offering "cheap, fast, on-demand fodder for content". [14]
In the aftermath of Hurricane Helene in the United States, members of the Republican Party circulated an AI-generated image of a young girl holding a puppy in a flood, and used it as evidence of the failure of President Joe Biden to respond to the disaster. [15] [3] Some, like Amy Kremer, shared the image on social media even while acknowledging that it was not genuine. [16] [17]
Fantastical promotional graphics for the 2024 Willy's Chocolate Experience event, characterized as "AI-generated slop", [18] misled audiences into attending an event that was held in a cheaply decorated warehouse. Tickets were marketed through Facebook advertisements showing AI-generated imagery, with no genuine photographs of the venue. [19]
In October 2024, thousands of people were reported to have assembled for a non-existent Halloween parade in Dublin as a result of a listing on MySpiritHalloween.com, an events-listing aggregation website that used AI-generated content. [20] [21] The listing went viral on TikTok and Instagram. [22] While a similar parade had been held in Galway, and Dublin had hosted parades in prior years, there was no parade in Dublin in 2024. [21] One analyst characterized the website, which appeared to use AI-generated staff pictures, as likely using artificial intelligence "to create content quickly and cheaply where opportunities are found". [23] The site's owner said that "We asked ChatGPT to write the article for us, but it wasn't ChatGPT by itself." In the past the site had removed non-existent events when contacted by their venues, but in the case of the Dublin parade the site owner said that "no one reported that this one wasn't going to happen". MySpiritHalloween.com updated its page to say that the parade had been "canceled" when it became aware of the issue. [24]
In November 2024, Coca-Cola used artificial intelligence to create three commercials as part of its annual holiday campaign. These videos were immediately met with negative reception from both casual viewers and artists, [25] with animator Alex Hirsch, creator of Gravity Falls, criticizing the company's decision not to employ human artists to create the commercials. [26] In response to the negative feedback, the company defended its decision to use generative artificial intelligence, stating that "Coca-Cola will always remain dedicated to creating the highest level of work at the intersection of human creativity and technology". [27]
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
A chatbot is a software application or web interface designed to have textual or spoken conversations. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
A content farm or content mill is a company that employs freelance creators or uses artificial intelligence (AI) tools to generate a large amount of web content specifically designed to satisfy algorithms for maximal retrieval by search engines, a practice known as search engine optimization (SEO). The primary goal is to attract page views and generate advertising revenue. Their emergence is often tied to the demand for "true market demand" content based on search engine queries.
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
Artificial intelligence art is visual artwork created or enhanced through the use of artificial intelligence (AI) programs.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media", individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose towards the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.
DALL-E, DALL-E 2, and DALL-E 3 are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions known as prompts.
Midjourney is a generative artificial intelligence program and service created and hosted by the San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called prompts, similar to OpenAI's DALL-E and Stability AI's Stable Diffusion. It is one of the technologies of the AI boom.
ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022. It is currently based on the GPT-4o large language model (LLM). ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence (AI). Some observers have raised concern about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.
In the field of artificial intelligence (AI), a hallucination or artificial hallucination is a response generated by AI that contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses rather than perceptual experiences.
A generative pre-trained transformer (GPT) is a type of large language model (LLM) and a prominent framework for generative artificial intelligence. It is an artificial neural network that is used in natural language processing by machines. It is based on the transformer deep learning architecture, pre-trained on large data sets of unlabeled text, and able to generate novel human-like content. As of 2023, most LLMs had these characteristics and are sometimes referred to broadly as GPTs.
The dead Internet theory is an online conspiracy theory asserting that, as the result of a coordinated and intentional effort, the Internet has since 2016 or 2017 consisted mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity. Proponents of the theory believe these social bots were created intentionally to help manipulate algorithms and boost search results in order to manipulate consumers. Some proponents of the theory accuse government agencies of using bots to manipulate public perception. The dead Internet theory has gained traction because many of the observed phenomena are quantifiable, such as increased bot traffic, but the literature on the subject does not support the full theory.
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
The AI boom is an ongoing period of rapid progress in the field of artificial intelligence (AI) that started in the late 2010s before gaining international prominence in the early 2020s. Examples include protein folding prediction led by Google DeepMind as well as large language models and generative AI applications developed by OpenAI. This period is sometimes referred to as an AI spring, to contrast it with previous AI winters.
In the 2020s, the rapid advancement of deep learning-based generative artificial intelligence models raised questions about whether copyright infringement occurs when such are trained or used. This includes text-to-image models such as Stable Diffusion and large language models such as ChatGPT. As of 2023, there were several pending U.S. lawsuits challenging the use of copyrighted data to train AI models, with defendants arguing that this falls under fair use.
Artificial intelligence detection software aims to determine whether some content was generated using artificial intelligence (AI).
Aleph Alpha GmbH is a German artificial intelligence (AI) startup company founded by Jonas Andrulis and Samuel Weinbach, both of whom have professional experience at companies such as Apple and Deloitte. Based in Heidelberg, the company aims to develop a sovereign technology stack for generative AI that operates independently of U.S. companies and complies with European data protection regulations, including the Artificial Intelligence Act. Aleph Alpha has reportedly established one of the most powerful AI clusters within its own data center, and specializes in developing large language models (LLMs). These models are designed to provide transparency regarding the sources used for generating results and are intended for use by enterprises and governmental agencies. The training of these models has been conducted in five European languages.
Copyleaks is a plagiarism detection platform that uses artificial intelligence (AI) to identify similar and identical content across various formats.
Apple Intelligence is an artificial intelligence system developed by Apple Inc. Relying on a combination of on-device and server processing, it was announced on June 10, 2024, at WWDC 2024 as a built-in feature of Apple's iOS 18, iPadOS 18, and macOS Sequoia. Apple Intelligence is free for all users with supported devices. It launched for developers and testers on July 29, 2024, in U.S. English with the iOS 18.1, macOS 15.1, and iPadOS 18.1 developer betas; it was partially released in October 2024, with a full launch expected by 2025. UK, Australian, Canadian, New Zealand, and South African localized versions of English gained support on December 11, 2024, while Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese will be added over the course of 2025. Apple Intelligence is also set to start rolling out in the European Union in April 2025.