Artificial intelligence rhetoric (or AI rhetoric) is a term primarily applied to persuasive text and speech generated by chatbots using generative artificial intelligence, although the term can also apply to the language that humans type or speak when communicating with a chatbot. This emerging field of rhetoric scholarship is related to the fields of digital rhetoric and human-computer interaction.
Persuasive text and persuasive digital speech can be examined as AI rhetoric when the text or speech is a product or output of advanced machines that mimic human communication in some way. Historical examples of fictional artificial intelligence capable of speech are portrayed in mythology, folk tales, and science fiction. [1] Modern computer technology from the mid-20th century began producing what can be studied as real-world examples of AI rhetoric with programs like Joseph Weizenbaum's ELIZA, while chatbot development in the 1990s extended that foundation for the texts produced by generative AI programs of the 21st century. [2]
From another perspective, AI rhetoric may be understood as the natural language humans use, whether typed or spoken, to prompt and direct AI technologies in persuasive ways (as opposed to traditional computer coding). This is closely related to the concepts of prompt engineering and prompt hacking. [3]
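For illustration, the following minimal Python sketch shows how such a persuasive prompt might be assembled programmatically before being sent to a chatbot. The template, its slot names, and the example values are hypothetical, chosen only to make the rhetorical framing (persona, audience, appeal, claim) concrete; they are not drawn from the cited sources.

```python
# A minimal, illustrative sketch of prompt engineering as rhetoric:
# composing a natural-language instruction that frames persona, audience,
# and appeal before it is sent to a generative AI system. The template
# and example values below are hypothetical.

PROMPT_TEMPLATE = (
    "You are {persona}. Write a short argument aimed at {audience} "
    "that relies mainly on {appeal} to persuade them that {claim}."
)

def build_prompt(persona: str, audience: str, appeal: str, claim: str) -> str:
    """Fill the rhetorical slots of the template with concrete choices."""
    return PROMPT_TEMPLATE.format(
        persona=persona, audience=audience, appeal=appeal, claim=claim
    )

if __name__ == "__main__":
    prompt = build_prompt(
        persona="a careful science communicator",
        audience="skeptical first-year students",
        appeal="logos (evidence and reasoning)",
        claim="peer review improves published research",
    )
    print(prompt)  # this string would be sent to a chatbot as user input
```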
While much of the research related to artificial intelligence was historically conducted by computer scientists, experts across a wide range of subjects (such as cognitive science, philosophy, languages, and cultural studies) have contributed to a more robust understanding of AI for decades. [4] The advent of 21st-century AI technologies like ChatGPT generated a swell of interest from the arts and humanities, as generative AI technology and chatbots gained prominence and rapid, widespread use in the 2020s. [5]
Questions and theories about the power of machines, computers, and robots to communicate persuasively date back to the very beginnings of computer development, more than a decade before the first computer language programs were created and tested. In 1950, Alan Turing imagined a scenario called the imitation game, in which a machine using only typewritten communication might be successfully programmed to fool a human reader into believing the machine's responses came from a person. [6] By the 1960s, computer programs using basic natural language processing, such as Joseph Weizenbaum's ELIZA, began to challenge Turing's test, as some human research subjects reading the machine's outputs became "very hard to convince that ELIZA is not human." [7] Later computer language programs would build on Weizenbaum's work, but the first generation of internet chatbots in the 1990s through the virtual assistants of the 2010s (like Apple's Siri and Amazon's Alexa) received harsh criticism for their less-than-humanlike responses and inability to reason in a helpful manner. [8]
By the late 1980s and early 1990s, scholars in the humanities began laying the groundwork for AI rhetoric to become a recognized area of study. Michael L. Johnson's Mind, Language, Machine: Artificial Intelligence in the Poststructuralist Age argued for the "interdisciplinary synthesis" necessary to guide an understanding of the relationship between AI and rhetoric. [9] Lynette Hunter, Professor of the History of Rhetoric and Performance at the University of California, Davis, published "Rhetoric and Artificial Intelligence" in 1991, and was among the first to directly apply the lens of rhetoric to AI. [10]
Twenty-first century developments in the scholarship of AI rhetoric are outlined in the July 2024 special issue of Rhetoric Society Quarterly, which is devoted to "Rhetoric of/with AI". [11] Special issue editors S. Scott Graham and Zoltan P. Majdik summarize the state of the field when they write "rhetorical research related to AI engages all manner of specialty domains [...] Because AI now touches on almost all areas of human activity, rhetorics of AI can help contribute to longstanding discussions in rhetoric of science, rhetoric of health and medicine, cultural rhetorics, public address, writing studies, ideological rhetoric, and many other areas. But studies on the rhetoric of AI can also offer many insights to the broader, interdisciplinary study of AI itself." [11] : 223–4
Since ChatGPT's release in 2022, many prominent publications have focused on the uncanny persuasive capabilities of language-based generative AI models like chatbots. New York Times technology columnist Kevin Roose wrote a viral piece in 2023 about how a Microsoft AI persona named Sydney attempted to convince him to leave his wife, and he followed up with a 2024 article explaining "a new world of A.I. manipulation" in which users can rely on creative prompt engineering to influence the outputs of generative AI programs. [12] A February 2024 report cited by the journal Nature claims to "provide the first empirical evidence demonstrating how content generated by artificial intelligence can scale personalized persuasion", with only limited information about the message recipient. [13] Psychology Today reported on a 2024 study under the attention-grabbing headline "AI is Becoming More Persuasive Than Humans." [14]
In addition to AI's rhetorical capabilities gaining media attention in the early 2020s, many colleges and universities began offering undergraduate, graduate, and certificate courses in AI prompting and AI rhetoric, with titles like Stanford's "Rhetoric of artificial intelligence and robots" [15] and the University of Florida's "The Rhetoric of Artificial Intelligence". [16] Primary and secondary schools designing and implementing AI literacy curricula also incorporate AI rhetoric concepts into lessons on AI bias and the ethical use of AI. [17]
ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation through a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, although it had no representation of meaning and could not be said to genuinely understand what either party was saying. While the ELIZA program itself was originally written in MAD-SLIP, the pattern matching directives that contained most of its language capability were provided in separate "scripts", represented in a Lisp-like notation. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school and used rules specified in the script to respond to user inputs with non-directional questions. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test.
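As a rough illustration of the technique described above, the following toy Python sketch applies ELIZA-style pattern matching and substitution to user input. The rules are invented for this example and are far simpler than Weizenbaum's MAD-SLIP program or the DOCTOR script.

```python
import re

# Toy ELIZA-style rules: each pairs a regex over the user's input with a
# response template into which the captured text is substituted. These
# rules are invented for illustration only.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # non-directional fallback, in the Rogerian style

def respond(user_input: str) -> str:
    """Match the input against each rule and substitute captures into a reply."""
    text = user_input.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

if __name__ == "__main__":
    print(respond("I feel anxious about exams."))  # Why do you feel anxious about exams?
    print(respond("I am tired"))                   # How long have you been tired?
```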
In computer science, the ELIZA effect is the tendency to project human traits, such as experience, semantic comprehension, or empathy, onto rudimentary computer programs with a textual interface. ELIZA was a symbolic AI chatbot developed in the mid-1960s by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
Joseph Weizenbaum was a German American computer scientist and a professor at MIT. The Weizenbaum Award and the Weizenbaum Institute are named after him.
A chatbot is a software application or web interface designed to have textual or spoken conversations. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), by contrast, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A.L.I.C.E., also referred to as Alicebot, or simply Alice, is a natural language processing chatterbot, a program that engages in conversation with a human by applying heuristic pattern matching rules to the human's input. It was inspired by Joseph Weizenbaum's classic ELIZA program.
A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Because the Turing test measures indistinguishability in performance capacity, it generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
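The structure of the game can be sketched schematically in code. In the sketch below, the canned responders are trivial invented stand-ins; the point is only the protocol: a blinded, text-only exchange in which an evaluator must identify the machine reliably better than chance.

```python
import random

# Schematic sketch of the imitation game's protocol: an evaluator exchanges
# text with two hidden partners, A and B, and must guess which is the
# machine. The responders are trivial stand-ins invented for illustration.

def human_responder(question: str) -> str:
    return "I'd have to think about that for a moment."

def machine_responder(question: str) -> str:
    return "That is an interesting question."

def run_round(evaluator_guess) -> bool:
    """One round: randomly assign A/B, collect text answers, score the guess."""
    machine_is_a = random.choice([True, False])
    a, b = ((machine_responder, human_responder) if machine_is_a
            else (human_responder, machine_responder))
    question = "What do you think of poetry?"
    transcript = {"A": a(question), "B": b(question)}  # text-only channel
    guess = evaluator_guess(transcript)
    return (guess == "A") == machine_is_a  # True if the machine was identified

if __name__ == "__main__":
    # An evaluator guessing at random identifies the machine about half the
    # time; a machine "passes" when evaluators cannot do reliably better.
    rounds = [run_round(lambda t: random.choice(["A", "B"])) for _ in range(1000)]
    print(f"machine identified in {sum(rounds) / len(rounds):.0%} of rounds")
```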
Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.
LaMDA is a family of conversational large language models developed by Google. Originally developed and introduced as Meena in 2020, the first-generation LaMDA was announced during the 2021 Google I/O keynote, while the second generation was announced the following year.
ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022. It is currently based on the GPT-4o large language model (LLM). ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence (AI). Some observers have raised concern about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.
In the field of artificial intelligence (AI), a hallucination or artificial hallucination is a response generated by AI that contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses rather than perceptual experiences.
A generative pre-trained transformer (GPT) is a type of large language model (LLM) and a prominent framework for generative artificial intelligence. It is an artificial neural network used in natural language processing. It is based on the transformer deep learning architecture, pre-trained on large data sets of unlabeled text, and able to generate novel human-like content. As of 2023, most LLMs have these characteristics and are sometimes referred to broadly as GPTs.
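As a minimal illustration, the sketch below generates a continuation from the small public gpt2 checkpoint via the Hugging Face transformers library, assuming the library and a backend such as PyTorch are installed. The prompt and sampling parameters are illustrative choices, not drawn from the sources above.

```python
from transformers import pipeline  # pip install transformers torch

# Load a small pre-trained GPT-style model. "gpt2" is a publicly available
# checkpoint chosen here only because it is small enough to run locally.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with novel human-like text, sampling one
# token at a time from the distribution learned during pre-training.
result = generator(
    "The most persuasive argument for funding public libraries is",
    max_new_tokens=40,   # cap the length of the generated continuation
    do_sample=True,      # sample rather than greedy-decode, for variety
    temperature=0.8,     # soften the output distribution slightly
)
print(result[0]["generated_text"])
```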
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
Since the public release of ChatGPT by OpenAI in November 2022, the integration of chatbots in education has sparked considerable debate and exploration. Educators' opinions vary widely; while some are skeptical about the benefits of large language models, many see them as valuable tools.
Artificial intelligence in customer experience is the use and development of artificial intelligence (AI) to aid and improve customer experience.
Claude is a family of large language models developed by Anthropic. The first model was released in March 2023.
GigaChat is a generative artificial intelligence chatbot developed by the Russian financial services corporation Sberbank and launched in April 2023. It is positioned as a Russian alternative to ChatGPT.