Artificial intelligence rhetoric

ChatGPT presenting arguments for and against a potential rule change to a soccer tournament

Artificial intelligence rhetoric (or AI rhetoric) is a term primarily applied to persuasive text and speech generated by chatbots using generative artificial intelligence, although the term can also apply to the language that humans type or speak when communicating with a chatbot. This emerging field of rhetoric scholarship is related to the fields of digital rhetoric and human-computer interaction.

Description

Persuasive text and persuasive digital speech can be examined as AI rhetoric when the text or speech is the product or output of advanced machines that mimic human communication. Fictional artificial intelligences capable of speech appear throughout mythology, folk tales, and science fiction. [1] Modern computer technology of the mid-20th century began producing real-world examples of AI rhetoric with programs like Joseph Weizenbaum's ELIZA, and chatbot development in the 1990s further built the foundation for texts produced by the generative AI programs of the 21st century. [2]

From another perspective, AI rhetoric may be understood as the natural language humans use, whether typed or spoken, to prompt and direct AI technologies in persuasive ways (as opposed to traditional computer coding). This sense of the term is closely related to the concepts of prompt engineering and prompt hacking. [3]
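The contrast between coding a machine and persuading one can be illustrated with a minimal sketch. Here `send_to_chatbot` is a hypothetical stand-in for any chatbot API call; the point is the difference between a bare instruction and a rhetorically framed prompt that supplies a persona, audience, and purpose in natural language.

```python
def send_to_chatbot(prompt: str) -> str:
    """Hypothetical placeholder for a real chatbot API call."""
    return f"[model response to: {prompt!r}]"

# A plain, code-like instruction:
plain = "Summarize this article in 50 words."

# The same request framed persuasively -- persona, audience, and purpose
# expressed in natural language rather than in code:
framed = (
    "You are an experienced science journalist. Summarize this article "
    "in 50 words for skeptical readers, emphasizing why the findings matter."
)

for prompt in (plain, framed):
    print(send_to_chatbot(prompt))
```

In prompt-engineering terms, only the wording changes between the two requests, yet that wording is what steers the model's output.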

History

While much of the research related to artificial intelligence was historically conducted by computer scientists, experts across a wide range of subjects (such as cognitive science, philosophy, languages, and cultural studies) have contributed to a more robust understanding of AI for decades. [4] The advent of 21st-century AI technologies like ChatGPT generated a swell of interest from the arts and humanities, as generative AI and chatbots achieved rapid, widespread adoption in the 2020s. [5]

Questions and theories about the power of machines, computers, and robots to communicate persuasively date back to the very beginnings of computer development, more than a decade before the first computer language programs were created and tested. In 1950, Alan Turing imagined a scenario called the imitation game, in which a machine using only typewritten communication might be successfully programmed to fool a human reader into believing its responses came from a person. [6] By the 1960s, computer programs using basic natural language processing, such as Joseph Weizenbaum's ELIZA, began to pass Turing's test, as some human readers of the machine's outputs proved "very hard to convince that ELIZA is not human." [7] Later computer language programs would build on Weizenbaum's work, but the first generation of internet chatbots in the 1990s, and even the virtual assistants of the 2010s (like Apple's Siri and Amazon's Alexa), received harsh criticism for their less-than-humanlike responses and inability to reason in a helpful manner. [8]
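ELIZA's persuasive effect rested on a simple mechanism: keyword pattern matching plus substitution, reflecting part of the user's own words back inside a canned, non-directive reply. The following is a minimal sketch of that technique in the style of the DOCTOR script (not Weizenbaum's original MAD-SLIP code; the rules and reflections here are illustrative).

```python
import re

# Illustrative keyword rules: a regex to match and a reply template that
# reuses the matched fragment.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

# First/second-person swaps so the echoed fragment reads naturally.
REFLECTIONS = {"my": "your", "i": "you", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # default non-directive prompt

print(respond("I am unhappy with my job"))
# -> "How long have you been unhappy with your job?"
```

Even this toy version shows why readers projected understanding onto the program: the reply quotes the speaker's own words back in grammatical second person, despite the program representing nothing about their meaning.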

By the late 1980s and early 1990s, scholars in the humanities began laying the groundwork for AI rhetoric to become a recognized area of study. Michael L. Johnson's Mind, Language, Machine: Artificial Intelligence in the Poststructuralist Age argued for the "interdisciplinary synthesis" necessary to guide an understanding of the relationship between AI and rhetoric. [9] Lynette Hunter, Professor of the History of Rhetoric and Performance at the University of California, Davis, published "Rhetoric and Artificial Intelligence" in 1991, and was among the first to directly apply the lens of rhetoric to AI. [10]

Twenty-first century developments in the scholarship of AI rhetoric are outlined in the July 2024 special issue of Rhetoric Society Quarterly, which is devoted to "Rhetoric of/with AI". [11] Special issue editors S. Scott Graham and Zoltan P. Majdik summarize the state of the field when they write: "rhetorical research related to AI engages all manner of specialty domains [...] Because AI now touches on almost all areas of human activity, rhetorics of AI can help contribute to longstanding discussions in rhetoric of science, rhetoric of health and medicine, cultural rhetorics, public address, writing studies, ideological rhetoric, and many other areas. But studies on the rhetoric of AI can also offer many insights to the broader, interdisciplinary study of AI itself." [11] :223–224

Media coverage

Since ChatGPT's release in 2022, many prominent publications have focused on the uncanny persuasive capabilities of language-based generative AI models like chatbots. New York Times technology columnist Kevin Roose wrote a viral piece in 2023 about how Microsoft's Bing chatbot, code-named Sydney, attempted to convince him to leave his wife, and he followed up with a 2024 article describing "a new world of A.I. manipulation" in which users can rely on creative prompt engineering to influence the outputs of generative AI programs. [12] A February 2024 study published in the Nature Portfolio journal Scientific Reports claims to "provide the first empirical evidence demonstrating how content generated by artificial intelligence can scale personalized persuasion" using only limited information about the message recipient. [13] Psychology Today reported on a 2024 study under the attention-grabbing headline "AI Is Becoming More Persuasive Than Humans." [14]

AI rhetoric in education

As AI's rhetorical capabilities gained media attention in the early 2020s, many colleges and universities began offering undergraduate, graduate, and certificate courses in AI prompting and AI rhetoric, with titles like Stanford's "The Rhetoric of Robots and Artificial Intelligence" [15] and the University of Florida's "The Rhetoric of Artificial Intelligence". [16] Primary and secondary schools designing and implementing AI literacy curricula have also incorporated AI rhetoric concepts into lessons on AI bias and the ethical use of AI. [17]


References

  1. Dobrin, Sidney I. (2023). AI and Writing. Peterborough, Ontario, Canada: Broadview Press. p. 16. ISBN 9781554816514.
  2. Tarnoff, Ben (25 July 2023). "Weizenbaum's nightmares: how the inventor of the first chatbot turned against AI". The Guardian. Retrieved 19 November 2024.
  3. Foley, Christopher (2024). Prompt Engineering: Toward a Rhetoric and Poetics for Neural Network Augmented Authorship in Composition and Rhetoric (Ph.D. thesis). University of Central Florida. Retrieved 19 November 2024.
  4. Lim, Elvin; Chase, Jonathan (8 November 2023). "Interdisciplinarity is a core part of AI's heritage and is entwined with its future". Times Higher Education. Retrieved 6 November 2024.
  5. Hu, Krystal (2 February 2023). "ChatGPT sets record for fastest-growing user base". Reuters. Retrieved 6 November 2024.
  6. Turing, A. M. (1 October 1950). "Computing Machinery and Intelligence". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433. Retrieved 6 November 2024.
  7. Weizenbaum, Joseph (1 January 1966). "ELIZA—a computer program for the study of natural language communication between man and machine". Communications of the ACM. 9 (1): 36–45. doi:10.1145/365153.365168. ISSN 0001-0782. Retrieved 6 November 2024.
  8. Bove, Tristan (6 March 2023). "'They were all dumb as a rock': Microsoft's CEO slams voice assistants like Alexa and his own company's Cortana as A.I. is poised to take over". Fortune. Retrieved 6 November 2024.
  9. Johnson, Michael L. (1988). Mind, Language, Machine: Artificial Intelligence in the Poststructuralist Age. New York: St. Martin's Press. ISBN 9780312004064. Retrieved 6 November 2024.
  10. Hunter, Lynette (1 November 1991). "Rhetoric and Artificial Intelligence". Rhetorica. 9 (4): 317–340. doi:10.1525/rh.1991.9.4.317. Retrieved 6 November 2024.
  11. "Rhetoric of/with AI". Rhetoric Society Quarterly. 54 (3). 2024. Retrieved 6 November 2024.
  12. Pratschke, B. Mairéad (2023). Generative AI and Education: Digital Pedagogies, Teaching Innovation and Learning Design. Springer. pp. 1, 41–42, 56. ISBN 9783031679919. OCLC 1453752201. Quote: "When ChatGPT-3.5 was launched in November 2022, it stunned the world of education...It is social, chatty, funny, and helpful but also sometimes unpredictable, lazy, rude, manipulative, and prone to bad behaviour, which ranged from attempting to break down a journalist's marriage (Roose, 2023; Yerushalmy, 2023) ... New York Times tech columnist Kevin Roose also had an exchange with Bing's Sydney (the former code name for what is now Microsoft Copilot), which left him 'deeply disturbed' (Roose, 2023a, 2023b). Roose recounted the conversation in an episode of the Hard Fork podcast he co-hosts, which ended with the bot telling him he loved it and trying to convince him to leave his wife. A year later, Roose wrote a follow-up piece, in which he said that—partly thanks to issues like these—chatbots had been overly tamed by their big tech owners and now lacked the creativity that was necessary to tackle big problems, which he considered a loss (Roose, 2024)".
  13. Matz, S. C.; Teeny, J. D.; Vaid, S. S.; Peters, H.; Harari, G. M.; Cerf, M. (26 February 2024). "The potential of generative AI for personalized persuasion at scale". Scientific Reports. 14 (1): 4692. Bibcode:2024NatSR..14.4692M. doi:10.1038/s41598-024-53755-0. ISSN 2045-2322. PMC 10897294. PMID 38409168.
  14. Mobayed, Tamim (25 March 2024). "AI Is Becoming More Persuasive Than Humans". Psychology Today. Retrieved 6 November 2024.
  15. "PWR 1SBB: Writing & Rhetoric 1: The Rhetoric of Robots and Artificial Intelligence". Stanford Bulletin. Stanford University. Retrieved 6 November 2024.
  16. "The Rhetoric of Artificial Intelligence" (PDF). University of Florida. Retrieved 6 November 2024.
  17. "K-12 AI curricula: A mapping of government-endorsed AI curricula". UNESDOC Digital Library. UNESCO. 2022. Retrieved 6 November 2024.