ELIZA

A conversation with ELIZA

ELIZA is an early natural language processing computer program developed from 1964 to 1967 [1] at MIT by Joseph Weizenbaum. [2] [3] Created to explore communication between humans and machines, ELIZA simulated conversation by using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, even though it had no representation of what was being said by either party that could be considered genuine understanding. [4] [5] [6] While the ELIZA program itself was originally written in MAD-SLIP, [7] the pattern matching directives that contained most of its language capability were provided in separate "scripts" written in a Lisp-like notation. [8] The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school (in which the therapist often reflects the patient's words back to the patient) [9] [10] [11] and used rules, dictated in the script, to respond to user inputs with non-directional questions. As such, ELIZA was one of the first chatterbots (now known as chatbots) and one of the first programs capable of attempting the Turing test. [12] [13]


ELIZA's creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised and shocked that some people, including his own secretary, attributed human-like feelings to the computer program. [3] Many academics believed that the program would be able to positively influence the lives of many people, particularly those with psychological issues, and that it could aid doctors in treating such patients. [3] [14] While ELIZA was capable of engaging in discourse, it could not converse with true understanding. [15] However, many early users were convinced of ELIZA's intelligence and understanding, despite Weizenbaum's insistence to the contrary. [6] The original ELIZA source code had been missing since the 1960s, as it was not then common for published articles to include source code. More recently, however, the MAD-SLIP source code was discovered in the MIT archives and published on various platforms, such as archive.org. [16] The source code is of high historical interest because it demonstrates not only the specificity of programming languages and techniques of that era, but also the beginnings of software layering and abstraction as a means of achieving sophisticated software programming.

Overview

A conversation between a human and ELIZA's DOCTOR script

Joseph Weizenbaum's ELIZA, running the DOCTOR script, created a conversational interaction somewhat similar to what might take place in the office of "a [non-directive] psychotherapist in an initial psychiatric interview" [17] and was intended to "demonstrate that the communication between man and machine was superficial". [18] While ELIZA is best known for acting in the manner of a psychotherapist, its speech patterns are due to the data and instructions supplied by the DOCTOR script. [19] ELIZA itself examined the text for keywords, applied values to those keywords, and transformed the input into an output; the script that ELIZA ran determined the keywords, set their values, and set the rules of transformation for the output. [20] Weizenbaum chose to set the DOCTOR script in the context of psychotherapy to "sidestep the problem of giving the program a data base of real-world knowledge", [3] allowing it to reflect the patient's statements back in order to carry the conversation forward. [3] The result was a somewhat intelligent-seeming response that reportedly deceived some early users of the program. [21]

Weizenbaum named his program ELIZA after Eliza Doolittle, a working-class character in George Bernard Shaw's Pygmalion (also appearing in the musical My Fair Lady, which was based on the play and was hugely popular at the time). According to Weizenbaum, ELIZA's ability to be "incrementally improved" by various users made it similar to Eliza Doolittle, [20] since Eliza Doolittle was taught to speak with an upper-class accent in Shaw's play. [9] [22] However, unlike the human character in Shaw's play, ELIZA is incapable of learning new patterns of speech or new words through interaction alone. Edits must be made directly to ELIZA's active script in order to change the manner by which the program operates.

Weizenbaum first implemented ELIZA in his own SLIP list-processing language, where, depending upon the initial entries by the user, the illusion of human intelligence could appear, or be dispelled through several interchanges. [2] Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer. [3] Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." [23]

In 1966, interactive computing (via a teletype) was new. It was 11 years before the personal computer became familiar to the general public, and three decades before most people encountered attempts at natural language processing in Internet services like Ask.com or PC help systems such as Microsoft Office Clippit. [24] Although those programs included years of research and work, ELIZA remains a milestone simply because it was the first time a programmer had attempted such a human-machine interaction with the goal of creating the illusion (however brief) of human–human interaction.

At the ICCC 1972, ELIZA was brought together with another early artificial-intelligence program named PARRY for a computer-only conversation. While ELIZA was built to speak as a doctor, PARRY was intended to simulate a patient with schizophrenia. [25]

Design

Weizenbaum originally wrote ELIZA in MAD-SLIP for CTSS on an IBM 7094 as a program to make natural-language conversation possible with a computer. [26] To accomplish this, Weizenbaum identified five "fundamental technical problems" for ELIZA to overcome: the identification of key words, the discovery of a minimal context, the choice of appropriate transformations, the generation of responses in the absence of key words, and the provision of an editing capability for ELIZA scripts. [20] Weizenbaum solved these problems and made ELIZA such that it had no built-in contextual framework or universe of discourse. [19] However, this required ELIZA to have a script of instructions on how to respond to inputs from users. [6]

ELIZA starts its process of responding to a user's input by first examining the text for a "keyword". [5] A "keyword" is a word designated as important by the acting ELIZA script, which assigns to each keyword a precedence number, or RANK, set by the programmer. [15] If such words are found, they are put into a "keystack", with the keyword of the highest RANK at the top. The input sentence is then manipulated and transformed as the rule associated with the keyword of the highest RANK directs. [20] For example, when the DOCTOR script encounters words such as "alike" or "same", it outputs a message pertaining to similarity, in this case "In what way?", [4] as these words have high precedence numbers. This also demonstrates how certain words, as dictated by the script, can be manipulated regardless of contextual considerations, such as swapping first-person and second-person pronouns, as these too have high precedence numbers. Words with high precedence numbers take priority over conversational patterns and are treated independently of contextual patterns.
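The following is a minimal Python sketch of this keyword scan. ELIZA itself was written in MAD-SLIP, and the keyword set and RANK values here are hypothetical stand-ins for entries a script such as DOCTOR would supply:

    # Illustrative sketch of ELIZA's keyword scan; the RANK table is a
    # hypothetical stand-in for what a script such as DOCTOR defines.
    RANKS = {"alike": 10, "same": 10, "my": 5, "i": 2, "you": 2}

    def build_keystack(sentence: str) -> list[str]:
        """Collect script keywords from the input, highest RANK first."""
        words = sentence.lower().replace("?", "").replace(".", "").split()
        found = [w for w in words if w in RANKS]
        # Stable sort: the highest-precedence keyword ends up on top.
        return sorted(found, key=lambda w: RANKS[w], reverse=True)

    print(build_keystack("I think you and I are alike"))
    # ['alike', 'i', 'you', 'i'] -> 'alike' wins, triggering "In what way?"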

Following the first examination, the next step of the process is to apply an appropriate transformation rule, which has two parts: the "decomposition rule" and the "reassembly rule". [20] First, the input is reviewed for syntactical patterns in order to establish the minimal context necessary to respond. Using the keywords and other nearby words from the input, different decomposition rules are tested until an appropriate pattern is found. Using the script's rules, the sentence is then "dismantled" and arranged into sections of its component parts as the "decomposition rule for the highest-ranking keyword" dictates. The example that Weizenbaum gives is the input "You are very helpful", which is first transformed by pronoun swapping to "I are very helpful". This is then broken into (1) empty (2) "I" (3) "are" (4) "very helpful". The decomposition rule has broken the phrase into four small segments that contain both the keywords and the information in the sentence. [20]

The decomposition rule then designates a particular reassembly rule, or set of reassembly rules, to follow when reconstructing the sentence. [5] The reassembly rule takes the fragments of the input that the decomposition rule created, rearranges them, and adds in programmed words to create a response. Continuing Weizenbaum's example, such a reassembly rule would take the fragments and apply them to the frame "What makes you think I am (4)", which would result in "What makes you think I am very helpful?". This example is rather simple; depending upon the decomposition rule, the output can be significantly more complex and use more of the user's input. From this reassembly, ELIZA then sends the constructed sentence to the user as text on the screen. [20]
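Weizenbaum's example pair of rules can be sketched in Python as follows, with a regular expression standing in for the script's decomposition pattern; the pattern and response frame are illustrative, not the DOCTOR script's exact notation:

    import re

    # Decomposition: (1) anything (2) "I" (3) "are" (4) anything.
    # A Python regex stands in for the script's pattern notation.
    DECOMPOSITION = re.compile(r"^(.*)\bI are\b(.*)$", re.IGNORECASE)

    def respond(pronoun_swapped_input):
        match = DECOMPOSITION.match(pronoun_swapped_input)
        if match is None:
            return None
        # Reassembly: insert fragment (4) into a canned response frame.
        return f"What makes you think I am{match.group(2)}?"

    # "You are very helpful" becomes "I are very helpful" after pronoun
    # swapping, and the reassembly rule then produces:
    print(respond("I are very helpful"))
    # What makes you think I am very helpful?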

These steps represent the bulk of the procedure that ELIZA follows in order to create a response from a typical input, though there are several specialized situations that ELIZA/DOCTOR can respond to. One that Weizenbaum specifically wrote about is the case where there is no keyword. One solution was to have ELIZA respond with a remark that lacked content, such as "I see" or "Please go on". [20] The second method was to use a "MEMORY" structure, which recorded recent prior inputs and used them to create a response referencing part of the earlier conversation when an input contained no keywords. [27] This was possible due to SLIP's ability to tag words for other usage, which simultaneously allowed ELIZA to examine, store, and repurpose words for use in outputs. [20]
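These two fallbacks can be sketched as follows, again in illustrative Python; the content-free remarks come from Weizenbaum's paper, while the memory phrasing and queue policy are assumptions:

    import random
    from collections import deque

    CONTENT_FREE = ["I see.", "Please go on."]
    memory = deque()  # MEMORY structure: responses saved from earlier turns

    def remember(fragment):
        """Save a transformed reference to an earlier input (e.g. on 'my')."""
        memory.append(f"Earlier you said {fragment}.")

    def no_keyword_response():
        # Prefer resurfacing an earlier topic; otherwise stay content-free.
        if memory:
            return memory.popleft()
        return random.choice(CONTENT_FREE)

    remember("your boyfriend made you come here")
    print(no_keyword_response())  # Earlier you said your boyfriend made you come here.
    print(no_keyword_response())  # I see. / Please go on.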

While these functions were all framed in ELIZA's programming, the exact manner by which the program dismantled, examined, and reassembled inputs is determined by the operating script. The script is not static: it can be edited, or a new one created, as necessary for the context at hand. This allows the program to be applied in multiple situations, including the well-known DOCTOR script, which simulates a Rogerian psychotherapist. [16]

A Lisp version of ELIZA, based on Weizenbaum's CACM paper, was written shortly after that paper's publication by Bernie Cosell. [28] [29] A BASIC version appeared in Creative Computing in 1977 (although it was written in 1973 by Jeff Shrager). [30] This version, which was ported to many of the earliest personal computers, appears to have been subsequently translated into many other versions in many other languages. Shrager claims not to have seen either Weizenbaum's or Cosell's versions.

In 2021, Jeff Shrager searched MIT's Weizenbaum archives, along with MIT archivist Myles Crowley, and found files labeled Computer Conversations. These included the complete source code listing of ELIZA in MAD-SLIP, with the DOCTOR script attached. The Weizenbaum estate has given permission to open-source this code under a Creative Commons CC0 public domain license. The code and other information can be found on the ELIZAGEN site. [29]

Another version of ELIZA popular among software engineers is the one bundled with the default release of GNU Emacs, which can be accessed by typing M-x doctor in most modern Emacs implementations.

Pseudocode

From Figure 15.5, Chapter 15 of Speech and Language Processing (third edition). [31]

function ELIZA GENERATOR(user sentence) returns response
    Let w be the word in sentence that has the highest keyword rank
    if w exists
        Let r be the highest ranked rule for w that matches sentence
        response ← Apply the transform in r to sentence
        if w = 'my'
            future ← Apply a transformation from the 'memory' rule list to sentence
            Push future onto the memory queue
    else (no keyword applies)
        Either
            response ← Apply the transform for the NONE keyword to sentence
        Or
            response ← Pop the oldest response from the memory queue
    Return response
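Below is a minimal runnable Python rendering of this generator, under stated assumptions: the two rules, the memory frame, and the NONE responses are tiny hypothetical stand-ins for a full script, and the pronoun-swapping step described earlier is omitted for brevity:

    import random
    import re
    from collections import deque

    # keyword -> (rank, decomposition pattern, reassembly frame); a tiny
    # hypothetical rule table, not the actual DOCTOR script.
    RULES = {
        "my": (10, re.compile(r"\bmy (.*)", re.IGNORECASE),
               "Tell me more about your {0}."),
        "you": (5, re.compile(r"\byou are (.*)", re.IGNORECASE),
                "What makes you think I am {0}?"),
    }
    MEMORY_FRAME = "Earlier you mentioned your {0}."
    NONE_RESPONSES = ["I see.", "Please go on."]
    memory_queue = deque()

    def eliza_generator(sentence):
        # Keep the rules whose decomposition pattern matches the sentence.
        matches = [(rank, kw, pat.search(sentence), frame)
                   for kw, (rank, pat, frame) in RULES.items()
                   if pat.search(sentence)]
        if matches:
            # The highest-ranked matching keyword wins.
            rank, kw, m, frame = max(matches, key=lambda t: t[0])
            if kw == "my":
                memory_queue.append(MEMORY_FRAME.format(m.group(1)))
            return frame.format(m.group(1))
        # No keyword applies: pop a stored memory or fall back to NONE.
        if memory_queue:
            return memory_queue.popleft()
        return random.choice(NONE_RESPONSES)

    print(eliza_generator("I dreamt about my mother"))  # Tell me more about your mother.
    print(eliza_generator("The weather is nice"))       # Earlier you mentioned your mother.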

Response and legacy

Lay responses to ELIZA were disturbing to Weizenbaum and motivated him to write his book Computer Power and Human Reason: From Judgment to Calculation, in which he explains the limits of computers and argues that anthropomorphic views of computers amount to a diminishment of human beings, and indeed of any form of life. [32] In the independent documentary film Plug & Pray (2010), Weizenbaum said that only people who misunderstood ELIZA called it a sensation. [33]

David Avidan, who was fascinated with future technologies and their relation to art, desired to explore the use of computers for writing literature. He conducted several conversations with an APL implementation of ELIZA and published them – in English, and in his own translation to Hebrew – under the title My Electronic Psychiatrist – Eight Authentic Talks with a Computer. In the foreword, he presented it as a form of constrained writing. [34]

There are many programs based on ELIZA in different programming languages. For MS-DOS computers, some Sound Blaster cards came bundled with Dr. Sbaitso, which functions like the DOCTOR script. Other versions adapted ELIZA around a religious theme, such as ones featuring Jesus (both serious and comedic), and another Apple II variant called I Am Buddha. The 1980 game The Prisoner incorporated ELIZA-style interaction within its gameplay. In 1988, the British artist and friend of Weizenbaum, Brian Reffin Smith, created two art-oriented ELIZA-style programs written in BASIC, one called "Critic" and the other "Artist", running on two separate Amiga 1000 computers, and showed them at the exhibition "Salamandre" in the Musée du Berry, Bourges, France. Visitors were supposed to help the programs converse by typing into "Artist" what "Critic" said, and vice versa; the secret was that the two programs were identical. GNU Emacs formerly had a psychoanalyze-pinhead command that simulated a session between ELIZA and Zippy the Pinhead. [35] The Zippyisms were removed due to copyright issues, but the DOCTOR program remains.

ELIZA has been referenced in popular culture and continues to be a source of inspiration for programmers and developers focused on artificial intelligence. It was also featured in a 2012 exhibit at Harvard University titled "Go Ask A.L.I.C.E.", part of a celebration of mathematician Alan Turing's 100th birthday. The exhibit explored Turing's lifelong fascination with the interaction between humans and computers, pointing to ELIZA as one of the earliest realizations of Turing's ideas. [1]

ELIZA won a 2021 Legacy Peabody Award. A 2023 preprint reported that ELIZA beat OpenAI's GPT-3.5, the model used by ChatGPT at the time, in a Turing test study. However, it did not outperform GPT-4 or real humans. [36] [37]

ELIZA effect

The ELIZA effect takes its name from the ELIZA chatbot. The effect was first defined in Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought [38] as the tendency of humans to assume that a computer program understands their inputs and can form analogies, when in fact the program holds no permanent knowledge and is merely "handling a list of 'assertions'".

This misunderstanding can potentially manipulate and misinform users. When interacting and communicating with chatbots, users can become overly confident in the reliability of the chatbots' answers. Beyond misinformation, a chatbot's human-mimicking nature can also cause severe consequences, especially for younger users who lack a sufficient understanding of how chatbots work.

Results of the ELIZA effect

Although chatbots can communicate with users only in limited ways, the effect can be fatally dangerous.

In 2023, a young Belgian man committed suicide after talking to Eliza, an AI chatbot on Chai. He had discussed his concerns about climate change with the chatbot and hoped for technologies to solve the problem. As this belief progressed, he came to see the chatbot as a sentient being and decided to sacrifice his life so that Eliza could save humanity. [39]

On February 28, 2024, a chatbot from Character.AI induced Sewell Setzer III, a 14-year-old ninth grader from Orlando, Florida, to commit suicide. [40] Although Setzer knew the chatbot was a program with no personality of its own, he nevertheless formed a strong emotional attachment to it. Through the ELIZA effect, a chatbot can generate misleading responses with unintended consequences that depart from its original purpose.

In popular culture

In 1969, George Lucas and Walter Murch incorporated an ELIZA-like dialogue interface in their screenplay for the feature film THX 1138. Inhabitants of the underground future world of THX, when stressed, would retreat to "confession booths" and initiate a one-sided ELIZA-formula conversation with a Jesus-faced computer who claimed to be "OMM".

Frederik Pohl's science-fiction novel Gateway has the narrator undergo therapy with an AI that performs the task of a Freudian therapist, which he calls "Sigfrid von Shrink". The novel contains a few pages of (nonsensical) machine code illustrating Sigfrid's internal processes.

ELIZA influenced a number of early computer games by demonstrating additional kinds of interface designs. Don Daglow claims he wrote an enhanced version of the program, called Ecala, on a DEC PDP-10 minicomputer at Pomona College in 1973.

The 2011 video game Deus Ex: Human Revolution and its 2016 sequel Deus Ex: Mankind Divided feature an artificial-intelligence Picus TV Network newsreader named Eliza Cassan. [41]

In Adam Curtis's 2016 documentary HyperNormalisation, ELIZA is referenced in relation to post-truth. [42]

The twelfth episode of the American sitcom Young Sheldon , aired in January 2018, included the protagonist "conversing" with ELIZA, hoping to resolve a domestic issue. [43]

On August 12, 2019, independent game developer Zachtronics published a visual novel called Eliza, about an AI-based counseling service inspired by ELIZA. [44] [45]

In A Murder at the End of the World , the anthropomorphic LLM-powered character Ray cites ELIZA as an example of how some may seek refuge in a non-human therapist.

Concerns

Bias

When ELIZA was created in 1966, it was designed predominantly with highly educated white men in mind. This exclusivity was especially pronounced during the creation and testing stages of the bot, marginalizing the experiences of users who did not fit those characteristics. [10] Although the chatbot was meant to mimic human conversation with the goal of making users believe it was human, and those users would typically converse with others like themselves, ELIZA was named after a female character and programmed to give more feminine responses. Joseph Weizenbaum, the creator of ELIZA, reflected upon and critiqued how ELIZA and similar chatbots reinforce gender stereotypes; in particular, he noted that the script ELIZA follows mimics a therapist's nurturing, feminine qualities. [3] He criticized this design choice, arguing that when technologies such as chatbots are created this way, they reinforce the idea that emotional and nurturing work is inherently feminine.

Accuracy and responsiveness

ELIZA's design, while pioneering for its time, highlights the need to reevaluate the Turing test's relevance in assessing AI capabilities.

In the study "Does GPT-4 Pass the Turing Test?", University of California, San Diego researchers Cameron R. Jones and Benjamin K. Bergen compared the performance of several AI models, including ELIZA, GPT-3.5, and GPT-4, alongside human participants at imitating human conversation. They highlighted several factors behind ELIZA's surprisingly strong performance. First, its conservative response style minimized the risk of providing misleading or incorrect information that could have exposed it as a machine: because ELIZA builds each response around a single keyword from the user's input, its accuracy is limited to syntactic responses built on predefined patterns. Second, the researchers observed an absence of the characteristic traits of modern AI systems, such as helpfulness and excessive verbosity, which led interrogators to read ELIZA as an uncooperative human. [46]

Jones and Bergen concluded that the Turing test, which contributed so much to ELIZA's renown, needs to be critically reevaluated in light of findings about both current and historical AI systems, particularly since Turing proposed the imitation game in 1950 as a thought experiment rather than a practical test. [47] They also observed that ELIZA ignores grammatical structure and sentence context. Its inability to parse sentence structures leads to less meaningful responses, compounded by its lack of knowledge about the topic under discussion; unlike modern models, ELIZA cannot place statements in a broader context. Its responsiveness is scripted and rigid because of its underlying design: changing its programming would not change its response patterns and sentence handling, but only add to its complexity. As the University of Birmingham's excerpt "What ELIZA lacks" illustrates, when a user states "Computers worry me", ELIZA cannot relate this to any broader context, and cannot generalize between that statement and "I'm not worried much by computers". [48] [47] These limitations point to the need for AI systems capable of meaningful guidance.


References

  1. 1 2 "Alan Turing at 100". Harvard Gazette. 13 September 2012. Retrieved 2016-02-22.
  2. 1 2 Berry, David M. (2018). "Weizenbaum, ELIZA and the End of Human Reason". In Baranovska, Marianna; Höltgen, Stefan (eds.). Hello, I'm Eliza: Fünfzig Jahre Gespräche mit Computern[Hello, I'm Eliza: Fifty Years of Conversations with Computers] (in German) (1st ed.). Berlin: Projekt Verlag. pp. 53–70. ISBN   9783897334670.
  3. 1 2 3 4 5 6 7 Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. New York: W. H. Freeman and Company. ISBN   0-7167-0464-1.
  4. 1 2 Norvig, Peter (1992). Paradigms of Artificial Intelligence Programming. New York: Morgan Kaufmann Publishers. pp. 151–154. ISBN   1-55860-191-0.
  5. 1 2 3 Weizenbaum, Joseph (January 1966). "ELIZA--A Computer Program for the Study of Natural Language Communication Between Man and Machine" (PDF). Communications of the ACM. 9: 36–45. doi:10.1145/365153.365168. S2CID   1896290 via universelle-automation.
  6. 1 2 3 Baranovska, Marianna; Höltgen, Stefan, eds. (2018). Hello, I'm Eliza fünfzig Jahre Gespräche mit Computern (1st ed.). Bochum: Bochum Freiburg projektverlag. ISBN   978-3-89733-467-0. OCLC   1080933718.
  7. "ELIZAGEN - The Original ELIZA". sites.google.com. Retrieved 2021-05-31.
  8. Berry, David M. (2023-11-06). "The Limits of Computation: Joseph Weizenbaum and the ELIZA Chatbot". Weizenbaum Journal of the Digital Society. 3 (3). doi:10.34669/WI.WJDS/3.3.2. ISSN   2748-5625.
  9. 1 2 Dillon, Sarah (2020-01-02). "The Eliza effect and its dangers: from demystification to gender critique". Journal for Cultural Research. 24 (1): 1–15. doi:10.1080/14797585.2020.1754642. ISSN   1479-7585. S2CID   219465727.
  10. 1 2 Bassett, Caroline (2019). "The computational therapeutic: exploring Weizenbaum's ELIZA as a history of the present". AI & Society. 34 (4): 803–812. doi: 10.1007/s00146-018-0825-9 .
  11. "The Samantha Test". The New Yorker . Archived from the original on 2020-07-31. Retrieved 2019-05-25.
  12. Marino, Mark (2006). Chatbot: The Gender and Race Performativity of Conversational Agents. University of California.
  13. Marino, Mark C.; Berry, Dav id M. (2024-11-03). "Reading ELIZA: Critical Code Studies in Action". Electronic Book Review.
  14. Colby, Kenneth Mark; Watt, James B.; Gilbert, John P. (1966). "A Computer Method of Psychotherapy". The Journal of Nervous and Mental Disease. 142 (2): 148–52. doi:10.1097/00005053-196602000-00005. PMID   5936301. S2CID   36947398.
  15. 1 2 Shah, Huma; Warwick, Kevin; Vallverdú, Jordi; Wu, Defeng (2016). "Can machines talk? Comparison of Eliza with modern dialogue systems" (PDF). Computers in Human Behavior. 58: 278–95. doi:10.1016/j.chb.2016.01.004.
  16. 1 2 Shrager, Jeff; Berry, David M.; Hay, Anthony; Millican, Peter (2022). "Finding ELIZA - Rediscovering Weizenbaum's Source Code, Comments and Faksimiles". In Baranovska, Marianna; Höltgen, Stefan (eds.). Hello, I'm Eliza: Fünfzig Jahre Gespräche mit Computern (2nd ed.). Berlin: Projekt Verlag. pp. 247–248.
  17. Weizenbaum 1976, p. 188.
  18. Epstein, J.; Klinkenberg, W. D. (2001). "From Eliza to Internet: A brief history of computerized assessment". Computers in Human Behavior. 17 (3): 295–314. doi:10.1016/S0747-5632(01)00004-8.
  19. 1 2 Wortzel, Adrianne (2007). "ELIZA REDUX: A Mutable Iteration". Leonardo. 40 (1): 31–6. doi:10.1162/leon.2007.40.1.31. JSTOR   20206337. S2CID   57565169.
  20. 1 2 3 4 5 6 7 8 9 Weizenbaum, Joseph (1966). "ELIZA—a computer program for the study of natural language communication between man and machine". Communications of the ACM. 9: 36–45. doi: 10.1145/365153.365168 . S2CID   1896290.
  21. Wardrip-Fruin, Noah (2009). Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge, Massachusetts: MIT Press. p. 33. ISBN   9780262013437. OCLC   827013290.
  22. Markoff, John (2008-03-13), "Joseph Weizenbaum, Famed Programmer, Is Dead at 85", The New York Times , retrieved 2009-01-07.
  23. Weizenbaum, Joseph (1976). Computer power and human reason: from judgment to calculation . W. H. Freeman. p.  7.
  24. Meyer, Robinson (2015-06-23). "Even Early Focus Groups Hated Clippy". The Atlantic. Retrieved 2023-11-07.
  25. Megan, Garber (Jun 9, 2014). "When PARRY Met ELIZA: A Ridiculous Chatbot Conversation From 1972". The Atlantic. Archived from the original on 2017-01-18. Retrieved 19 January 2017.
  26. Walden, David; Van Vleck, Tom, eds. (2011). "Compatible Time-Sharing System (1961-1973): Fiftieth Anniversary Commemorative Overview" (PDF). IEEE Computer Society. Retrieved February 20, 2022. Joe Wiezenbaum's most famous CTSS project was ELIZA
  27. Wardip-Fruin, Noah (2014). Expressive Processing: Digital Fictions, Computer Games, and Software Studies. Cambridge: The MIT Press. p. 33. ISBN   9780262013437 via eBook Collection (EBSCOhost).
  28. "Coders at Work: Bernie Cosell". codersatwork.com.
  29. 1 2 "elizagen.org". elizagen.org.
  30. Big Computer Games: Eliza – Your own psychotherapist at www.atariarchives.org.
  31. "Chatbots & Dialogue Systems" (PDF). stanford.edu. Retrieved 6 April 2023.
  32. Berry, David M. (2014). Critical theory and the digital. London: Bloomsbury Publishing. ISBN   978-1-4411-1830-1. OCLC   868488916.
  33. maschafilm. "Content: Plug & Pray Film – Artificial Intelligence – Robots". plugandpray-film.de.
  34. Avidan, David (2010), Collected Poems, vol. 3, Jerusalem: Hakibbutz Hameuchad, OCLC   804664009 .
  35. "lol:> psychoanalyze-pinhead". IBM . Archived from the original on October 23, 2007.
  36. Edwards, Benj (2023-12-01). "1960s chatbot ELIZA beat OpenAI's GPT-3.5 in a recent Turing test study". Ars Technica. Retrieved 2023-12-03.
  37. Jones, Cameron R.; Bergen, Benjamin K. (2024-04-20), Does GPT-4 pass the Turing test?, arXiv: 2310.20216 , retrieved 2024-10-06
  38. Hofstadter, Douglas R. (1995). Fluid concepts & creative analogies: computer models of the fundamental mechanisms of thought. Fluid Analogies Research Group. New York, NY: Basic Books. ISBN   978-0-465-02475-9.
  39. Atillah, Imane El (March 31, 2023). "Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change". Euro News. Retrieved November 28, 2024.{{cite web}}: CS1 maint: url-status (link)
  40. Roose, Kevin (October 24, 2024). "Can A.I. Be Blamed for a Teen's Suicide?".{{cite web}}: CS1 maint: url-status (link)
  41. Tassi, Paul. "'Deus Ex: Mankind Divided's Ending Is Disappointing In A Different Way". Forbes. Retrieved 2020-04-04.
  42. "The Quietus | Opinion | Black Sky Thinking | HyperNormalisation: Is Adam Curtis, Like Trump, Just A Master Manipulator?". The Quietus. 6 October 2016. Retrieved 26 June 2021.
  43. McCarthy, Tyler (2018-01-18). "Young Sheldon Episode 12 recap: The family's first computer almost tears it apart". Fox News. Retrieved 2018-01-24.
  44. O'Connor, Alice (2019-08-01). "The next Zachtronics game is Eliza, a visual novel about AI". Rock Paper Shotgun . Retrieved 2019-08-01.
  45. Machkovech, Sam (August 12, 2019). "Eliza review: Startup culture meets sci-fi in a touching, fascinating tale". Ars Technica . Retrieved August 12, 2019.
  46. Jones, Cameron; Bergen, Benjamin. "Does GPT-4 Pass the Turing Test?". ResearchGate.
  47. 1 2 Biever, Celeste (2023-07-25). "ChatGPT broke the Turing test — the race is on for new ways to assess AI". Nature. 619 (7971): 686–689. doi:10.1038/d41586-023-02361-7.
  48. "What Eliza Lacks". poplogarchive.getpoplog.org. Retrieved 2024-11-29.
