PARRY


PARRY was an early example of a chatbot, implemented in 1972 by psychiatrist Kenneth Colby.


History

PARRY was written in 1972 by psychiatrist Kenneth Colby, then at Stanford University. [1] While ELIZA was a tongue-in-cheek simulation of a Rogerian therapist, PARRY attempted to simulate a person with paranoid schizophrenia. [1] The program implemented a crude model of the behavior of a person with paranoid schizophrenia based on concepts, conceptualizations, and beliefs (judgements about conceptualizations: accept, reject, neutral). It also embodied a conversational strategy, and as such was a much more serious and advanced program than ELIZA. It was described as "ELIZA with attitude". [2]
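The sketch below (in Python) is a minimal, hypothetical illustration of the kind of model described above: conceptualizations paired with a judgement of accept, reject, or neutral, plus a simple keyword-driven conversational strategy. The keywords, beliefs, and replies are invented for illustration and are not taken from Colby's implementation.

```python
# Hypothetical sketch of a PARRY-like belief model: each conceptualization
# carries a judgement (accept / reject / neutral), and a crude conversational
# strategy picks a reply from those judgements. Nothing here is Colby's code.
from enum import Enum


class Judgement(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    NEUTRAL = "neutral"


# Invented belief store: trigger keyword -> (conceptualization, judgement).
BELIEFS = {
    "trust": ("people can be trusted", Judgement.REJECT),
    "doctor": ("the doctor wants to help", Judgement.NEUTRAL),
    "follow": ("someone is following me", Judgement.ACCEPT),
}


def respond(utterance: str) -> str:
    """Reply according to the judgement attached to a matched conceptualization."""
    lowered = utterance.lower()
    for keyword, (concept, judgement) in BELIEFS.items():
        if keyword in lowered:
            if judgement is Judgement.ACCEPT:
                return f"Yes, I am certain that {concept}."
            if judgement is Judgement.REJECT:
                return f"I don't believe that {concept}."
            return f"I have no opinion on whether {concept}."
    # Fallback conversational strategy: steer the dialogue back to a held belief.
    return "Why do you ask? Someone is following me, you know."


if __name__ == "__main__":
    print(respond("Do you think people can be trusted?"))
    # -> "I don't believe that people can be trusted."
```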

PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs. [3] The psychiatrists were able to make the correct identification only 48 percent of the time — a figure consistent with random guessing. [4]
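As a rough illustration of why a 48 percent hit rate is consistent with chance, the sketch below runs an exact two-sided binomial test against a 50 percent guessing baseline. The number of judgements used is hypothetical and chosen only to demonstrate the calculation; the article does not report that figure.

```python
# Illustrative check that an accuracy near 48 percent is statistically
# indistinguishable from coin-flip guessing. The number of judgements (n)
# is hypothetical; only the 48 percent figure comes from the text above.
from math import comb


def binomial_two_sided_p(successes: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial p-value: total probability of all outcomes
    no more likely than the observed one under the guessing hypothesis."""
    def prob(k: int) -> float:
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    observed = prob(successes)
    return min(sum(prob(k) for k in range(n + 1) if prob(k) <= observed + 1e-12), 1.0)


if __name__ == "__main__":
    n = 100       # hypothetical number of identification judgements
    correct = 48  # 48 percent correct, as reported above
    # A large p-value means the result is consistent with random guessing.
    print(f"p = {binomial_two_sided_p(correct, n):.2f}")  # roughly 0.76
```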

PARRY and ELIZA (also known as "the Doctor" [5] ) interacted several times. [1] [6] [7] The most famous of these exchanges occurred at the ICCC 1972, where PARRY and ELIZA were hooked up over ARPANET and responded to each other. [7]


Notes and references

  1. Güven Güzeldere; Stefano Franchi (1995-07-24). "dialogues with colorful personalities of early ai". Stanford Humanities Review, SEHR, volume 4, issue 2: Constructions of the Mind. Stanford University. Retrieved 2008-02-17.
  2. Boden 2006, p. 370.
  3. Colby et al. 1972, p. 220.
  4. Saygin; Cicekli; Akman (2000), "Turing Test: 50 years later" (PDF), Minds and Machines, 10 (4): 463–518, doi:10.1023/A:1011288000451, hdl:11693/24987, S2CID 990084
  5. Alan J. Sondheim. "<nettime> Important Documents from the Early Internet (1972)". nettime.org. Archived from the original on 2008-06-13. Retrieved 2008-02-18. – Transcript of the 1972 document showing the programs DOCTOR (an ELIZA-type program) at Bolt Beranek and Newman and PARRY at the Stanford Artificial Intelligence Laboratory.
  6. V. Cerf (21 January 1972). PARRY encounters the DOCTOR. IETF. doi:10.17487/RFC0439. RFC 439. – Transcript of a session between PARRY and ELIZA. (This is not the dialogue from the ICCC, which took place October 24–26, 1972, whereas this session is from September 18, 1972.)
  7. 1 2 "Computer History Museum – Exhibits – Internet History – 1970's". Computer History Museum . Retrieved 2008-02-18.

Related Research Articles

ELIZA: Early natural language processing computer program

ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation by using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, but it had no representation that could be considered to genuinely understand what was being said by either party. Whereas the ELIZA program itself was written (originally) in MAD-SLIP, the pattern matching directives that contained most of its language capability were provided in separate "scripts", written in a Lisp-like notation. The most famous script, DOCTOR, simulated a psychotherapist of the Rogerian school and used rules, dictated in the script, to respond to user inputs with non-directional questions. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test.
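A minimal, illustrative sketch of this pattern matching and substitution approach appears below, in the spirit of the DOCTOR script. The rules and pronoun reflections are invented and far simpler than Weizenbaum's actual scripts.

```python
# Toy ELIZA-style responder: match the input against simple patterns, reflect
# pronouns, and substitute the captured text into a non-directive question.
# The rules below are invented for illustration only.
import re

# Each rule pairs a pattern with a response template that reuses captured text.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]

# Simple pronoun reflection so the captured text reads naturally when echoed.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}


def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def doctor(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # default non-directive prompt


if __name__ == "__main__":
    print(doctor("I need a break from my job"))
    # -> "Why do you need a break from your job?"
```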

In computer science, the ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface. The effect is a category mistake that arises when the program's symbolic computations are described through terms such as "think", "know" or "understand."

Paranoia is an instinct or thought process that is believed to be heavily influenced by anxiety, suspicion, or fear, often to the point of delusion and irrationality. Paranoid thinking typically includes persecutory beliefs, or beliefs of conspiracy concerning a perceived threat towards oneself. Paranoia is distinct from phobias, which also involve irrational fear, but usually no blame.

Chatbot: Program that simulates conversation

A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.

Sluggish schizophrenia or slow progressive schizophrenia was a diagnostic category used in the Soviet Union to describe what was claimed to be a form of schizophrenia characterized by a slowly progressive course; it was diagnosed even in patients who showed no symptoms of schizophrenia or other psychotic disorders, on the assumption that these symptoms would appear later. It was developed in the 1960s by Soviet psychiatrist Andrei Snezhnevsky and his colleagues, and was used exclusively in the USSR and several Eastern Bloc countries, until the fall of Communism starting in 1989. The diagnosis has long been discredited because of its scientific inadequacy and its use as a means of confining dissenters. It has never been used or recognized outside of the Eastern Bloc, or by international organizations such as the World Health Organization. It is considered a prime example of the political abuse of psychiatry in the Soviet Union.

Bob Kahn: American Internet pioneer, computer scientist

Robert Elliot Kahn is an American electrical engineer who, along with Vint Cerf, first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet.

Rosenhan experiment: Experiment to determine the validity of psychiatric diagnosis

The Rosenhan experiment or Thud experiment was an experiment conducted to determine the validity of psychiatric diagnosis. Participants submitted themselves for evaluation at various psychiatric institutions and feigned hallucinations in order to be admitted, but acted normally from then onward. Each was diagnosed with a psychiatric disorder and given antipsychotic medication. The study was conducted by psychologist David Rosenhan, a Stanford University professor, and published by the journal Science in 1973 under the title "On Being Sane in Insane Places".

Disorganized schizophrenia, or hebephrenia, was a subtype of schizophrenia prior to 2013. Subtypes of schizophrenia were no longer recognized as separate conditions in the DSM 5, published in 2013. The disorder is no longer listed in the 11th revision of the International Classification of Diseases (ICD-11).

Robin Murray: British psychiatrist and professor (born 1944)

Sir Robin MacGregor Murray FRS is a Scottish psychiatrist and Professor of Psychiatric Research at the Institute of Psychiatry, King's College London. He has treated patients with schizophrenia and bipolar illness who were referred to the National Psychosis Unit of the South London and Maudsley NHS Trust because they failed to respond to treatment, or could not obtain appropriate treatment, locally; he sees patients privately if they are unable to obtain an NHS referral.

A physical symbol system takes physical patterns (symbols), combines them into structures (expressions), and manipulates them to produce new expressions.
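As a toy illustration of this idea, the sketch below treats strings as symbols, nested tuples of symbols as expressions, and a single invented rewrite rule (double-negation elimination) as the manipulation that produces new expressions.

```python
# Toy physical-symbol-system sketch: symbols are strings, expressions are
# nested tuples of symbols, and rewrite() manipulates an expression to
# produce a new one. The single rule used here is invented for illustration.

def rewrite(expr):
    """Apply one illustrative rule: ("not", ("not", X)) simplifies to X."""
    if isinstance(expr, tuple) and len(expr) == 2 and expr[0] == "not":
        inner = rewrite(expr[1])
        if isinstance(inner, tuple) and len(inner) == 2 and inner[0] == "not":
            return inner[1]          # double negation removed: a new expression
        return ("not", inner)
    return expr


if __name__ == "__main__":
    expression = ("not", ("not", ("raining",)))
    print(rewrite(expression))  # -> ('raining',)
```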

History of artificial intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

Paraphrenia is a mental disorder characterized by an organized system of paranoid delusions with or without hallucinations and without deterioration of intellect or personality.

Kenneth Mark Colby was an American psychiatrist dedicated to the theory and application of computer science and artificial intelligence to psychiatry. Colby was a pioneer in the development of computer technology as a tool to try to understand cognitive functions and to assist both patients and doctors in the treatment process. He is perhaps best known for the development of a computer program called PARRY, which mimicked a person with paranoid schizophrenia and could "converse" with others. PARRY sparked serious debate about the possibility and nature of machine intelligence.

Psychiatry is the medical specialty devoted to the diagnosis, prevention, and treatment of deleterious mental conditions. These include various matters related to mood, behaviour, cognition, and perceptions.

The confederate effect is the phenomenon of people falsely classifying human intelligence as machine intelligence during Turing tests. For example, in the Loebner Prize, during which a tester conducts a text exchange with one human and one artificial-intelligence chatbot and is tasked with identifying which is which, the confederate effect describes the tester inaccurately identifying the human as the machine.

Citizens Commission on Human Rights: Scientology-related organization

The Citizens Commission on Human Rights International (CCHR) is a nonprofit organization established in 1969 by the Church of Scientology and psychiatrist Thomas Szasz, headquartered in Los Angeles, California. Its stated mission is to "eradicate abuses committed under the guise of mental health and enact patient and consumer protections." Many critics regard it as a Scientology front group whose purpose is to push the organization's anti-psychiatric agenda.

Turing test: Test of a machine's ability to imitate human intelligence

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
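The sketch below is a minimal, hypothetical harness for the text-only setup described above: an evaluator exchanges typed questions with two hidden respondents, one human and one machine, and then names the one believed to be the machine. The function names and structure are invented for illustration; they do not reproduce any standard implementation.

```python
# Hypothetical text-only imitation-game harness. Respondents map a question
# to a typed reply; the evaluator sees only transcripts labelled A and B and
# must name the label it believes is the machine. All names are invented.
import random


def imitation_game(human, machine, evaluator_guess, questions):
    """Run one session; return True if the machine goes undetected ("passes")."""
    labels = ["A", "B"]
    random.shuffle(labels)                       # hide which label is which
    assignment = {labels[0]: human, labels[1]: machine}
    machine_label = labels[1]
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in assignment.items()
    }
    guess = evaluator_guess(transcripts)         # evaluator names the suspected machine
    return guess != machine_label


if __name__ == "__main__":
    questions = ["What is 2 + 2?", "Describe your childhood."]
    human = lambda q: "Let me think... " + q.lower()
    machine = lambda q: "Interesting question: " + q.lower()
    # A naive evaluator that guesses at random, for demonstration only.
    evaluator = lambda transcripts: random.choice(list(transcripts))
    print("Machine undetected:", imitation_game(human, machine, evaluator, questions))
```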

Simple-type schizophrenia is a sub-type of schizophrenia included in the International Classification of Diseases (ICD-10), in which it is classified as a mental and behaviour disorder. It is not included in the current Diagnostic and Statistical Manual of Mental Disorders (DSM-5) or in the ICD-11, effective 1 January 2022. Simple-type schizophrenia is characterized by negative ("deficit") symptoms, such as avolition, apathy, anhedonia, reduced affect display, lack of initiative, lack of motivation, and low activity, with an absence of hallucinations or delusions of any kind.