The confederate effect is the phenomenon of people falsely classifying human intelligence as machine (or artificial) intelligence during Turing tests. For example, in the Loebner Prize, in which a tester conducts a text exchange with one human and one artificial-intelligence chatbot and must identify which is which, the confederate effect describes the tester misidentifying the human as the machine.[1]
The confederate effect is the reverse of the ELIZA effect, which Sherry Turkle describes as humans' "more general tendency to treat responsive computer programs as more intelligent than they really are":[2] that is, anthropomorphizing.
The phenomenon was seen at the 2003 Loebner Prize for Artificial Intelligence, held at the University of Surrey, when both confederate (hidden) humans, one male and one female, were each ranked as a machine by at least one judge. More precisely, Judge 7 and Judge 9 ranked the female 'Confederate 2' as "1.00=definitely a machine", while the male 'Confederate 1' was ranked "1.00=definitely a machine" by Judge 4 and Judge 9.[3] In addition, the genders of these two hidden humans were incorrectly identified (the male taken for female, the female for male) in independent transcript analysis (the 'gender-blurring' phenomenon; see Shah & Henry, 2005).[1]
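The rankings above can be encoded as data and tallied mechanically. The sketch below is illustrative only: the data structure and function names are hypothetical, and the only assumption carried over from the source is that a score of 1.00 meant "definitely a machine" on the judges' scale.

```python
# Hypothetical encoding of the 2003 rankings cited above; 1.00 meant
# "definitely a machine" on the judges' scale.
rankings = {
    "Confederate 1": {"Judge 4": 1.00, "Judge 9": 1.00},  # male hidden human
    "Confederate 2": {"Judge 7": 1.00, "Judge 9": 1.00},  # female hidden human
}

DEFINITELY_MACHINE = 1.00

def confederate_effect_instances(rankings):
    """Return (confederate, judge) pairs in which a hidden human was
    scored as definitely a machine -- each pair is one instance of
    the confederate effect."""
    return [
        (confederate, judge)
        for confederate, judges in rankings.items()
        for judge, score in judges.items()
        if score == DEFINITELY_MACHINE
    ]

instances = confederate_effect_instances(rankings)
print(instances)  # four (confederate, judge) pairs
```

Each of the four judgments cited in the text appears once in the output, so the count of instances matches the article's tally of misclassifications.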
The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors; that is, anthropomorphization.
Sherry Turkle is the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at the Massachusetts Institute of Technology. She obtained an AB in Social Studies and later a PhD in Sociology and Personality Psychology at Harvard University. She now focuses her research on psychoanalysis and human-technology interaction. She has written several books focusing on the psychology of human relationships with technology, especially in the realm of how people relate to computational objects.
The Loebner Prize was an annual competition in artificial intelligence that awarded prizes to the computer programs considered by the judges to be the most human-like. The prize has been reported as defunct since 2020. The format of the competition was that of a standard Turing test: in each round, a human judge simultaneously held textual conversations with a computer program and a human being via computer and, based upon the responses, had to decide which was which.
"Computing Machinery and Intelligence" is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.
Hugh Loebner was an American inventor and social activist, who was notable for sponsoring the Loebner Prize, an embodiment of the Turing test. Loebner held six United States Patents, and was also an outspoken advocate for the decriminalization of prostitution.
Jabberwacky is a chatterbot created by British programmer Rollo Carpenter. Its stated aim is to "simulate natural human chat in an interesting, entertaining and humorous manner". It is an early attempt at creating an artificial intelligence through human interaction.
A reverse Turing test is a Turing test in which the objective or the roles of computers and humans have been reversed. Conventionally, the Turing test is conceived as having a human judge and a computer subject that attempts to appear human; the judge's task is to distinguish the computer subject from a human one. It is presumed that a human subject will always be judged human, and a computer is then said to "pass the Turing test" if it, too, is judged human. Critical to the concept is the parallel situation of the judge facing a human subject, who also attempts to appear human. Any of these roles may be swapped to form a "reverse Turing test".
Rollo Carpenter is the British-born creator of Jabberwacky and Cleverbot, conversational artificial-intelligence (AI) programs that learn from human interaction. Carpenter has worked as CTO of a business-software startup in Silicon Valley. His brother is the artist Merlin Carpenter.
Ned Joel Block is an American philosopher working in philosophy of mind who has made important contributions to the understanding of consciousness and the philosophy of cognitive science. He has been professor of philosophy and psychology at New York University since 1996.
The philosophy of artificial intelligence is a branch of the philosophy of technology that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, because the technology is concerned with the creation of artificial animals or artificial people, the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental.
The minimum intelligent signal test, or MIST, is a variation of the Turing test proposed by Chris McKinstry in which only boolean answers may be given to questions. The purpose of such a test is to provide a quantitative statistical measure of humanness, which may subsequently be used to optimize the performance of artificial intelligence systems intended to imitate human responses.
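One way such a quantitative statistic might be derived from boolean answers is sketched below. This is an illustration under stated assumptions, not McKinstry's published scoring rule: it simply measures agreement with a human-consensus answer key and attaches a normal-approximation standard error.

```python
import math

def mist_score(subject_answers, human_consensus):
    """Fraction of boolean answers agreeing with a human-consensus key,
    with a normal-approximation standard error. A sketch of how a
    quantitative 'humanness' measure could be built from boolean
    responses; the scoring rule here is an assumption, not MIST's."""
    assert len(subject_answers) == len(human_consensus)
    n = len(subject_answers)
    agreements = sum(a == h for a, h in zip(subject_answers, human_consensus))
    p = agreements / n
    stderr = math.sqrt(p * (1 - p) / n)
    return p, stderr

# Hypothetical run: the subject disagrees with humans on one of four items.
score, err = mist_score([True, True, False, True],
                        [True, True, False, False])
print(f"humanness = {score:.2f} +/- {err:.2f}")
```

Because every answer is boolean, the measure reduces to a simple proportion, which is what makes a statistical treatment of "humanness" straightforward in this setting.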
The Verbot (Verbal-Robot) was a popular chatterbot program and Artificial Intelligence Software Development Kit (SDK) for the Windows platform and for the web.
There are a number of competitions and prizes to promote research in artificial intelligence.
Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks. However, within the field of computer science, artificial stupidity is also used to refer to a technique of "dumbing down" computer programs in order to deliberately introduce errors in their responses.
Kenneth Mark Colby was an American psychiatrist dedicated to the theory and application of computer science and artificial intelligence to psychiatry. Colby was a pioneer in the development of computer technology as a tool to try to understand cognitive functions and to assist both patients and doctors in the treatment process. He is perhaps best known for the development of a computer program called PARRY, which mimicked a person with paranoid schizophrenia and could "converse" with others. PARRY sparked serious debate about the possibility and nature of machine intelligence.
The Computer Game Bot Turing Test is a variant of the Turing Test, where a human judge viewing and interacting with a virtual world must distinguish between other humans and game bots, both interacting with the same virtual world. This variant was first proposed in 2008 by Associate Professor Philip Hingston of Edith Cowan University, and implemented through a tournament called the 2K BotPrize.
Eugene Goostman is a chatbot that some regard as having passed the Turing test, a test of a computer's ability to communicate indistinguishably from a human. Developed in Saint Petersburg in 2001 by a group of three programmers, the Russian-born Vladimir Veselov, Ukrainian-born Eugene Demchenko, and Russian-born Sergey Ulasen, Goostman is portrayed as a 13-year-old Ukrainian boy—characteristics that are intended to induce forgiveness in those with whom it interacts for its grammatical errors and lack of general knowledge.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.
AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years", was a conference organized by James Moor, commemorating the 50th anniversary of the Dartmouth workshop which effectively inaugurated the history of artificial intelligence. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.
The Winograd schema challenge (WSC) is a test of machine intelligence proposed by Hector Levesque, a computer scientist at the University of Toronto. Designed to be an improvement on the Turing test, it is a multiple-choice test that employs questions of a very specific structure: they are instances of what are called Winograd schemas, named after Terry Winograd, professor of computer science at Stanford University.
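The structure of a Winograd schema can be made concrete with the classic example due to Terry Winograd: two sentences that differ in a single "special" word, which flips the correct referent of a pronoun. The data layout and function below are illustrative assumptions, not the challenge's official format.

```python
# A Winograd schema represented as data (illustrative layout).
# Swapping the special word "feared" for "advocated" flips which
# candidate the pronoun "they" refers to.
schema = {
    "sentence": "The city councilmen refused the demonstrators a permit "
                "because they {special} violence.",
    "pronoun": "they",
    "candidates": ["the city councilmen", "the demonstrators"],
    "answers": {
        "feared": "the city councilmen",
        "advocated": "the demonstrators",
    },
}

def grade(schema, special_word, chosen_referent):
    """Return True iff the chosen referent is correct for the sentence
    variant selected by the special word."""
    return schema["answers"][special_word] == chosen_referent

print(grade(schema, "feared", "the city councilmen"))     # correct referent
print(grade(schema, "advocated", "the city councilmen"))  # wrong referent
```

The multiple-choice format follows directly: a system is shown one sentence variant and the candidate referents, and its answer is graded against the schema's answer key.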