Artificial stupidity

Artificial stupidity is a term used within the field of computer science to refer to a technique of "dumbing down" computer programs in order to deliberately introduce errors in their responses.

History

Alan Turing, in his 1950 paper Computing Machinery and Intelligence, proposed a test for intelligence which has since become known as the Turing test. [1] While there are a number of different versions, the original test, described by Turing as being based on the "imitation game", involved a "machine intelligence" (a computer running an AI program), a female participant, and an interrogator. Both the AI and the female participant were to claim that they were female, and the interrogator's task was to work out which of the two was the woman by examining their responses to typed questions. [1] It is not clear whether Turing intended the interrogator to know that one of the participants was a computer, but while discussing possible objections to his argument, Turing raised the concern that "machines cannot make mistakes". [1]

It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy.

Turing, 1950, page 448

As Turing then noted, the reply to this is a simple one: the machine should not attempt to "give the right answers to the arithmetic problems". [1] Instead, deliberate errors should be introduced into the computer's responses.

Applications

Within computer science, there are at least two major applications for artificial stupidity: the generation of deliberate errors in chatbots attempting to pass the Turing test or to otherwise fool a participant into believing that they are human; and the deliberate limitation of computer AIs in video games in order to control the game's difficulty.

Chatbots

The first Loebner Prize competition was run in 1991. As reported in The Economist, the winning entry incorporated deliberate errors – described by The Economist as "artificial stupidity" – to fool the judges into believing that it was human. [2] This technique has remained a part of the subsequent Loebner Prize competitions, and reflects the issue first raised by Turing.
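
The deliberate errors in question included typing mistakes. The following is a minimal, hypothetical sketch of the idea; the function name, error rate, and transposition strategy are assumptions for illustration and are not details of any actual Loebner Prize entry.

```python
import random

def add_typos(reply, error_rate=0.03):
    """Occasionally transpose adjacent letters so a chatbot's
    'typing' looks fallibly human (illustrative sketch only)."""
    chars = list(reply)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < error_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap two neighbouring letters
    return "".join(chars)

print(add_typos("I think the weather has been lovely this week."))
```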

Game design

Lars Lidén argues that good game design involves finding a balance between the computer's "intelligence" and the player's ability to win. By finely tuning the level of "artificial stupidity", it is possible to create computer-controlled opponents that allow the player to win, yet do so "without looking unintelligent". [3]

Algorithms

There are many ways to deliberately introduce poor decision-making into search algorithms. Take the minimax algorithm, for example: an adversarial search algorithm commonly used in two-player games, its purpose is to choose moves that maximize the program's chance of winning while avoiding moves that maximize the opponent's chance of winning. Such an algorithm greatly benefits the computer, since computers can search many thousands of moves ahead. To "dumb down" the algorithm and allow for different difficulty levels, its heuristic function has to be tweaked. Normally, a very large score is assigned to winning states; reducing that payoff lowers the likelihood that the algorithm will choose a winning state.
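
As a minimal sketch of this tweak (an illustration, not code from any cited source), the following Python fragment implements a depth-limited minimax player for tic-tac-toe whose evaluation function takes the winning-state payoff as a parameter. With a large win_value the program plays to win; with a small one, the minor positional bonuses can outweigh an actual win, so the program begins passing up winning (and blocking) moves. The board encoding, positional weights, and default values are assumptions made for the example.

```python
AI, HUMAN, EMPTY = "X", "O", " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board, win_value):
    """Heuristic value from the AI's point of view.
    win_value is the 'artificial stupidity' knob: shrinking it lets
    small positional bonuses outweigh winning states."""
    w = winner(board)
    if w == AI:
        return win_value
    if w == HUMAN:
        return -win_value
    weights = [1, 0, 1, 0, 2, 0, 1, 0, 1]   # corners and centre are worth a little
    return sum(weights[i] if v == AI else -weights[i] if v == HUMAN else 0
               for i, v in enumerate(board))

def minimax(board, depth, maximizing, win_value):
    if winner(board) is not None or depth == 0 or EMPTY not in board:
        return evaluate(board, win_value)
    values = []
    for i, cell in enumerate(board):
        if cell == EMPTY:
            board[i] = AI if maximizing else HUMAN
            values.append(minimax(board, depth - 1, not maximizing, win_value))
            board[i] = EMPTY
    return max(values) if maximizing else min(values)

def best_move(board, win_value=100, depth=4):
    """Pick the AI's move; lowering win_value 'dumbs down' the engine."""
    best, best_val = None, float("-inf")
    for i, cell in enumerate(board):
        if cell == EMPTY:
            board[i] = AI
            val = minimax(board, depth - 1, False, win_value)
            board[i] = EMPTY
            if val > best_val:
                best, best_val = i, val
    return best
```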

Creating heuristic functions that allow for stupidity is more difficult than one might think. If the heuristic always finds the best move, the computer opponent appears omniscient, making the game frustrating and unenjoyable; but if the heuristic is too poor, the game may also be unenjoyable. Striking a balance between good moves and bad moves in an adversarial game therefore relies on a well-implemented heuristic function.
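
A complementary approach, sketched here as an assumption rather than a method taken from the cited sources, is to keep a strong heuristic but have the program occasionally play a slightly inferior move, so that its mistakes are plausible rather than random:

```python
import random

def pick_move(scored_moves, blunder_rate=0.2):
    """scored_moves: (move, heuristic_value) pairs, e.g. produced by minimax.
    With probability blunder_rate the program deliberately plays the
    second-best move instead of the best one (illustrative sketch)."""
    ranked = sorted(scored_moves, key=lambda mv: mv[1], reverse=True)
    if len(ranked) > 1 and random.random() < blunder_rate:
        return ranked[1][0]   # deliberate, but still reasonable, error
    return ranked[0][0]       # best available move
```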

Arguments about artificial stupidity

A 1992 editorial in The Economist argues that there is "no practical reason" to attempt to create a machine that mimics the behaviour of a human being, since the purpose of a computer is to perform tasks that humans cannot accomplish alone, or at least not as efficiently. Discussing the winning entry in the 1991 Turing contest, which was programmed to introduce deliberate typing errors into its conversation to fool the judges, the editorial asks: "Who needs a computer that can't type?" [2]

Related Research Articles

Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player combinatorial games. It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.

Loebner Prize

The Loebner Prize was an annual competition in artificial intelligence that awarded prizes to the computer programs considered by the judges to be the most human-like. The format of the competition was that of a standard Turing test. In each round, a human judge simultaneously held textual conversations with a computer program and a human being via computer. Based upon the responses, the judge would attempt to determine which was which.

"Computing Machinery and Intelligence" is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.

Hugh Loebner

Hugh Loebner was an American inventor and social activist, who was notable for sponsoring the Loebner Prize, an embodiment of the Turing test. Loebner held six United States Patents, and was also an outspoken advocate for the decriminalization of prostitution.

Jabberwacky is a chatterbot created by British programmer Rollo Carpenter. Its stated aim is to "simulate natural human chat in an interesting, entertaining and humorous manner". It is an early attempt at creating an artificial intelligence through human interaction.

In video games, artificial intelligence (AI) is used to generate responsive, adaptive or intelligent behaviors, primarily in non-playable characters (NPCs), similar to human-like intelligence. Artificial intelligence has been an integral part of video games since their inception in 1948, first seen in the game Nim. AI in video games is a distinct subfield and differs from academic AI: it serves to improve the game-player experience rather than to advance machine learning or decision making. During the golden age of arcade video games, the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such as pathfinding and decision trees to guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such as data mining and procedural content generation. One of the most infamous examples of this NPC technology and graduated difficulty can be found in the game Punch-Out!!.

Robby Garner

Robby Garner is an American natural language programmer and software developer. He won the 1998 and 1999 Loebner Prize contests with the program called Albert One. He is listed in the 2001 Guinness Book of World Records as having written the "most human" computer program.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the field is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

A.L.I.C.E., also referred to as Alicebot, or simply Alice, is a natural language processing chatterbot—a program that engages in a conversation with a human by applying some heuristical pattern matching rules to the human's input. It was inspired by Joseph Weizenbaum's classical ELIZA program.

The minimum intelligent signal test, or MIST, is a variation of the Turing test proposed by Chris McKinstry in which only boolean answers may be given to questions. The purpose of such a test is to provide a quantitative statistical measure of humanness, which may subsequently be used to optimize the performance of artificial intelligence systems intended to imitate human responses.

Anti-computer tactics

Anti-computer tactics are methods used by humans to try to beat computer opponents at various games, most typically board games such as chess and Arimaa. They are most associated with competitions against computer AIs that are playing to their utmost to win, rather than AIs merely programmed to be an interesting challenge that can be given intentional weaknesses and quirks by the programmer. Such tactics are most associated with the era when AIs searched a game tree with an evaluation function looking for promising moves, often with alpha–beta pruning or other minimax algorithms used to narrow the search. Against such algorithms, a common tactic is to play conservatively, aiming for a long-term advantage. The theory is that this advantage will manifest slowly enough that the computer is unable to notice in its search, and the computer won't play around the threat correctly. This may result in, for example, a subtle advantage that eventually turns into a winning chess endgame with a passed pawn.

There are a number of competitions and prizes to promote research in artificial intelligence.

Kenneth Mark Colby was an American psychiatrist dedicated to the theory and application of computer science and artificial intelligence to psychiatry. Colby was a pioneer in the development of computer technology as a tool to try to understand cognitive functions and to assist both patients and doctors in the treatment process. He is perhaps best known for the development of a computer program called PARRY, which mimicked a person with paranoid schizophrenia and could "converse" with others. PARRY sparked serious debate about the possibility and nature of machine intelligence.

Computer game bot Turing test

The computer game bot Turing test is a variant of the Turing test, where a human judge viewing and interacting with a virtual world must distinguish between other humans and video game bots, both interacting with the same virtual world. This variant was first proposed in 2008 by Associate Professor Philip Hingston of Edith Cowan University, and implemented through a tournament called the 2K BotPrize.

Eugene Goostman is a chatbot that some regard as having passed the Turing test, a test of a computer's ability to communicate indistinguishably from a human. Developed in Saint Petersburg in 2001 by a group of three programmers, the Russian-born Vladimir Veselov, Ukrainian-born Eugene Demchenko, and Russian-born Sergey Ulasen, Goostman is portrayed as a 13-year-old Ukrainian boy—characteristics that are intended to induce forgiveness in those with whom it interacts for its grammatical errors and lack of general knowledge.

Turing test

The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).

The Winograd schema challenge (WSC) is a test of machine intelligence proposed in 2012 by Hector Levesque, a computer scientist at the University of Toronto. Designed to be an improvement on the Turing test, it is a multiple-choice test that employs questions of a very specific structure: they are instances of what are called Winograd schemas, named after Terry Winograd, professor of computer science at Stanford University.

Turochamp

Turochamp is a chess program developed by Alan Turing and David Champernowne in 1948. It was created as part of research by the pair into computer science and machine learning. Turochamp is capable of playing an entire chess game against a human player at a low level of play by calculating all potential moves and all potential player moves in response, as well as some further moves it deems considerable. It then assigns point values to each game state, and selects the move resulting in the highest point value.

The history of chess began nearly 1500 years ago, and over the past century and a half the game has changed drastically. No technology or strategy, however, has changed chess as much as the introduction of chess engines. Despite coming into existence only within the past 70 years, chess engines have molded and defined how top-level chess is played today.

References

  1. Turing, Alan (October 1950). "Computing Machinery and Intelligence". Mind. 59 (236): 433–460. doi:10.1093/mind/LIX.236.433. ISSN 1460-2113. JSTOR 2251299. S2CID 14636783.
  2. "Artificial Stupidity". The Economist. Vol. 324, no. 7770. 1992-09-01. p. 14.
  3. Lidén, Lars (2004). "Artificial Stupidity: The Art of Making Intentional Mistakes" (PDF). In S. Rabin (ed.), AI Game Programming Wisdom 2. Charles River Media. pp. 41–48.
