The AI effect is the discounting of the behavior of an artificial-intelligence program as not "real" intelligence. [1]
The author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." [2]
Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" [3]
"The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements. [4] [2] [1] Edward Geist credits John McCarthy for coining the term "AI effect" to describe this phenomenon. [4]
McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked." [5] It is an example of moving the goalposts. [6]
Tesler's Theorem is:
AI is whatever hasn't been done yet.
Douglas Hofstadter quotes this, [7] as do many other commentators. [8]
When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by the computer and the other by a human. This formalisation is referred to as a human-assisted Turing machine. [9]
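As a loose illustration of this split (not the formal construction of reference [9]), the sketch below lets a program handle the part of a task it can decide by rule and defers the remainder to a person; the ticket-routing task and every name in it are hypothetical.

```python
# Illustrative sketch of a human-assisted computation: the program solves the
# formalised part of a task and hands the unformalised remainder to a person.
# The ticket-routing task and all names here are hypothetical.

def machine_step(ticket):
    """Formalised part: route tickets that match simple keyword rules."""
    rules = {"refund": "billing", "password": "account-security", "crash": "engineering"}
    for keyword, queue in rules.items():
        if keyword in ticket.lower():
            return queue
    return None  # this instance falls outside what the program can decide

def human_step(ticket):
    """Unformalised part: the human plays the role of the oracle in the model."""
    return input(f"Which queue should this ticket go to?\n  {ticket!r}\n> ").strip()

def human_assisted_route(ticket):
    """Split the computational burden: machine first, human for the remainder."""
    return machine_step(ticket) or human_step(ticket)

if __name__ == "__main__":
    print(human_assisted_route("My password reset link never arrived"))
    print(human_assisted_route("The new UI feels confusing"))  # deferred to the human
```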
Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI. This underappreciation has been observed in fields as diverse as computer chess, [10] marketing, [11] agricultural automation, [8] hospitality [12] and optical character recognition. [13]
Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world." [14]
According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?" [11]
Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?" [15]
Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore." [16]
The AI effect on decision-making in supply chain risk management is a severely understudied area. [17]
To avoid the AI effect, the editors of a special issue of IEEE Software on AI and software engineering recommend not overselling – not hyping – the realistically achievable results in the first place. [18]
The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat. [4] They point out that it obscures the fact that applications of AI had already found their way into both US and Soviet militaries during the Cold War. [4] AI tools to advise humans regarding weapons deployment were developed by both sides and received very limited usage during that time. [4] They believe this constantly shifting failure to recognise AI continues to undermine human recognition of security threats in the present day. [4]
Some experts think that the AI effect will continue, with advances in AI continually producing objections and redefinitions of public expectations. [19] [20] [21] Some also believe that the AI effect will expand to include the dismissal of specialised artificial intelligences. [21]
In the early 1990s, during the second "AI winter", many AI researchers found that they could get more funding and sell more software if they avoided the bad name of "artificial intelligence" and instead pretended their work had nothing to do with intelligence. [citation needed]
Patty Tascarella wrote in 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding." [22]
Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe". [23] By discounting artificial intelligence, people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence. [citation needed]
A related effect has been noted in the history of animal cognition and in consciousness studies: every time a capacity formerly thought of as uniquely human is discovered in animals (e.g. tool-making or passing the mirror test), the overall importance of that capacity is deprecated. [citation needed]
Herbert A. Simon, when asked about the lack of press coverage of AI at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that." [24]
Mueller (1987) proposed comparing AI to human intelligence, coining the standard of Human-Level Machine Intelligence. [25] This standard nonetheless suffers from the AI effect when different humans are used as the benchmark. [25]
When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, public perception of chess playing shifted from a difficult mental task to a routine operation. [26]
The public complained that Deep Blue had only used "brute force methods" and that it wasn't real intelligence. [10] Notably, John McCarthy, an AI pioneer who coined the term "artificial intelligence", was disappointed by Deep Blue. He described it as a mere brute force machine that did not have any deep understanding of the game. McCarthy would also criticize how widespread the AI effect is ("As soon as it works, no one calls it AI anymore" [27] [28] : 12 ), but in this case did not think that Deep Blue was a good example. [27]
On the other side, Fred A. Reed writes: [29]
A problem that proponents of AI regularly face is this: When we know how a machine does something "intelligent", it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright.
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
In computer science, the ELIZA effect is a tendency to project human traits (such as experience, semantic comprehension or empathy) onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
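The "basic text-processing approach" can be conveyed with a few lines of keyword-and-template matching; the sketch below is a hypothetical miniature in the spirit of ELIZA's psychotherapist script, not Weizenbaum's original program.

```python
import re

# A hypothetical miniature in the spirit of ELIZA's psychotherapist script:
# keyword patterns paired with canned reflections, with no semantic understanding.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    """Return the first matching canned reflection, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am worried about my job"))  # How long have you been worried about my job?
    print(respond("I need a holiday"))           # Why do you need a holiday?
    print(respond("It rained all week"))         # Please go on.
```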
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
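As a hedged illustration of the "production rules" listed above, the toy forward-chaining sketch below repeatedly fires if-then rules against a set of known facts until nothing new can be derived; the rules and facts are invented for the example and do not come from any particular knowledge-based system.

```python
# Toy forward-chaining production system: fire if-then rules whose conditions
# are satisfied by the known facts until a fixed point is reached.
# The rules and facts are invented purely for illustration.
RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(initial_facts):
    """Return the closure of the facts under the production rules."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires" and asserts its conclusion
                changed = True
    return facts

if __name__ == "__main__":
    print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly", "swims"}))
    # derives "is_bird" on the first pass and "is_penguin" on the second
```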
In the history of artificial intelligence (AI), neat and scruffy are two contrasting approaches to AI research. The distinction was made in the 1970s, and was a subject of discussion until the mid-1980s.
Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
The Age of Intelligent Machines is a non-fiction book about artificial intelligence by inventor and futurist Ray Kurzweil. This was his first book and the Association of American Publishers named it the Most Outstanding Computer Science Book of 1990. It was reviewed in The New York Times and The Christian Science Monitor. The format is a combination of monograph and anthology with contributed essays by artificial intelligence experts such as Daniel Dennett, Douglas Hofstadter, and Marvin Minsky.
A physical symbol system takes physical patterns (symbols), combines them into structures (expressions), and manipulates them to produce new expressions.
The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain.
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do, and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2021), a standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.
This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.
Moravec's paradox is the observation in the fields of artificial intelligence and robotics that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. The principle was articulated in the 1980s by Hans Moravec, Rodney Brooks, Marvin Minsky, and others. Moravec wrote in 1988: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
Progress in artificial intelligence (AI) refers to the advances, milestones, and breakthroughs that have been achieved in the field of artificial intelligence over time. AI is a multidisciplinary branch of computer science that aims to create machines and systems capable of performing tasks that typically require human intelligence. AI applications have been used in a wide range of fields including medical diagnosis, finance, robotics, law, video games, agriculture, and scientific discovery. However, many AI applications are not perceived as AI: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 2000s, AI technology became widely used as elements of larger systems, but the field was rarely credited for these successes at the time.
The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine's ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
AI@50, formally known as the "Dartmouth Artificial Intelligence Conference: The Next Fifty Years", was a conference organized by James Moor, commemorating the 50th anniversary of the Dartmouth workshop which effectively inaugurated the history of artificial intelligence. Five of the original ten attendees were present: Marvin Minsky, Ray Solomonoff, Oliver Selfridge, Trenchard More, and John McCarthy.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.