AI effect

The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not "real" intelligence. [1]

Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." [2] Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" [3]

Definition

"The AI effect" is that line of thinking: the tendency to redefine AI to mean "anything that has not been done yet." This is a common public misperception, that as soon as AI successfully solves a problem, the solution method is no longer considered part of AI. Geist credits John McCarthy with giving this phenomenon its name, the "AI effect". [4]

McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked." [5] It is an example of moving the goalposts. [6]

Tesler's Theorem is:

"AI is whatever hasn't been done yet."
Larry Tesler

Douglas Hofstadter quotes this [7] as do many other commentators. [8]

When a problem has not yet been formalised, it can still be characterised by a model of computation that includes human computation: the computational burden is split between a computer and a human, with one part solved by the computer and the other by the human. This formalisation is referred to as a human-assisted Turing machine. [9]
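The split described above can be illustrated with a small sketch. This is not an implementation of the formal human-assisted Turing machine model from the cited paper, only an illustration of the idea under assumed names: a machine rule decides the cases it can, and escalates the rest to a human "oracle" (here simulated by a fixed callback).

```python
from typing import Callable, Optional

def human_assisted_solve(items: list,
                         machine_rule: Callable[[str], Optional[bool]],
                         human_oracle: Callable[[str], bool]) -> dict:
    """Split a decision problem between a machine and a human.

    The machine rule returns True/False when it can decide an item,
    or None when the item must be escalated to the human oracle.
    """
    results = {}
    for item in items:
        decision = machine_rule(item)
        if decision is None:          # machine cannot decide: delegate
            decision = human_oracle(item)
        results[item] = decision
    return results

# Toy example: decide whether a string "looks like" an email address.
def machine_rule(s: str) -> Optional[bool]:
    if "@" not in s:
        return False                  # clearly not an email
    if s.count("@") == 1 and "." in s.split("@")[1]:
        return True                   # clearly well-formed
    return None                       # ambiguous: ask the human

# Stand-in for a human judgment (a fixed answer, for the demo).
human_oracle = lambda s: True

print(human_assisted_solve(["a@b.com", "nope", "a@@b"],
                           machine_rule, human_oracle))
```

As the machine rule improves, fewer items reach the human, which mirrors the AI effect: once the machine handles a case routinely, it no longer looks like intelligence.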

AI applications become mainstream


Software and algorithms developed by AI researchers are now integrated into many applications throughout the world without being called AI. This underappreciation has been noted in fields as diverse as computer chess, [10] marketing, [11] agricultural automation [8] and hospitality. [12]

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world." [13]

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?" [11]

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?" [14]

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore." [15]

The AI effect on decision-making in supply chain risk management is a severely understudied area. [16]

To avoid the AI effect problem, the editors of a special issue of IEEE Software on AI and software engineering recommend not overselling or hyping, and instead starting from realistically achievable results. [17]

The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat. [4] They point out that it obscures the fact that applications of AI had already found their way into both US and Soviet militaries during the Cold War. [4] AI tools to advise humans regarding weapons deployment were developed by both sides and received very limited usage during that time. [4] They believe this constantly shifting failure to recognise AI continues to undermine human recognition of security threats in the present day. [4]

Some experts think that the AI effect will continue, with advances in AI continually producing objections and redefinitions of public expectations. [18] [19] [20] Some also believe that the AI effect will expand to include the dismissal of specialised artificial intelligences. [20]

Legacy of the AI winter

In the early 1990s, during the second "AI winter", many AI researchers found that they could get more funding and sell more software if they avoided the tarnished name of "artificial intelligence" and instead pretended their work had nothing to do with intelligence at all.[ citation needed ]

Patty Tascarella wrote in 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding." [21]

Saving a place for humanity at the top of the chain of being

Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe". [22] By discounting artificial intelligence, people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.[ citation needed ]

A related effect has been noted in the history of animal cognition and in consciousness studies: every time a capacity formerly thought to be uniquely human is discovered in animals (e.g. the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.[ citation needed ]

Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that." [23]

Mueller (1987) proposed comparing AI to human intelligence, coining the standard of Human-Level Machine Intelligence. [24] This standard nonetheless suffers from the AI effect when different humans are used as the benchmark. [24]

Deep Blue versus Kasparov, 1997, Game 6

Deep Blue defeats Kasparov

When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, public perception of chess playing shifted from a difficult mental task to a routine operation. [25]

The public complained that Deep Blue had only used "brute force methods" and that it wasn't real intelligence. [10] Notably, John McCarthy, an AI pioneer who coined the term "artificial intelligence", was disappointed by Deep Blue. He described it as a mere brute-force machine without any deep understanding of the game. McCarthy would also criticize how widespread the AI effect is ("As soon as it works, no one calls it AI anymore" [26] [27] :12), but in this case did not think that Deep Blue was a good example. [26]
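What "brute force" means here can be made concrete with a minimal sketch of depth-limited minimax search on a toy game. This is only an illustration of the general idea, not Deep Blue's actual method, which combined massively parallel search with pruning and specialized hardware.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively search a game tree to a fixed depth.

    'Brute force' here means: enumerate every legal move sequence
    down to the depth limit and back up the best score, with no
    pruning and no learned knowledge of the game.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)        # leaf: static evaluation only
    scores = (minimax(apply_move(state, m), depth - 1,
                      not maximizing, moves, apply_move, evaluate)
              for m in legal)
    return max(scores) if maximizing else min(scores)

# Toy "game": a running total; each move adds 1 or 2 to it.
# The maximizer wants the total high, the minimizer wants it low.
best = minimax(state=0, depth=2, maximizing=True,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # → 3
```

The critics' point was that such exhaustive enumeration, however fast, exhibits no understanding of the game; McCarthy's counterpoint was that dismissing every working method this way is exactly the AI effect.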

On the other hand, Fred A. Reed writes:

"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright." [28]

Notes

  1. Haenlein, Michael; Kaplan, Andreas (2019). "A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence". California Management Review . 61 (4): 5–14. doi:10.1177/0008125619864925. S2CID   199866730.
  2. McCorduck 2004, p. 204
  3. Kahn, Jennifer (March 2002). "It's Alive". Wired . Vol. 10, no. 30. Retrieved 24 Aug 2008.
  4. Geist, Edward (2016). "It's already too late to stop the AI arms race—We must manage it instead". Bulletin of the Atomic Scientists . Taylor & Francis. 72 (5: The psychology of doom): 318–321. Bibcode:2016BuAtS..72e.318G. doi:10.1080/00963402.2016.1216672. S2CID   151967826.
  5. McCorduck 2004, p. 423
  6. Nadin, Mihai (2023). "Intelligence at any price? A criterion for defining AI". AI & Society . Springer Science and Business Media LLC. doi:10.1007/s00146-023-01695-0. ISSN   0951-5666. S2CID   259041703.
  7. As quoted by Hofstadter (1980 , p. 601). Larry Tesler actually feels he was misquoted: see his note in the "Adages" section of .
  8. Bhatnagar, Roheet; Tripathi, Kumar; Bhatnagar, Nitu; Panda, Chandan (2022). The Digital Agricultural Revolution : Innovations and Challenges in Agriculture Through Technology Disruptions. Hoboken, NJ, US: Scrivener Publishing LLC (John Wiley & Sons, Inc.). pp. 143–170. doi:10.1002/9781119823469. ISBN   978-1-119-82346-9. OCLC   1314054445. ISBN   9781119823339.
  9. Dafna Shahaf and Eyal Amir (2007) Towards a theory of AI completeness. Commonsense 2007, 8th International Symposium on Logical Formalizations of Commonsense Reasoning.
  10. McCorduck 2004, p. 433
  11. Stottler Henke. "AI Glossary". Archived from the original on 2008-05-09. Retrieved 2009-02-23.
  12. Xiang, Zheng; Fuchs, Matthias; Gretzel, Ulrike; Höpken, Wolfram, eds. (2020). Handbook of e-Tourism (PDF). Cham, Switzerland: Springer International Publishing. p. 1945. doi:10.1007/978-3-030-05324-6. ISBN   978-3-030-05324-6. S2CID   242136095.
  13. Swaine, Michael (September 5, 2007). "AI - It's OK Again! Is AI on the rise again?". Dr. Dobbs.
  14. Marvin Minsky. "The Age of Intelligent Machines: Thoughts About Artificial Intelligence". Archived from the original on 2009-06-28.
  15. Quoted in "AI set to exceed human brain power". CNN.com. July 26, 2006.
  16. Nayal, Kirti; Raut, Rakesh; Priyadarshinee, Pragati; Narkhede, Balkrishna Eknath; Kazancoglu, Yigit; Narwane, Vaibhav (2021). "Exploring the role of artificial intelligence in managing agricultural supply chain risk to counter the impacts of the COVID-19 pandemic". The International Journal of Logistics Management. 33 (3): 744–772. doi:10.1108/IJLM-12-2020-0493. S2CID   237807857.
  17. Carleton, Anita; Harper, Erin; Menzies, Tim; Xie, Tao; Eldh, Sigrid; Lyu, Michael (2020). "The AI Effect: Working at the Intersection of AI and SE". IEEE Software . Institute of Electrical and Electronics Engineers (IEEE). 37 (4): 26–35. doi:10.1109/ms.2020.2987666. ISSN   0740-7459. S2CID   220325485.
  18. Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; Teller, Astro. "Defining AI". "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford, CA: Stanford University . Retrieved September 6, 2016.
  19. Press, Gil (2022). "The Trouble With AI: Human Intelligence". Forbes Magazine .
  20. Bjola, Corneliu (2022). "AI for development: implications for theory and practice". Oxford Development Studies . Routledge. 50 (1): 78–90. doi: 10.1080/13600818.2021.1960960 . S2CID   238851395.
  21. Patty Tascarella (August 11, 2006). "Robotics firms find fundraising struggle, with venture capital shy". Pittsburgh Business Times .
  22. Flam, Faye (January 15, 2004). "A new robot makes a leap in brainpower". Philadelphia Inquirer . available from Philly.com
  23. Reuben L. Hann. (1998). "A Conversation with Herbert Simon". Gateway. IX (2): 12–13. Archived from the original on February 25, 2015. (Gateway is published by the Crew System Ergonomics Information Analysis Center, Wright-Patterson AFB)
  24. Hernandez, Jose (2020). AI evaluation: On broken yardsticks and measurement scales. Workshop on Evaluating Evaluation of AI Systems, AAAI Conference on Artificial Intelligence. AAAI (Association for the Advancement of Artificial Intelligence). S2CID   228718653.
  25. Stone, Peter; Brooks, Rodney; Brynjolfsson, Erik; Calo, Ryan; Etzioni, Oren; Hager, Greg; Hirschberg, Julia; Kalyanakrishnan, Shivaram; Kamar, Ece; Kraus, Sarit; Leyton-Brown, Kevin; Parkes, David; Press, William; Saxenian, AnnaLee; Shah, Julie; Tambe, Milind; Teller, Astro. "The term AI has a clear meaning". "Artificial Intelligence and Life in 2030." One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford, CA: Stanford University . Retrieved September 6, 2016.
  26. Vardi, Moshe (2012). "Artificial intelligence: past and future". Communications of the ACM . 55 (1): 5. doi: 10.1145/2063176.2063177 . S2CID   21144816.
  27. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (1 ed.). Oxford University Press (OUP). ISBN   978-0-19-967811-2. LCCN   2013955152.
  28. Reed, Fred (2006-04-14). "Promise of AI not so bright". The Washington Times .

Further reading

A bachelor's thesis but cited by A. Poggi; G. Rimassa; P. Turci (October 2002). "What Agent Middleware Can (And Should) Do For You". Applied Artificial Intelligence . 16 (9–10): 677–698. doi:10.1080/08839510290030444. ISSN   0883-9514. Wikidata   Q58188053.