Weak artificial intelligence

Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as artificial narrow intelligence, [1] [2] [3] is focused on one narrow task.

Weak AI is contrasted with strong AI, a term that can be interpreted in various ways, for example as artificial general intelligence or as artificial consciousness.

Narrow AI can be classified as being "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." [4] Artificial general intelligence, by contrast, can perform competently across a wide range of tasks.

Applications and risks

Some examples of narrow AI are AlphaGo, [5] self-driving cars, robot systems used in the medical field, and AI diagnostic systems. Narrow AI systems can be dangerous if they are unreliable, and their behavior can become inconsistent. [6] It can be difficult for such a system to grasp complex patterns and reach solutions that work reliably across varied environments. This "brittleness" can cause it to fail in unpredictable ways. [7]
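
The failure mode described above can be made concrete with a small illustration. The sketch below is a toy example under stated assumptions, not a model of any system cited in this article: a deliberately simple nearest-centroid classifier is fit on synthetic data, and its accuracy falls toward chance as the test distribution drifts away from the training conditions.

```python
# A minimal sketch of narrow-AI "brittleness" on synthetic data (an
# illustrative assumption, not a model of any system cited above).
import numpy as np

rng = np.random.default_rng(0)

def make_data(shift=0.0, n=500):
    """Two Gaussian classes; `shift` moves the whole distribution."""
    a = rng.normal(loc=[0.0 + shift, 0.0], scale=0.5, size=(n, 2))
    b = rng.normal(loc=[2.0 + shift, 2.0], scale=0.5, size=(n, 2))
    return np.vstack([a, b]), np.array([0] * n + [1] * n)

# "Train": store one centroid per class from in-distribution data.
X_train, y_train = make_data(shift=0.0)
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each point to its nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

for shift in (0.0, 2.0, 4.0):
    X_test, y_test = make_data(shift=shift)
    acc = (predict(X_test) == y_test).mean()
    # Accuracy is near-perfect in-distribution and falls toward
    # chance as the environment drifts from the training conditions.
    print(f"distribution shift {shift:.1f}: accuracy {acc:.2f}")
```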

Narrow AI failures can sometimes have significant consequences: they could, for example, cause disruptions in the electric grid, damage nuclear power plants, trigger global economic problems, and misdirect autonomous vehicles. [1] Medicines could be incorrectly sorted and distributed, and medical diagnoses can have serious and sometimes deadly consequences if the AI is faulty or biased. [8]

Simple AI programs have already worked their way into society unnoticed; autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields are examples. [9] As much as narrow and relatively general AI is starting to help societies, it is also starting to harm them. AI has already unfairly put people in jail, discriminated against women in hiring, taught problematic ideas to millions, and even killed people through autonomous cars. [10] AI may be a powerful tool for improving lives, but it is also a dangerous technology with the potential for misuse.

Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends. [11] For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour. [12] Some other social media AI systems are used to detect bots that may be involved in biased propaganda or other potentially malicious activities. [13]
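
A minimal sketch of this kind of interest inference follows. All item names, feature dimensions, and engagement weights are hypothetical assumptions for illustration; production feeds such as TikTok's use far richer signals and learned models, but the basic loop of building a profile from engagement and ranking candidates by similarity is the same in spirit.

```python
# A hypothetical sketch of interest inference in a narrow recommender.
# Item names, features, and engagement weights are all illustrative.
import numpy as np

# Each item is a vector over topical features: (cooking, sports, tech).
items = {
    "pasta_recipe":  np.array([0.9, 0.0, 0.1]),
    "football_clip": np.array([0.0, 1.0, 0.0]),
    "gadget_review": np.array([0.1, 0.0, 0.9]),
    "knife_skills":  np.array([0.8, 0.1, 0.1]),
}

# Watch history: (item, engagement weight such as watch-time fraction).
history = [("pasta_recipe", 0.9), ("gadget_review", 0.4)]

# Infer an interest profile as the engagement-weighted sum of watched items.
profile = sum(w * items[name] for name, w in history)
profile /= np.linalg.norm(profile)

def score(item_vec):
    # Cosine similarity between the user profile and a candidate item.
    return float(profile @ item_vec) / float(np.linalg.norm(item_vec))

ranking = sorted(items, key=lambda name: score(items[name]), reverse=True)
print(ranking)  # cooking and tech content outrank sports for this user
```

Because every engagement updates the profile, a loop like this can converge on a user's apparent interests after relatively few interactions, which is consistent with the rapid personalization reported above.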

Weak AI versus strong AI

John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", used to assess whether a machine can converse indistinguishably from a human) is not an accurate or appropriate test of whether an AI is "strong". [14]

Scholars such as Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" versus "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" [15] (as, on the other hand, the strong-AI assumption implies).

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.

The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences. Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.

A chatbot is a software application or web interface designed to have textual or spoken conversations. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.

Artificial consciousness, also known as machine consciousness, synthetic consciousness, or digital consciousness, is the consciousness hypothesized to be possible in artificial intelligence. It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience.

"Computing Machinery and Intelligence" is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. These formalized models can be used to further refine comprehensive theories of cognition and serve as the frameworks for useful artificial intelligence programs. Successful cognitive architectures include ACT-R and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990.

An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.

A physical symbol system takes physical patterns (symbols), combines them into structures (expressions), and manipulates them to produce new expressions.
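
As a rough illustration of that definition, the following sketch represents symbols as strings, expressions as nested tuples, and applies a single manipulation rule (modus ponens, an illustrative choice) to derive new expressions; all facts and rules here are invented for the example.

```python
# A rough sketch of a physical symbol system: symbols are strings,
# expressions are nested tuples, and one manipulation rule (modus
# ponens, an illustrative choice) produces new expressions.
knowledge = {
    ("implies", "rain", "wet_ground"),
    ("implies", "wet_ground", "slippery"),
    "rain",
}

def step(exprs):
    """From P and ("implies", P, Q), derive the new expression Q."""
    derived = set()
    for e in exprs:
        if isinstance(e, tuple) and e[0] == "implies" and e[1] in exprs:
            derived.add(e[2])
    return exprs | derived

# Manipulate expressions until no new ones appear (a fixed point).
while True:
    new = step(knowledge)
    if new == knowledge:
        break
    knowledge = new

print(knowledge)  # now also contains "wet_ground" and "slippery"
```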

In intelligence and artificial intelligence, an intelligent agent (IA) is an agent that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance through learning or by acquiring knowledge.
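
The perceive-decide-act cycle in that definition can be sketched as below; the thermostat agent and toy environment are illustrative assumptions, and the optional learning component is omitted for brevity.

```python
# A toy sketch of the perceive-decide-act cycle: a thermostat agent
# and an environment invented for illustration; the optional learning
# component of the definition is omitted for brevity.
import random

class ThermostatAgent:
    def __init__(self, goal_temp: float):
        self.goal = goal_temp

    def act(self, percept: float) -> str:
        # Decide an action from the current percept and the goal alone.
        if percept < self.goal - 0.5:
            return "heat"
        if percept > self.goal + 0.5:
            return "cool"
        return "idle"

def environment_step(temp: float, action: str) -> float:
    # Toy environment: actions nudge the temperature; noise perturbs it.
    delta = {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
    return temp + delta + random.uniform(-0.2, 0.2)

temp = 14.0
agent = ThermostatAgent(goal_temp=20.0)
for t in range(10):
    action = agent.act(temp)               # the agent perceives and acts
    temp = environment_step(temp, action)  # the environment responds
    print(f"step {t}: action={action}, temperature={temp:.1f}")
```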

The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

The outline of artificial intelligence provides an overview of and topical guide to the field.

Progress in artificial intelligence (AI) refers to the advances, milestones, and breakthroughs that have been achieved in the field of artificial intelligence over time. AI is a multidisciplinary branch of computer science that aims to create machines and systems capable of performing tasks that typically require human intelligence. AI applications have been used in a wide range of fields including medical diagnosis, finance, robotics, law, video games, agriculture, and scientific discovery. However, many AI applications are not perceived as AI: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 2000s, AI technology became widely used as elements of larger systems, but the field was rarely credited for these successes at the time.

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
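
The structure of that protocol, though not any real evaluator, can be sketched as follows; the human, machine, and judge are stand-in functions chosen only to show the blinded, text-only setup.

```python
# A sketch of the imitation game's blinded, text-only structure. The
# human, machine, and judge are stand-in functions (assumptions), not
# a real chatbot or evaluator.
import random

def human_reply(question: str) -> str:
    return "I'd say it depends on the weather, honestly."

def machine_reply(question: str) -> str:
    return "It depends on several factors."

def run_trial(judge) -> bool:
    """One trial: the judge sees anonymized answers from A and B."""
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)  # blind the judge to which is which
    transcript = {
        label: fn("Do you enjoy long walks?")
        for label, (_, fn) in zip("AB", players)
    }
    guess = judge(transcript)  # the judge names the machine: "A" or "B"
    actual = "A" if players[0][0] == "machine" else "B"
    return guess == actual

# A judge who cannot beat chance means the machine "passes" the test.
naive_judge = lambda transcript: random.choice("AB")
trials = [run_trial(naive_judge) for _ in range(1000)]
print(f"machine identified in {sum(trials) / len(trials):.0%} of trials")
```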

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.

References

  1. Dvorsky, George (1 April 2013). "How Much Longer Before Our First AI Catastrophe?". Gizmodo. Retrieved 27 November 2021.
  2. Muehlhauser, Luke (18 October 2013). "Ben Goertzel on AGI as a Field". Machine Intelligence Research Institute. Retrieved 27 November 2021.
  3. Chalfen, Mike (15 October 2015). "The Challenges Of Building AI Apps". TechCrunch. Retrieved 27 November 2021.
  4. Bartneck, Christoph; Lütge, Christoph; Wagner, Alan; Welsh, Sean (2021). An Introduction to Ethics in Robotics and AI. SpringerBriefs in Ethics. Cham: Springer International Publishing. doi:10.1007/978-3-030-51110-4. ISBN 978-3-030-51109-8. S2CID 224869294.
  5. Grossman, Gary (3 September 2020). "We're entering the AI twilight zone between narrow and general AI". VentureBeat. Retrieved 16 March 2024.
  6. Kuleshov, Andrey; Prokhorov, Sergei (September 2019). "Domain Dependence of Definitions Required to Standardize and Compare Performance Characteristics of Weak AI Systems". 2019 International Conference on Artificial Intelligence: Applications and Innovations (IC-AIAI). Belgrade, Serbia: IEEE. pp. 62–623. doi:10.1109/IC-AIAI48757.2019.00020. ISBN 978-1-7281-4326-2. S2CID 211298012.
  7. Bulletin Staff (23 April 2018). "The promise and peril of military applications of artificial intelligence". Bulletin of the Atomic Scientists. Retrieved 2 October 2024.
  8. Szocik, Konrad; Jurkowska-Gomułka, Agata (16 December 2021). "Ethical, Legal and Political Challenges of Artificial Intelligence: Law as a Response to AI-Related Threats and Hopes". World Futures: 1–17. doi:10.1080/02604027.2021.2012876. ISSN 0260-4027. S2CID 245287612.
  9. Earley, Seth (2017). "The Problem With AI". IT Professional. 19 (4): 63–67. doi:10.1109/MITP.2017.3051331. ISSN 1520-9202. S2CID 9382416.
  10. Koul, Anirudh; Ganju, Siddha; Kasam, Meher (2019). Practical Deep Learning for Cloud, Mobile, and Edge. O'Reilly Media. ISBN 9781492034865.
  11. Kaiser, Carolin; Ahuvia, Aaron; Rauschnabel, Philipp A.; Wimble, Matt (1 September 2020). "Social media monitoring: What can marketers learn from Facebook brand photos?". Journal of Business Research. 117: 707–717. doi:10.1016/j.jbusres.2019.09.017. ISSN 0148-2963. S2CID 203444643.
  12. Kang, Hyunjin (September 2022). "AI agency vs. human agency: understanding human-AI interactions on TikTok and their implications for user engagement". Journal of Computer-Mediated Communication. 27 (5). doi:10.1093/jcmc/zmac014. Retrieved 8 November 2022.
  13. Shukla, Rachit; Sinha, Adwitiya; Chaudhary, Ankit (28 February 2022). "TweezBot: An AI-Driven Online Media Bot Identification Algorithm for Twitter Social Networks". Electronics. 11 (5): 743. doi:10.3390/electronics11050743. ISSN 2079-9292.
  14. Liu, Bin (28 March 2021). ""Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest Value for us?". arXiv:2103.15294 [cs.AI].
  15. Lieto, Antonio (2021). Cognitive Design for Artificial Minds. London, UK: Routledge, Taylor & Francis. p. 85. ISBN 9781138207929.