Weak artificial intelligence

Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as narrow AI, [1] [2] [3] is focused on one narrow task.

In John Searle's terms it “would be useful for testing hypotheses about minds, but would not be minds”. [4] Weak AI focuses on mimicking how humans perform basic actions such as remembering things, perceiving things, and solving simple problems, [5] whereas strong AI uses technology to think and learn on its own. Computers can use methods such as algorithms and prior knowledge to develop their own ways of thinking, as human beings do. [5] Strong AI systems are learning to run independently of the programmers who built them; weak AI cannot have a mind of its own and can only imitate physical behaviors that it observes. [6]

Weak AI is contrasted with strong AI, which has been defined in various ways: as artificial general intelligence, a machine that can perform as well as or better than humans across a wide range of cognitive tasks, or, in Searle's sense, a machine that genuinely has a mind, understanding, or consciousness rather than merely simulating one.

Scholars such as Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" versus "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" [7] (as, by contrast, the strong-AI assumption implies).

Narrow AI can be classified as being “... limited to a single, narrowly defined task. Most modern AI systems would be classified in this category.” [8] "Narrow" means the robot or computer is strictly limited to solving one problem at a time, whereas strong AI, conversely, is closer to the human brain. This view is held by the philosopher John Searle, and the idea of strong AI is controversial: Searle argues that the Turing test (introduced by Alan Turing in 1950, originally called the imitation game, and used to assess whether a machine is as intelligent as a human) is not accurate or appropriate for testing for strong AI. [9]

Weak AI vs. strong AI

The differences between weak AI and strong AI are not yet widely cataloged. Weak AI is commonly associated with everyday technology such as the voice-recognition assistants Siri and Alexa, whereas strong AI has not yet been implemented or tested: it exists mainly in films and other popular culture. [10]

One likely direction for AI is an assisting or aiding role for humans. Some datasets are too large or complex for humans to process or understand as quickly as computers can, and this is where AI can play a helping role. [11]

Impact

Some commentators think narrow AI could be dangerous because of its "brittleness", failing in unpredictable ways. Narrow AI could cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles. [1]

Examples

Some examples of narrow AI are AlphaGo, [12] self-driving cars, robotic systems used in the medical field, and diagnostic systems. Narrow AI systems can be dangerous when unreliable: medicines could be incorrectly sorted and distributed, and a faulty or biased AI can produce medical diagnoses with serious, sometimes deadly, consequences. [13] Another current issue with narrow AI is that its behavior can become inconsistent; [14] it can be difficult for such a system to grasp complex patterns and reach a solution that works reliably in various environments.

Simple AI programs have already worked their way into society, often without being noticed: autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields, to name a few. [15] As much as narrow and relatively general AI is slowly starting to help societies, it is also starting to hurt them. AI has already unfairly put people in jail, discriminated against women in hiring, taught problematic ideas to millions, and even killed people through autonomous cars. [16] AI may be a powerful tool for improving lives, but it is also a dangerous technology with the potential to get out of hand.
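
As a toy illustration of how narrow such everyday systems are, the sketch below implements a bare-bones autocorrect that simply suggests the dictionary word with the smallest edit distance to a typo. The tiny word list and the choice of Levenshtein distance are assumptions made for illustration; real autocorrect systems also weigh word frequency, keyboard layout, and sentence context.

```python
# Minimal autocorrect sketch: suggest the dictionary word closest to the
# input by Levenshtein (edit) distance. Purely illustrative.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed by dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

# Hypothetical dictionary; a real system would use a large frequency list.
DICTIONARY = ["intelligence", "artificial", "narrow", "strong", "imitate"]

def autocorrect(word: str) -> str:
    """Return the dictionary word with the smallest edit distance."""
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))

print(autocorrect("artifical"))  # -> "artificial"
```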

Social media

Facebook and other similar social media platforms have worked out how to use AI and machine learning, or more specifically narrow AI, to predict how people will react to being shown certain images. Narrow AI systems have been able to identify what users will engage with, based on what they post, by following patterns and trends. [17]
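
A minimal sketch of engagement prediction as a narrow AI task is shown below, assuming a hypothetical feature set (image presence, post length, posting hour) and invented toy data; it fits a scikit-learn logistic regression and is not a description of Facebook's actual system.

```python
# Hedged sketch: predict engagement from simple post features with a
# logistic regression. Features, data, and labels are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [has_image, post_length_in_words, hour_posted]
X = [
    [1, 12, 18],
    [0, 40, 9],
    [1, 8, 20],
    [0, 55, 14],
    [1, 15, 19],
    [0, 30, 8],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = user engaged (like/comment), 0 = ignored

model = LogisticRegression().fit(X, y)

# Score a new post: an image post of 10 words published at 7 pm.
print(model.predict([[1, 10, 19]]))        # predicted class, e.g. [1]
print(model.predict_proba([[1, 10, 19]]))  # class probabilities
```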

Twitter has started deploying more advanced AI systems to detect whether accounts are bots that may be used for biased propaganda or even potentially malicious purposes. These systems work by filtering words and creating layers of conditions based on signals that have indicated automation in the past, and then assessing whether an account is likely to be a bot. [18]
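
The word filtering and layered conditions described above could look something like the following hedged sketch, in which an account accumulates a "bot score" across several rule layers. The spam-word list, thresholds, and weights are invented for illustration and are unrelated to Twitter's or TweezBot's actual criteria.

```python
# Hedged sketch of layered, rule-based bot scoring. All rules are invented.

SPAM_WORDS = {"giveaway", "crypto", "follow4follow"}  # hypothetical filter list

def bot_score(account: dict) -> float:
    """Accumulate evidence of automation across independent rule layers."""
    score = 0.0
    # Layer 1: word filtering on recent posts.
    words = [w.strip(".,!?") for w in " ".join(account["posts"]).lower().split()]
    score += 0.4 * sum(w in SPAM_WORDS for w in words)
    # Layer 2: posting-rate condition (humans rarely post this often).
    if account["posts_per_day"] > 100:
        score += 1.0
    # Layer 3: follower/following imbalance condition.
    if account["following"] > 10 * max(account["followers"], 1):
        score += 0.5
    return score

account = {
    "posts": ["Huge crypto giveaway, follow4follow!"],
    "posts_per_day": 250,
    "followers": 3,
    "following": 4000,
}
print("likely bot" if bot_score(account) > 1.5 else "likely human")
```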

TikTok uses its "For You" algorithm to determine a user's interests very quickly by analyzing patterns in which videos the user initially chooses to watch. This narrow AI system uses patterns found between videos, including watch duration, who has already shared or commented on a video, and the music played in it, to determine which video should be shown next. The "For You" algorithm is accurate enough to work out exactly what a user is interested in, or even loves, in less than an hour. [19]
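
A minimal sketch of this kind of pattern-based selection is given below: candidate videos are scored against an interest profile weighted by how much of each previous video the user watched. The tags, weights, and data are hypothetical; TikTok's actual system is far more complex and not public.

```python
# Hedged sketch: recommend the next video by matching candidate tags
# against a watch-duration-weighted interest profile. Data is invented.
from collections import Counter

watched = [  # (video tags, fraction of the video actually watched)
    ({"cooking", "pasta"}, 0.95),
    ({"cooking", "dessert"}, 0.90),
    ({"news"}, 0.10),
]

# Interest profile: tags from videos watched to completion count for more.
profile = Counter()
for tags, fraction in watched:
    for tag in tags:
        profile[tag] += fraction

candidates = {
    "video_a": {"cooking", "grill"},
    "video_b": {"news", "politics"},
    "video_c": {"pasta", "cooking"},
}

def score(tags: set) -> float:
    """Sum the profile weight of every tag the candidate carries."""
    return sum(profile[tag] for tag in tags)

best = max(candidates, key=lambda v: score(candidates[v]))
print(best)  # -> "video_c"
```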

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems, as opposed to the natural intelligence of living beings. It is a field of research in computer science that develops and studies methods and software which enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding", or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. Philosopher John Searle presented the argument in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978) presented similar arguments. Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.

Chatbot: Program that simulates conversation

A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.

Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.

"Computing Machinery and Intelligence" is a seminal paper written by Alan Turing on the topic of artificial intelligence. The paper, published in 1950 in Mind, was the first to introduce his concept of what is now known as the Turing test to the general public.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks, as opposed to narrow AI, which is designed for specific tasks. It is one of various definitions of strong AI.

Synthetic intelligence (SI) is an alternative/opposite term for artificial intelligence emphasizing that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence. John Haugeland proposes an analogy with simulated diamonds and synthetic diamonds—only the synthetic diamond is truly a diamond. Synthetic means that which is produced by synthesis, combining parts to form a whole; colloquially, a human-made version of that which has arisen naturally. A "synthetic intelligence" would therefore be or appear human-made, but not a simulation.

A physical symbol system takes physical patterns (symbols), combining them into structures (expressions) and manipulating them to produce new expressions.

Intelligent agent: Software agent which acts autonomously

In intelligence and artificial intelligence, an intelligent agent (IA) is an agent acting in an intelligent manner: it perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance by learning or acquiring knowledge. An intelligent agent may be simple or complex: a thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.

The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.

The following outline is provided as an overview of and topical guide to artificial intelligence.

Artificial intelligence (AI) has been used in applications throughout industry and academia. Similar to electricity or computers, AI serves as a general-purpose technology that has numerous applications. Its applications span language translation, image recognition, decision-making, credit scoring, e-commerce and various other domains.

Progress in artificial intelligence: How AI-related technologies evolve

Progress in artificial intelligence (AI) refers to the advances, milestones, and breakthroughs that have been achieved in the field of artificial intelligence over time. AI is a multidisciplinary branch of computer science that aims to create machines and systems capable of performing tasks that typically require human intelligence. Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, economic-financial applications, robot control, law, scientific discovery, video games, and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field was rarely credited for these successes at the time.

Turing test: Test of a machine's ability to imitate human intelligence

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).

Artificial intelligence and music (AIM) is a common subject in the International Computer Music Conference, the Computing Society Conference and the International Joint Conference on Artificial Intelligence. The first International Computer Music Conference (ICMC) was held in 1974 at Michigan State University. Current research includes the application of AI in music composition, performance, theory and digital sound processing.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

Artificial intelligence in healthcare: Overview of the use of artificial intelligence in healthcare

Artificial intelligence in healthcare is a term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to copy human cognition in the analysis, presentation, and understanding of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease. Specifically, AI is the ability of computer algorithms to arrive at approximate conclusions based solely on input data.

Explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), refers either to an artificial intelligence (AI) system over which it is possible for humans to retain intellectual oversight, or to the methods used to achieve this. The main focus is usually on the reasoning behind the decisions or predictions made by the AI, which are thereby made more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.

References

  1. Dvorsky, George (1 April 2013). "How Much Longer Before Our First AI Catastrophe?". Gizmodo. Retrieved 27 November 2021.
  2. Muehlhauser, Luke (18 October 2013). "Ben Goertzel on AGI as a Field". Machine Intelligence Research Institute. Retrieved 27 November 2021.
  3. Chalfen, Mike (15 October 2015). "The Challenges Of Building AI Apps". TechCrunch. Retrieved 27 November 2021.
  4. Frankish, Keith; Ramsey, William M., eds. (12 June 2014). The Cambridge Handbook of Artificial Intelligence. Cambridge, UK. p. 342. ISBN 978-0-521-87142-6. OCLC 865297798.
  5. Chandler, Daniel; Munday, Rod (2020). A Dictionary of Media and Communication. Oxford University Press. doi:10.1093/acref/9780198841838.001.0001. ISBN 978-0-19-884183-8.
  6. Colman, Andrew M. (2015). A Dictionary of Psychology (4th ed.). Oxford. ISBN 978-0-19-965768-1. OCLC 896901441.
  7. Lieto, Antonio (2021). Cognitive Design for Artificial Minds. London, UK: Routledge, Taylor & Francis. p. 85. ISBN 9781138207929.
  8. Bartneck, Christoph; Lütge, Christoph; Wagner, Alan; Welsh, Sean (2021). An Introduction to Ethics in Robotics and AI. SpringerBriefs in Ethics. Cham: Springer International Publishing. doi:10.1007/978-3-030-51110-4. ISBN 978-3-030-51109-8. S2CID 224869294.
  9. Liu, Bin (28 March 2021). ""Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest Value for us?". arXiv:2103.15294 [cs.AI].
  10. Kerns, Jeff (15 February 2017). "What's the Difference Between Weak and Strong AI?". ProQuest 1876870051.
  11. LaPlante, Alice; Maliha, Balala (2018). Solving Quality and Maintenance Problems with AI. O'Reilly Media, Inc. ISBN 9781491999561.
  12. Grossman, Gary (3 September 2020). "We're entering the AI twilight zone between narrow and general AI". VentureBeat. Retrieved 16 March 2024.
  13. Szocik, Konrad; Jurkowska-Gomułka, Agata (16 December 2021). "Ethical, Legal and Political Challenges of Artificial Intelligence: Law as a Response to AI-Related Threats and Hopes". World Futures: 1–17. doi:10.1080/02604027.2021.2012876. ISSN 0260-4027. S2CID 245287612.
  14. Kuleshov, Andrey; Prokhorov, Sergei (September 2019). "Domain Dependence of Definitions Required to Standardize and Compare Performance Characteristics of Weak AI Systems". 2019 International Conference on Artificial Intelligence: Applications and Innovations (IC-AIAI). Belgrade, Serbia: IEEE. pp. 62–623. doi:10.1109/IC-AIAI48757.2019.00020. ISBN 978-1-7281-4326-2. S2CID 211298012.
  15. Earley, Seth (2017). "The Problem With AI". IT Professional. 19 (4): 63–67. doi:10.1109/MITP.2017.3051331. ISSN 1520-9202. S2CID 9382416.
  16. Koul, Anirudh; Ganju, Siddha; Kasam, Meher (2019). Practical Deep Learning for Cloud, Mobile, and Edge. O'Reilly Media. ISBN 9781492034865.
  17. Kaiser, Carolin; Ahuvia, Aaron; Rauschnabel, Philipp A.; Wimble, Matt (1 September 2020). "Social media monitoring: What can marketers learn from Facebook brand photos?". Journal of Business Research. 117: 707–717. doi:10.1016/j.jbusres.2019.09.017. ISSN 0148-2963. S2CID 203444643.
  18. Shukla, Rachit; Sinha, Adwitiya; Chaudhary, Ankit (28 February 2022). "TweezBot: An AI-Driven Online Media Bot Identification Algorithm for Twitter Social Networks". Electronics. 11 (5): 743. doi:10.3390/electronics11050743. ISSN 2079-9292.
  19. Kang, Hyunjin (September 2022). "AI agency vs. human agency: understanding human-AI interactions on TikTok and their implications for user engagement". academic.oup.com. Retrieved 8 November 2022.