AI-complete

In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. [1] Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.

In the past, problems supposed to be AI-complete included computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. [2] AI-complete problems were notably considered useful for testing the presence of humans, as CAPTCHAs aim to do, and in computer security to circumvent brute-force attacks. [3] [4]

History

The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. [5] Early uses of the term are in Erik Mueller's 1987 PhD dissertation [6] and in Eric Raymond's 1991 Jargon File. [7]

Expert systems, which were popular in the 1980s, were able to solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempted to "scale up" their systems to handle more complicated, real-world situations, the programs tended to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they would fail when unexpected circumstances outside of their original problem context began to appear. When human beings deal with new situations in the world, they are helped by their awareness of the general context: they know what the things around them are, why they are there, what they are likely to do, and so on. They can recognize unusual situations and adjust accordingly. Expert systems lacked this adaptability and were brittle when facing new situations. [8]

In May 2022, DeepMind published work in which they trained a single model to perform several tasks at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens." [9] Similarly, some tasks once considered to be AI-complete, like machine translation, [10] are among the capabilities of large language models. [11]

AI-complete problems

AI-complete problems have been hypothesized to include AI peer review, [12] Bongard problems, [13] computer vision, [14] natural language understanding and subproblems such as text mining, machine translation, and word-sense disambiguation, [15] [16] [17] autonomous driving, [18] and dealing with unexpected circumstances while solving any real-world problem. [19]

Formalization

Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterized formally. Since many AI problems have no formalization yet, conventional complexity theory does not enable a formal definition of AI-completeness.

Research

Roman Yampolskiy [20] suggests that a problem C is AI-Complete if it has two properties:

  1. It is in the set of AI problems (problems solvable with the help of a Human Oracle).
  2. Any AI problem can be converted into C by some polynomial time algorithm.

On the other hand, a problem H is AI-Hard if and only if there is an AI-Complete problem C that is polynomial-time Turing-reducible to H. As a consequence, there also exist AI-Easy problems, which are solvable in polynomial time by a deterministic Turing machine with an oracle for some AI problem.
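Read together, these definitions can be restated schematically. The following LaTeX sketch is an informal reconstruction in the spirit of Yampolskiy's classification; the symbols C, H, E and the notation for reductions and oracles are supplied here for illustration rather than quoted from the paper.

    % Informal sketch of the AI-Complete / AI-Hard / AI-Easy classification
    % (notation reconstructed for illustration, not quoted from the source).
    \begin{align*}
      C \in \text{AI-Complete} &\iff C \text{ is solvable with a Human Oracle, and } \\
                               &\qquad A \le_{p} C \text{ for every AI problem } A \\[4pt]
      H \in \text{AI-Hard}     &\iff \exists\, C \in \text{AI-Complete} \text{ such that } C \le_{T}^{p} H \\[4pt]
      E \in \text{AI-Easy}     &\iff E \in \mathrm{P}^{A} \text{ for some AI problem } A
    \end{align*}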

Yampolskiy [21] has also hypothesized that the Turing Test is a defining feature of AI-completeness.

Groppe and Jain [22] classify problems which require artificial general intelligence to reach human-level machine performance as AI-complete, whereas current AI systems can solve only restricted versions of such problems. For Šekrst, [13] obtaining a polynomial-time solution to AI-complete problems would not necessarily amount to solving the problem of artificial general intelligence; she also emphasizes the lack of computational complexity research as a limiting factor in achieving artificial general intelligence.

For Kwee-Bintoro and Velez, [23] solving AI-complete problems would have strong repercussions on society.

See also

Related Research Articles

The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm.

<span class="mw-page-title-main">NP (complexity)</span> Complexity class used to classify decision problems

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.
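As an illustration of the verifier-based characterization, here is a minimal sketch in Python, using subset sum as the example NP problem; the function name and the representation of certificates are chosen here for illustration. The verifier checks a proposed certificate in time linear in the input, even though finding a certificate may require examining exponentially many subsets.

    # Polynomial-time verifier for SUBSET-SUM, a problem in NP: given numbers,
    # a target, and a certificate (a proposed subset, given as indices),
    # decide in linear time whether the certificate proves a "yes" answer.
    def verify_subset_sum(numbers, target, certificate):
        if len(set(certificate)) != len(certificate):
            return False                      # indices must be distinct
        if any(i < 0 or i >= len(numbers) for i in certificate):
            return False                      # indices must be in range
        return sum(numbers[i] for i in certificate) == target

    # Example: the certificate [0, 2] proves that [3, 5, 7] has a subset summing to 10.
    assert verify_subset_sum([3, 5, 7], 10, [0, 2])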

In theoretical computer science, a nondeterministic Turing machine (NTM) is a theoretical model of computation whose governing rules specify more than one possible action in some situations. That is, an NTM's next state is not completely determined by its current state and the tape symbol it sees, unlike a deterministic Turing machine.

<span class="mw-page-title-main">Quantum computing</span> Computer hardware technology that uses quantum mechanics

A quantum computer is a computer that exploits quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. Theoretically a large-scale quantum computer could break some widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.

In computational complexity theory, a decision problem is PSPACE-complete if it can be solved using an amount of memory that is polynomial in the input length and if every other problem that can be solved in polynomial space can be transformed to it in polynomial time. The problems that are PSPACE-complete can be thought of as the hardest problems in PSPACE, the class of decision problems solvable in polynomial space, because a solution to any one such problem could easily be used to solve any other problem in PSPACE.

Natural language understanding (NLU) or natural language interpretation (NLI) is a subset of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU has been considered an AI-hard problem.

In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem to the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second.
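A classic concrete case is the reduction from INDEPENDENT-SET to VERTEX-COVER: a graph with n vertices has an independent set of size k exactly when it has a vertex cover of size n - k, so the transformation only has to rewrite the bound. The Python sketch below is illustrative; the graph representation (a vertex count plus an edge list) is an arbitrary choice.

    # Polynomial-time reduction from INDEPENDENT-SET to VERTEX-COVER.
    # A set S of vertices is independent iff its complement is a vertex cover,
    # so (G, k) is a yes-instance of INDEPENDENT-SET
    # iff (G, n - k) is a yes-instance of VERTEX-COVER.
    def independent_set_to_vertex_cover(num_vertices, edges, k):
        # The graph is passed through unchanged; only the size bound changes.
        return num_vertices, edges, num_vertices - k

    # Example: a triangle has an independent set of size 1
    # iff it has a vertex cover of size 2.
    print(independent_set_to_vertex_cover(3, [(0, 1), (1, 2), (0, 2)], 1))
    # -> (3, [(0, 1), (1, 2), (0, 2)], 2)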

<span class="mw-page-title-main">Time complexity</span> Estimate of time taken for running an algorithm

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
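A small sketch of this "count the elementary operations" view, in Python; the instrumented functions below are illustrative only, with comparisons taken as the elementary operation. Linear search performs a number of comparisons proportional to n, while binary search performs roughly log2(n).

    # Counting comparisons as the elementary operation of two search algorithms.
    def linear_search_comparisons(items, target):
        comparisons = 0
        for value in items:
            comparisons += 1
            if value == target:
                break
        return comparisons                     # grows linearly with len(items)

    def binary_search_comparisons(sorted_items, target):
        comparisons, lo, hi = 0, 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if sorted_items[mid] == target:
                break
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return comparisons                     # grows roughly with log2(len(items))

    data = list(range(1_000_000))
    print(linear_search_comparisons(data, 999_999))   # 1,000,000 comparisons
    print(binary_search_comparisons(data, 999_999))   # about 20 comparisons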

<span class="mw-page-title-main">Complexity class</span> Set of problems in computational complexity theory

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999).
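As one standard illustration of measuring complexity in a parameter, the Python sketch below solves VERTEX COVER parameterized by the cover size k with a bounded search tree; the code is illustrative and not drawn from any particular source. The search tree has depth at most k, so the running time is exponential only in the parameter, not in the number of vertices.

    # Bounded-search-tree algorithm for VERTEX COVER parameterized by k:
    # pick any remaining edge (u, v); every cover must contain u or v,
    # so branch on both choices. The recursion depth is at most k,
    # giving roughly 2**k * |E| work -- exponential only in the parameter.
    def has_vertex_cover(edges, k):
        if not edges:
            return True                        # nothing left to cover
        if k == 0:
            return False                       # edges remain but the budget is spent
        u, v = edges[0]
        for chosen in (u, v):                  # branch: put u or v into the cover
            remaining = [(a, b) for a, b in edges if chosen not in (a, b)]
            if has_vertex_cover(remaining, k - 1):
                return True
        return False

    # Example: a triangle has a vertex cover of size 2 but not of size 1.
    triangle = [(0, 1), (1, 2), (0, 2)]
    print(has_vertex_cover(triangle, 2))  # True
    print(has_vertex_cover(triangle, 1))  # False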

In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

<span class="mw-page-title-main">PH (complexity)</span> Class in computational complexity theory

In computational complexity theory, the complexity class PH is the union of all complexity classes in the polynomial hierarchy: PH = ⋃_k Σ_k^P.

<span class="mw-page-title-main">PP (complexity)</span> Class of problems in computer science

In complexity theory, PP, or PPT, is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of less than 1/2 for all instances. The abbreviation PP refers to probabilistic polynomial time. The complexity class was defined by Gill in 1977.

In computational complexity theory, the complexity class FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time. It is the function problem version of the decision problem class P. Roughly speaking, it is the class of functions that can be efficiently computed on classical computers without randomization.

In the mathematical discipline of graph theory, a feedback vertex set (FVS) of a graph is a set of vertices whose removal leaves a graph without cycles. Equivalently, each FVS contains at least one vertex of any cycle in the graph. The feedback vertex set number of a graph is the size of a smallest feedback vertex set. The minimum feedback vertex set problem is an NP-complete problem; it was among the first problems shown to be NP-complete. It has wide applications in operating systems, database systems, and VLSI chip design.
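The definition can be checked directly: removing a feedback vertex set must leave a graph with no cycles. The Python sketch below is illustrative; it tests acyclicity of the remaining undirected graph with a simple union-find cycle check, which is just one convenient way to verify that what remains is a forest.

    # Check whether `fvs` is a feedback vertex set of an undirected graph:
    # after deleting those vertices, the remaining graph must be acyclic.
    def is_feedback_vertex_set(num_vertices, edges, fvs):
        removed = set(fvs)
        parent = list(range(num_vertices))

        def find(x):                           # root of x's component, with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for u, v in edges:
            if u in removed or v in removed:
                continue                       # the edge disappears with its endpoint
            root_u, root_v = find(u), find(v)
            if root_u == root_v:
                return False                   # this edge closes a cycle
            parent[root_u] = root_v
        return True

    # Example: in a triangle, any single vertex is a feedback vertex set.
    print(is_feedback_vertex_set(3, [(0, 1), (1, 2), (0, 2)], [2]))  # True
    print(is_feedback_vertex_set(3, [(0, 1), (1, 2), (0, 2)], []))   # False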

<span class="mw-page-title-main">NP-completeness</span> Complexity class

In computational complexity theory, a problem is NP-complete when:

  1. It is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no".
  2. When the answer is "yes", this can be demonstrated through the existence of a short solution.
  3. The correctness of each solution can be verified quickly and a brute-force search algorithm can find a solution by trying all possible solutions (see the sketch after this list).
  4. The problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified.
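To make points 2 to 4 concrete, the Python sketch below uses CNF satisfiability as the example; the formula encoding and function names are chosen here for illustration. Verifying a candidate assignment against the formula is fast, while the straightforward way to find one is to try all 2^n assignments.

    from itertools import product

    # A CNF formula is a list of clauses; each clause is a list of literals,
    # where the literal +i means variable i is true and -i means it is false.
    def satisfies(formula, assignment):
        # Fast verification: every clause must contain a satisfied literal.
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in formula
        )

    def brute_force_sat(formula, num_vars):
        # Brute-force search: try all 2**num_vars assignments and verify each.
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            if satisfies(formula, assignment):
                return assignment
        return None

    # Example: (x1 or not x2) and (x2 or x3) is satisfiable.
    print(brute_force_sat([[1, -2], [2, 3]], 3))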

Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.

<span class="mw-page-title-main">Roman Yampolskiy</span> Latvian computer scientist (born 1979)

Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, mostly known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008). He is the founder and current director of Cyber Security Lab, in the department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville.

This glossary of quantum computing is a list of definitions of terms and concepts used in quantum computing, its sub-disciplines, and related fields.

References

  1. Shapiro, Stuart C. (1992). Artificial Intelligence Archived 2016-02-01 at the Wayback Machine. In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)
  2. Yampolskiy, Roman (January 2013). "Turing Test as a Defining Feature of AI-Completeness" (PDF). Artificial Intelligence, Evolutionary Computing and Metaheuristics. Archived from the original (PDF) on 2013-05-22.
  3. Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. CAPTCHA: Using Hard AI Problems for Security Archived 2016-03-04 at the Wayback Machine. In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.
  4. Bergmair, Richard (January 7, 2006). "Natural Language Steganography and an "AI-complete" Security Primitive". CiteSeerX 10.1.1.105.129. (unpublished?)
  5. Mallery, John C. (1988), "Thinking About Foreign Policy: Finding an Appropriate Role for Artificially Intelligent Computers", The 1988 Annual Meeting of the International Studies Association, St. Louis, MO, archived from the original on 2008-02-29, retrieved 2007-04-27.
  6. Mueller, Erik T. (1987, March). Daydreaming and Computation (Technical Report CSD-870017) Archived 2020-10-30 at the Wayback Machine. PhD dissertation, University of California, Los Angeles. ("Daydreaming is but one more AI-complete problem: if we could solve any one artificial intelligence problem, we could solve all the others", p. 302)
  7. Raymond, Eric S. (1991, March 22). Jargon File Version 2.8.1 Archived 2011-06-04 at the Wayback Machine (Definition of "AI-complete" first added to jargon file.)
  8. Lenat, Douglas; Guha, R. V. (1989), Building Large Knowledge-Based Systems, Addison-Wesley, pp. 1–5
  9. "A Generalist Agent". www.deepmind.com. Archived from the original on 2022-08-02. Retrieved 2022-05-26.
  10. Katz, Miranda. "Welcome to the Era of the AI Coworker | Backchannel". Wired. ISSN 1059-1028. Retrieved 2024-04-28.
  11. "Unveiling the Power of Large Language Models (LLMs)". www.unite.ai. Retrieved 2024-04-28.
  12. Stockton, Nick. "If AI Can Fix Peer Review in Science, AI Can Do Anything". Wired. ISSN 1059-1028. Retrieved 2024-04-27.
  13. Šekrst, Kristina (2020), Skansi, Sandro (ed.), "AI-Completeness: Using Deep Learning to Eliminate the Human Factor", Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives, Cham: Springer International Publishing, pp. 117–130, doi:10.1007/978-3-030-37591-1_11, ISBN 978-3-030-37591-1, retrieved 2024-04-05
  14. Strat, Thomas M.; Chellappa, Rama; Patel, Vishal M. (2020). "Vision and robotics". AI Magazine. 42 (2): 49–65. doi:10.1609/aimag.v41i2.5299. S2CID 220687545 – via ABI/INFORM Collection.
  15. Krestel, Ralf; Aras, Hidir; Andersson, Linda; Piroi, Florina; Hanbury, Allan; Alderucci, Dean (2022-07-06). "3rd Workshop on Patent Text Mining and Semantic Technologies (PatentSemTech2022)". Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. Madrid, Spain: ACM. pp. 3474–3477. doi:10.1145/3477495.3531702. ISBN 978-1-4503-8732-3. S2CID 250340282. Archived from the original on 2023-04-15. Retrieved 2023-04-15.
  16. Orynycz, Petro (2022), Degen, Helmut; Ntoa, Stavroula (eds.), "Say It Right: AI Neural Machine Translation Empowers New Speakers to Revitalize Lemko", Artificial Intelligence in HCI, Lecture Notes in Computer Science, vol. 13336, Cham: Springer International Publishing, pp. 567–580, doi:10.1007/978-3-031-05643-7_37, ISBN 978-3-031-05642-0, retrieved 2023-04-15
  17. Ide, N.; Veronis, J. (1998). "Introduction to the special issue on word sense disambiguation: the state of the art" (PDF). Computational Linguistics. 24 (1): 2–40. Archived (PDF) from the original on 2022-10-09.
  18. Musk, Elon (April 14, 2022). "Elon Musk talks Twitter, Tesla and how his brain works — live at TED2022". TED (conference) (Interview). Interviewed by Chris Anderson. Vancouver. Archived from the original on December 15, 2022. Retrieved December 15, 2022.
  19. Šekrst, Kristina (2020), "Chapter 11 - AI-Completeness: Using Deep Learning to Eliminate the Human Factor", in Skansi, Sandro (ed.), Guide to Deep Learning Basics, Springer, ISBN 978-3-030-37591-1
  20. Yampolskiy, Roman (2012), "AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI" (PDF), 23rd Midwest Artificial Intelligence and Cognitive Science Conference, MAICS 2012, Cincinnati, Ohio, USA, 21-22 April 2012, retrieved 2024-04-05
  21. Yampolskiy, Roman (2013), "Turing Test as a Defining Feature of AI-Completeness", Artificial Intelligence, Evolutionary Computing and Metaheuristics, Studies in Computational Intelligence, vol. 427, pp. 3–17, doi:10.1007/978-3-642-29694-9_1, ISBN 978-3-642-29693-2
  22. Groppe, Sven; Jain, Sarika (2024), "The Way Forward with AI-Complete Problems", New Generation Computing, 42: 1–5, doi:10.1007/s00354-024-00251-8
  23. Kwee-Bintoro, Ted; Velez, Noah (2022), "AI-Complete: What it Means to Be Human in an Increasingly Computerized World", Bridging Human Intelligence and Artificial Intelligence, Educational Communications and Technology: Issues and Innovations, Cham: Springer, pp. 257–274, doi:10.1007/978-3-030-84729-6_18, ISBN   978-3-030-84728-9