AI-complete

In the field of artificial intelligence (AI), tasks that are hypothesised to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. [1] Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. [2]

AI-complete problems could be useful, for example, to test for the presence of humans as CAPTCHAs aim to do, and in computer security to circumvent brute-force attacks. [3] [4]
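
As a minimal sketch of this idea, the toy server below issues challenges drawn from a pool of instances of a problem assumed hard for machines and treats a correct answer as evidence of a human respondent. The class, the pool contents, and the placeholder "distorted text" instance are illustrative assumptions, not taken from the cited papers.

```python
import random
import secrets

# Minimal sketch of the CAPTCHA protocol idea [3]: the server keeps a
# pool of (challenge, answer) pairs, where each challenge is an instance
# of a problem assumed hard for machines (e.g. reading distorted text).
# The pool below is hypothetical placeholder data; rendering actual
# distorted images is out of scope here.

class CaptchaServer:
    def __init__(self, pool):
        self.pool = pool      # list of (challenge, answer) pairs
        self.pending = {}     # token -> expected answer

    def issue(self):
        """Pick a random challenge and remember its answer under a token."""
        challenge, answer = random.choice(self.pool)
        token = secrets.token_hex(8)
        self.pending[token] = answer
        return token, challenge

    def verify(self, token, response):
        """One attempt per token: a correct response is taken as evidence
        that the respondent solved the (assumed) hard instance."""
        expected = self.pending.pop(token, None)
        return expected is not None and response.strip().lower() == expected

pool = [("<distorted rendering of 'kite7'>", "kite7")]  # placeholder instance
server = CaptchaServer(pool)
token, challenge = server.issue()
print(server.verify(token, "kite7"))  # True: the challenge was passed
```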

History

The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describe the best-known classes of difficult computational problems. [5] Early uses of the term appear in Erik Mueller's 1987 PhD dissertation [6] and in Eric Raymond's 1991 Jargon File. [7]

Expert systems, which were popular in the 1980s, were able to solve very simple or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempted to "scale up" their systems to handle more complicated, real-world situations, the programs tended to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they would fail when unexpected circumstances outside of their original problem context began to appear. Human beings dealing with new situations are helped by their awareness of the general context: they know what the things around them are, why they are there, and what they are likely to do, and they can recognize unusual situations and adjust accordingly. Expert systems lacked this adaptability and were brittle when facing new situations. [8]

In May 2022, DeepMind published work on a single model, named Gato, trained to do several tasks at the same time: it can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens." [9] Similarly, some tasks once considered AI-complete, like machine translation, [10] are among the capabilities of large language models. [11]

AI-complete problems

AI-complete problems are hypothesised to include:

- AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system) [12]
- Bongard problems [13]
- Computer vision (and subproblems such as object recognition) [14]
- Natural language understanding (and subproblems such as text mining, [15] machine translation, [16] and word sense disambiguation [17])
- Autonomous driving [18]
- Dealing with unexpected circumstances while solving any real-world problem, whether in navigation, planning, or the kind of reasoning done by expert systems [19]

Formalization

Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterised formally. Since many AI problems have no formalization yet, conventional complexity theory does not make it possible to formally define AI-completeness.

To address this problem, a complexity theory for AI has been proposed. [20] It is based on a model of computation that splits the computational burden between a computer and a human: one part is solved by the computer and the other part by the human. This is formalised by a human-assisted Turing machine. The formalisation defines algorithm complexity, problem complexity and reducibility, which in turn allows equivalence classes to be defined.

The complexity of executing an algorithm with a human-assisted Turing machine is given by a pair ⟨ΦH, ΦM⟩, where the first element represents the complexity of the human's part and the second element the complexity of the machine's part.
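
As an illustration, the toy model below counts the two components of the pair separately: oracle queries for the human's part and loop steps for the machine's part. The image-labeling task and the stand-in oracle are assumptions invented for this example, not taken from the cited paper.

```python
# Toy model of a human-assisted Turing machine in the spirit of [20]:
# the "human" is an oracle callback, and the two components of the
# complexity pair <PhiH, PhiM> are measured separately as oracle
# queries (human part) and loop steps (machine part).

def label_images(images, human_oracle):
    human_queries = 0   # cost of the human's part
    machine_steps = 0   # cost of the machine's part
    labels = []
    for image in images:
        labels.append(human_oracle(image))  # one oracle query per image
        human_queries += 1
        machine_steps += 1                  # constant bookkeeping per item
    return labels, (human_queries, machine_steps)

# For n images the measured pair is (n, n), i.e. <O(n), O(n)>.
stand_in_oracle = lambda image: "cat" if "cat" in image else "other"
labels, cost = label_images(["cat_01.png", "dog_02.png"], stand_in_oracle)
print(labels, cost)  # ['cat', 'other'] (2, 2)
```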

Results

The complexity of solving the following problems with a human-assisted Turing machine is: [20]

Research

Roman Yampolskiy [21] suggests that a problem C is AI-Complete if it has two properties:

  1. It is in the set of AI problems (Human Oracle solvable).
  2. Any AI problem can be converted into C by some polynomial time algorithm.

On the other hand, a problem H is AI-Hard if and only if there is an AI-Complete problem C that is polynomial-time Turing-reducible to H. A consequence of these definitions is the existence of AI-Easy problems, which are solvable in polynomial time by a deterministic Turing machine with an oracle for some AI problem.
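
Restated in symbols (a sketch of the definitions above; the letters C, H, E and the reduction notation are introduced here for readability, not taken from the source):

```latex
% AI denotes the set of AI problems (Human Oracle solvable);
% \le_p and \le_p^T are polynomial-time many-one and Turing reductions.
\begin{align*}
  C \text{ is AI-Complete} &\iff C \in \mathrm{AI}
      \ \wedge\ \forall A \in \mathrm{AI}:\ A \le_p C \\
  H \text{ is AI-Hard}     &\iff \exists\, C \text{ AI-Complete}:\ C \le_p^T H \\
  E \text{ is AI-Easy}     &\iff E \in \mathrm{P}^{A} \text{ for some } A \in \mathrm{AI}
\end{align*}
```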

Yampolskiy [22] has also hypothesized that the Turing Test is a defining feature of AI-completeness.

Groppe and Jain [23] classify problems that require artificial general intelligence to reach human-level machine performance as AI-complete, noting that current AI systems can solve only restricted versions of them. For Šekrst, [24] obtaining a polynomial-time solution to AI-complete problems would not necessarily equal solving the problem of strong AI; she emphasizes the lack of computational complexity research as a limiting factor towards achieving artificial general intelligence.

For Kwee-Bintoro and Velez, [25] solving AI-complete problems would have strong repercussions on society.

References

  1. Shapiro, Stuart C. (1992). "Artificial Intelligence". In Stuart C. Shapiro (ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. Archived 2016-02-01 at the Wayback Machine. (Section 4 is on "AI-Complete Tasks".)
  2. Yampolskiy, Roman V. (2013). "Turing Test as a Defining Feature of AI-Completeness". In Xin-She Yang (ed.), Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) - In the footsteps of Alan Turing. Springer, London. pp. 3–17 (Chapter 1). http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf Archived 2013-05-22 at the Wayback Machine.
  3. von Ahn, Luis; Blum, Manuel; Hopper, Nicholas; Langford, John (2003). "CAPTCHA: Using Hard AI Problems for Security". Proceedings of Eurocrypt, Vol. 2656, pp. 294–311. Archived 2016-03-04 at the Wayback Machine.
  4. Bergmair, Richard (January 7, 2006). "Natural Language Steganography and an "AI-complete" Security Primitive". CiteSeerX 10.1.1.105.129.
  5. Mallery, John C. (1988). "Thinking About Foreign Policy: Finding an Appropriate Role for Artificially Intelligent Computers". The 1988 Annual Meeting of the International Studies Association, St. Louis, MO. Archived from the original on 2008-02-29. Retrieved 2007-04-27.
  6. Mueller, Erik T. (March 1987). Daydreaming and Computation (Technical Report CSD-870017). PhD dissertation, University of California, Los Angeles. Archived 2020-10-30 at the Wayback Machine. ("Daydreaming is but one more AI-complete problem: if we could solve any one artificial intelligence problem, we could solve all the others", p. 302.)
  7. Raymond, Eric S. (March 22, 1991). Jargon File Version 2.8.1. Archived 2011-06-04 at the Wayback Machine. (Definition of "AI-complete" first added to the Jargon File.)
  8. Lenat, Douglas; Guha, R. V. (1989). Building Large Knowledge-Based Systems. Addison-Wesley. pp. 1–5.
  9. "A Generalist Agent". www.deepmind.com. Archived from the original on 2022-08-02. Retrieved 2022-05-26.
  10. Katz, Miranda. "Welcome to the Era of the AI Coworker | Backchannel". Wired. ISSN 1059-1028. Retrieved 2024-04-28.
  11. "Unveiling the Power of Large Language Models (LLMs)". www.unite.ai. Retrieved 2024-04-28.
  12. Stockton, Nick. "If AI Can Fix Peer Review in Science, AI Can Do Anything". Wired. ISSN 1059-1028. Retrieved 2024-04-27.
  13. Šekrst, Kristina (2020). "AI-Completeness: Using Deep Learning to Eliminate the Human Factor". In Skansi, Sandro (ed.), Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives. Cham: Springer International Publishing. pp. 117–130. doi:10.1007/978-3-030-37591-1_11. ISBN 978-3-030-37591-1. Retrieved 2024-03-25.
  14. Strat, Thomas M.; Chellappa, Rama; Patel, Vishal M. (2020). "Vision and robotics". AI Magazine. 41 (2): 49–65. doi:10.1609/aimag.v41i2.5299. S2CID 220687545 – via ABI/INFORM Collection.
  15. Krestel, Ralf; Aras, Hidir; Andersson, Linda; Piroi, Florina; Hanbury, Allan; Alderucci, Dean (2022-07-06). "3rd Workshop on Patent Text Mining and Semantic Technologies (PatentSemTech2022)". Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. Madrid, Spain: ACM. pp. 3474–3477. doi:10.1145/3477495.3531702. ISBN 978-1-4503-8732-3. S2CID 250340282. Archived from the original on 2023-04-15. Retrieved 2023-04-15.
  16. Orynycz, Petro (2022). "Say It Right: AI Neural Machine Translation Empowers New Speakers to Revitalize Lemko". In Degen, Helmut; Ntoa, Stavroula (eds.), Artificial Intelligence in HCI. Lecture Notes in Computer Science, vol. 13336. Cham: Springer International Publishing. pp. 567–580. doi:10.1007/978-3-031-05643-7_37. ISBN 978-3-031-05642-0. Retrieved 2023-04-15.
  17. Ide, N.; Veronis, J. (1998). "Introduction to the special issue on word sense disambiguation: the state of the art" (PDF). Computational Linguistics. 24 (1): 2–40. Archived (PDF) from the original on 2022-10-09.
  18. Musk, Elon (April 14, 2022). "Elon Musk talks Twitter, Tesla and how his brain works — live at TED2022". TED (conference) (Interview). Interviewed by Chris Anderson. Vancouver. Archived from the original on December 15, 2022. Retrieved December 15, 2022.
  19. Šekrst, Kristina (2020). "Chapter 11 - AI-Completeness: Using Deep Learning to Eliminate the Human Factor". In Skansi, Sandro (ed.), Guide to Deep Learning Basics. Springer. ISBN 978-3-030-37591-1.
  20. Shahaf, Dafna; Amir, Eyal (2007). Towards a theory of AI completeness (Archived 2020-11-07 at the Wayback Machine). Commonsense 2007, 8th International Symposium on Logical Formalizations of Commonsense Reasoning (Archived 2021-01-19 at the Wayback Machine).
  21. Yampolskiy, Roman (2012). "AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI" (PDF). 23rd Midwest Artificial Intelligence and Cognitive Science Conference, MAICS 2012, Cincinnati, Ohio, USA, 21–22 April 2012. Retrieved 2024-04-05.
  22. Yampolskiy, Roman (2013). "Turing Test as a Defining Feature of AI-Completeness". Artificial Intelligence, Evolutionary Computing and Metaheuristics. Studies in Computational Intelligence, vol. 427. pp. 3–17. doi:10.1007/978-3-642-29694-9_1. ISBN 978-3-642-29693-2.
  23. Groppe, Sven; Jain, Sarika (2024). "The Way Forward with AI-Complete Problems". New Generation Computing. 42: 1–5. doi:10.1007/s00354-024-00251-8.
  24. Šekrst, Kristina (2020). "AI-Completeness: Using Deep Learning to Eliminate the Human Factor". In Skansi, Sandro (ed.), Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives. Cham: Springer International Publishing. pp. 117–130. doi:10.1007/978-3-030-37591-1_11. ISBN 978-3-030-37591-1. Retrieved 2024-04-05.
  25. Kwee-Bintoro, Ted; Velez, Noah (2022). "AI-Complete: What it Means to Be Human in an Increasingly Computerized World". Bridging Human Intelligence and Artificial Intelligence. Educational Communications and Technology: Issues and Innovations. Cham: Springer. pp. 257–274. doi:10.1007/978-3-030-84729-6_18. ISBN 978-3-030-84728-9.