ASR-complete

ASR-complete is, by analogy with "NP-completeness" in complexity theory, a term indicating that the difficulty of a computational problem is equivalent to that of solving the central problem of automatic speech recognition: recognizing and understanding spoken language. Unlike "NP-completeness", the term is typically used informally.

Such problems are hypothesised to include:

These problems are easy for humans to do (indeed, they are described directly in terms of imitating humans). Some systems can solve very simple, restricted versions of these problems, but none can solve them in their full generality.

Related Research Articles

In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
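
The worked example above can be checked mechanically. The following is a minimal brute-force satisfiability checker, sketched in Python (the function and variable names are illustrative); trying all 2^n assignments is exactly the exponential search that an efficient SAT algorithm would have to avoid.

    from itertools import product

    def is_satisfiable(formula, variables):
        """Brute-force SAT check: try every TRUE/FALSE assignment.

        `formula` is a function mapping an assignment dict to a bool.
        Runs in O(2^n) time for n variables, illustrating why SAT is
        easy to verify but, apparently, hard to solve in general.
        """
        for values in product([True, False], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if formula(assignment):
                return assignment  # a satisfying assignment is a short proof
        return None  # unsatisfiable

    # "a AND NOT b" is satisfiable: a = TRUE, b = FALSE
    print(is_satisfiable(lambda v: v["a"] and not v["b"], ["a", "b"]))
    # "a AND NOT a" is unsatisfiable
    print(is_satisfiable(lambda v: v["a"] and not v["a"], ["a"]))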

The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm.

<span class="mw-page-title-main">NP (complexity)</span> Complexity class used to classify decision problems

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.
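
To illustrate such polynomial-time verification, consider graph 3-colorability, used here purely as an example of a problem in NP: finding a valid coloring may require search, but checking a proposed coloring, which serves as the certificate, takes only one pass over the edges. A minimal Python sketch:

    def verify_3_coloring(edges, coloring):
        """Polynomial-time verifier for a 3-coloring certificate.

        `edges` is a list of (u, v) pairs; `coloring` maps each vertex
        to one of three colors. Checking every edge once is O(|E|), so
        a "yes" instance has a proof verifiable in polynomial time --
        the defining property of NP.
        """
        return (all(c in {0, 1, 2} for c in coloring.values())
                and all(coloring[u] != coloring[v] for u, v in edges))

    # A triangle is 3-colorable; the first coloring is a valid certificate.
    triangle = [("x", "y"), ("y", "z"), ("x", "z")]
    print(verify_3_coloring(triangle, {"x": 0, "y": 1, "z": 2}))  # True
    print(verify_3_coloring(triangle, {"x": 0, "y": 0, "z": 2}))  # False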

Natural language processing (NLP) is an interdisciplinary subfield of computer science and information retrieval. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. To this end, natural language processing often borrows ideas from theoretical linguistics. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation using an algorithm, how efficiently they can be solved, and to what degree (e.g., approximate rather than exact solutions). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".

<span class="mw-page-title-main">Stephen Cook</span> American-Canadian computer scientist, contributor to complexity theory

Stephen Arthur Cook is an American-Canadian computer scientist and mathematician who has made significant contributions to the fields of complexity theory and proof complexity. He is a university professor emeritus at the University of Toronto, Department of Computer Science and Department of Mathematics.

<span class="mw-page-title-main">NP-hardness</span> Complexity class

In computational complexity theory, a computational problem H is called NP-hard if, for every problem L which can be solved in non-deterministic polynomial time, there is a polynomial-time reduction from L to H. That is, assuming a solution for H takes one unit of time, H's solution can be used to solve L in polynomial time. As a consequence, finding a polynomial-time algorithm for any single NP-hard problem would give polynomial-time algorithms for all the problems in the complexity class NP. Since it is suspected that P≠NP, it is unlikely that such an algorithm exists, though this has not been proven. A simple example of an NP-hard problem is the subset sum problem.
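
The asymmetry is easy to see for subset sum: an exhaustive solver must inspect up to 2^n subsets, while verifying any proposed subset takes a single summation. A Python sketch (the numbers are arbitrary):

    from itertools import combinations

    def subset_sum(nums, target):
        """Exhaustive search for the subset sum problem (NP-hard).

        Tries all 2^n subsets, so the running time grows exponentially
        with the input size; no polynomial-time algorithm is known, and
        none is expected unless P = NP.
        """
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == target:  # verification itself is O(n)
                    return subset
        return None

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # (4, 5)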

In computational complexity theory, a decision problem is P-complete if it is in P and every problem in P can be reduced to it by an appropriate reduction.

<span class="mw-page-title-main">Complexity class</span> Set of problems in computational complexity theory

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computational complexity theory, the complexity class NEXPTIME is the set of decision problems that can be solved by a non-deterministic Turing machine using time 2^{n^{O(1)}}, i.e., time exponential in a polynomial of the input length.

<span class="mw-page-title-main">Dialogue system</span>

A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human. Dialogue systems employ one or more of text, speech, graphics, haptics, gestures, and other modes of communication on both the input and output channels.

Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system.
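
WER is conventionally computed as the word-level edit (Levenshtein) distance between a reference transcript and the system's hypothesis, divided by the number of words in the reference: WER = (S + D + I) / N, where S, D, and I count substitutions, deletions, and insertions. A minimal Python sketch, with made-up example sentences:

    def word_error_rate(reference, hypothesis):
        """WER = (substitutions + deletions + insertions) / reference length,
        computed as word-level edit distance via dynamic programming."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i  # i deletions
        for j in range(len(hyp) + 1):
            dp[0][j] = j  # j insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
                dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                               dp[i][j - 1] + 1,       # insertion
                               dp[i - 1][j - 1] + cost)
        return dp[-1][-1] / len(ref)

    # One substitution ("see" for "sea") in a four-word reference: 0.25
    print(word_error_rate("she sells sea shells", "she sells see shells"))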

A spoken dialog system (SDS) is a computer system able to converse with a human by voice. It has two essential components that do not exist in a written-text dialog system: a speech recognizer and a text-to-speech module. It can be further distinguished from command-and-control speech systems, which can respond to requests but do not attempt to maintain continuity over time.

James Frederick Allen is an American computational linguist recognized for his contributions to temporal logic, in particular Allen's interval algebra. He is interested in knowledge representation, commonsense reasoning, and natural language understanding, believing that "deep language understanding can only currently be achieved by significant hand-engineering of semantically-rich formalisms coupled with statistical preferences". He is the John H. Dessaurer Professor of Computer Science at the University of Rochester.

Fluency refers to continuity, smoothness, rate, and effort in speech production. It is also used to characterize language production, language ability or language proficiency.

<span class="mw-page-title-main">NP-completeness</span> Complexity class

In computational complexity theory, a problem is NP-complete when:

  1. It is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no".
  2. When the answer is "yes", this can be demonstrated through the existence of a short solution.
  3. The correctness of each solution can be verified quickly and a brute-force search algorithm can find a solution by trying all possible solutions.
  4. The problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified.
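
As a concrete illustration of point 4, the sketch below (an illustrative, simplified encoding, not a full Cook-Levin construction) reduces graph 3-coloring to Boolean satisfiability in Python; a fast SAT algorithm would therefore immediately yield a fast 3-coloring algorithm.

    from itertools import product

    def color_to_sat(vertices, edges):
        """Encode 3-colorability as a Boolean formula over one variable
        (v, c) per vertex/color pair: every vertex gets some color, and
        no edge joins two same-colored vertices."""
        variables = [(v, c) for v in vertices for c in range(3)]

        def formula(a):
            colored = all(any(a[(v, c)] for c in range(3)) for v in vertices)
            no_clash = all(not (a[(u, c)] and a[(v, c)])
                           for u, v in edges for c in range(3))
            return colored and no_clash

        return variables, formula

    # Deciding 3-colorability of a triangle is now a SAT question,
    # answerable (slowly) by brute force over all assignments.
    variables, formula = color_to_sat(["x", "y", "z"],
                                      [("x", "y"), ("y", "z"), ("x", "z")])
    print(any(formula(dict(zip(variables, values)))
              for values in product([True, False], repeat=len(variables))))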

The following outline is provided as an overview of and topical guide to natural-language processing.

Computational heuristic intelligence (CHI) refers to specialized programming techniques in computational intelligence whose express goal is to avoid computationally intractable (e.g., NP-hard) problems by using human-like techniques. They are best summarized as the use of exemplar-based methods (heuristics) rather than rule-based methods (algorithms); hence the term is distinct from the more conventional computational algorithmic intelligence, or symbolic AI. An example of a CHI technique is the encoding specificity principle of Tulving and Thompson. In general, CHI principles are problem-solving techniques used by people rather than programmed into machines; it is by drawing attention to this key distinction that the use of the term is justified in a field already replete with confusing neologisms. Note that the legal systems of all modern human societies employ both heuristics drawn from individual trial records and legislated statutes (rules) as regulatory guides.
