Natural-language understanding

Natural-language understanding (NLU) or natural-language interpretation (NLI) [1] is a subset of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem. [2]

There is considerable commercial interest in the field because of its application to automated reasoning, [3] machine translation, [4] question answering, [5] news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis.

History

The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at natural-language understanding by a computer. [6] [7] [8] [9] [10] Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled Natural Language Input for a Computer Problem Solving System) showed how a computer could understand simple natural language input to solve algebra word problems.

A year later, in 1965, Joseph Weizenbaum at MIT wrote ELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases, and Weizenbaum sidestepped the problem of giving the program a database of real-world knowledge or a rich lexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used by Ask.com. [11]

In 1969, Roger Schank at Stanford University introduced the conceptual dependency theory for natural-language understanding. [12] This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input. [13] Instead of phrase-structure rules, ATNs used an equivalent set of finite-state automata that were called recursively. ATNs, and their more general form called "generalized ATNs", continued to be used for a number of years.
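
The core idea behind ATNs, replacing phrase-structure rules with small networks that may call one another recursively, can be illustrated with a toy recursive-transition-network recognizer. The grammar, lexicon, and sentence below are invented for illustration and omit the registers and arc conditions that made Woods's networks "augmented".

```python
# Toy recursive transition network (RTN) recognizer, a minimal sketch of the
# idea behind ATNs. The lexicon, networks, and sentence are hypothetical.
LEXICON = {"the": "DET", "a": "DET", "dog": "N", "cat": "N",
           "saw": "V", "chased": "V"}

# Each network: arcs map a state to (label, next_state) pairs; "final" states
# accept. A label naming another network triggers a recursive call, which is
# what replaces explicit phrase-structure rules.
NETWORKS = {
    "S":  {"arcs": {0: [("NP", 1)], 1: [("VP", 2)]}, "final": {2}},
    "NP": {"arcs": {0: [("DET", 1)], 1: [("N", 2)]}, "final": {2}},
    "VP": {"arcs": {0: [("V", 1)], 1: [("NP", 2)]},  "final": {1, 2}},
}

def traverse(net, words, pos, state=0):
    """Return every word position reachable after completing `net` from `state`."""
    spec = NETWORKS[net]
    positions = set()
    if state in spec["final"]:
        positions.add(pos)
    for label, nxt in spec["arcs"].get(state, []):
        if label in NETWORKS:                        # push into a sub-network
            for p in traverse(label, words, pos):
                positions |= traverse(net, words, p, nxt)
        elif pos < len(words) and LEXICON.get(words[pos]) == label:
            positions |= traverse(net, words, pos + 1, nxt)
    return positions

words = "the dog chased a cat".split()
print(len(words) in traverse("S", words, 0))         # True: the sentence is accepted
```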

In 1971, Terry Winograd finished writing SHRDLU for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field. [14] [15] Winograd continued to be a major influence in the field with the publication of his book Language as a Cognitive Process. [16] At Stanford, Winograd would later advise Larry Page, who co-founded Google.

In the 1970s and 1980s, the natural language processing group at SRI International continued research and development in the field. A number of commercial efforts based on the research were undertaken, e.g., in 1982 Gary Hendrix formed Symantec Corporation originally as a company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-driven graphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time, e.g., Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at Cognitive Systems Corp. [17] [18] In 1983, Michael Dyer developed the BORIS system at Yale which bore similarities to the work of Roger Schank and W. G. Lehnert. [19]

The third millennium saw the introduction of systems using machine learning for text classification, such as IBM Watson. However, experts debate how much "understanding" such systems demonstrate: for example, according to John Searle, Watson did not even understand the questions. [20]

John Ball, cognitive scientist and inventor of Patom Theory, supports this assessment. Natural language processing has made inroads in applications that support human productivity in service and e-commerce, but this has largely been possible only by narrowing the scope of the application. There are thousands of ways to request something in a human language, and this variability still defies conventional natural language processing.[citation needed] According to Wibe Wagemans, "To have a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork." [21]

Scope and context

The umbrella term "natural-language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes. For instance, text classification for the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text, [22] but it needs to deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata; a minimal sketch of such a shallow classifier follows below.
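
Such shallow routing can be sketched with an ordinary statistical text classifier. The snippet below assumes scikit-learn is available; the example messages and department labels are invented for illustration, and nothing approaching deep understanding of the text takes place.

```python
# Minimal sketch of email routing by shallow text classification (assumes
# scikit-learn is installed). Training messages and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "My invoice shows the wrong amount",
    "I was charged twice for my subscription",
    "The app crashes when I open settings",
    "I cannot log in after the latest update",
]
departments = ["billing", "billing", "support", "support"]

router = make_pipeline(TfidfVectorizer(), MultinomialNB())
router.fit(emails, departments)

# With this toy data, a refund request should be routed to "billing".
print(router.predict(["I was charged twice, please refund the duplicate charge"]))
```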

Over the years, various attempts at processing natural language or English-like sentences presented to computers have been made, at varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but have helped overall system usability. For example, Wayne Ratliff originally developed the Vulcan program with an English-like syntax to mimic the English-speaking computer in Star Trek. Vulcan later became the dBase system whose easy-to-use syntax effectively launched the personal computer database industry. [23] [24] Systems with an easy-to-use or English-like syntax are, however, quite distinct from systems that use a rich lexicon and include an internal representation (often as first-order logic) of the semantics of natural language sentences.

Hence the breadth and depth of "understanding" aimed at by a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest, English-like command interpreters require minimal complexity, but have a small range of applications. Narrow but deep systems explore and model mechanisms of understanding, [25] but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching and to judge its suitability for a user are broader and require significant complexity, [26] but they are still somewhat shallow. Systems that are both very broad and very deep are beyond the current state of the art.

Components and architecture

Regardless of the approach used, most natural-language-understanding systems share some common components. The system needs a lexicon of the language, together with a parser and grammar rules, to break sentences into an internal representation. The construction of a rich lexicon with a suitable ontology requires significant effort; for example, the WordNet lexicon required many person-years of effort. [27]
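
As a rough sketch of these components, the snippet below uses NLTK (assumed to be installed) with a toy grammar and lexicon, vastly smaller than a resource such as WordNet, to break a sentence into an internal phrase-structure representation.

```python
# Minimal sketch of the lexicon + grammar + parser pipeline using NLTK
# (assumed installed). The grammar and lexicon are toy examples.
import nltk

grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> Det N
    VP  -> V NP
    Det -> 'the' | 'a'
    N   -> 'robot' | 'block'
    V   -> 'moves' | 'lifts'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the robot lifts a block".split()):
    print(tree)   # the internal representation: a phrase-structure tree
```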

The system also needs a semantic theory to guide comprehension. The interpretation capabilities of a language-understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade-offs in their suitability as the basis of computer-automated semantic interpretation. [28] These range from naive semantics or stochastic semantic analysis to the use of pragmatics to derive meaning from context. [29] [30] [31] Semantic parsers convert natural-language texts into formal meaning representations. [32]
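
As an illustration only, the sketch below hard-codes a single rule that maps one sentence pattern to a predicate-logic-style meaning representation; practical semantic parsers are typically learned from data and cover far more constructions.

```python
# Toy rule-based semantic parser: maps one hypothetical sentence pattern to a
# formal meaning representation. Real semantic parsers are far more general.
import re

def semantic_parse(sentence: str) -> str:
    m = re.match(r"(\w+) is the capital of (\w+)", sentence.lower())
    if m:
        city, country = m.groups()
        return f"capital_of({city}, {country})"
    raise ValueError("no rule matches this sentence")

print(semantic_parse("Paris is the capital of France"))  # capital_of(paris, france)
```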

Advanced applications of natural-language understanding also attempt to incorporate logical inference within their framework. This is generally achieved by mapping the derived meaning into a set of assertions in predicate logic, then using logical deduction to arrive at conclusions. Therefore, systems based on functional languages such as Lisp need to include a subsystem to represent logical assertions, while logic-oriented systems such as those using the language Prolog generally rely on an extension of the built-in logical representation framework. [33] [34]
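
A minimal sketch of that inference step is shown below, in Python rather than Lisp or Prolog: meaning representations are stored as predicate-logic-style assertions, and a single, hypothetical forward-chaining rule derives new conclusions from them.

```python
# Minimal sketch of logical inference over derived meaning: facts are
# predicate-logic-style assertions, and one hypothetical rule
# (capital_of(X, Y) -> located_in(X, Y)) is applied by forward chaining.
facts = {
    ("capital_of", "paris", "france"),
    ("capital_of", "berlin", "germany"),
}

def forward_chain(facts):
    derived = set(facts)
    for pred, x, y in facts:
        if pred == "capital_of":
            derived.add(("located_in", x, y))   # apply the rule
    return derived

print(("located_in", "paris", "france") in forward_chain(facts))  # True
```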

The management of context in natural-language understanding can present special challenges. A large variety of examples and counterexamples has resulted in multiple approaches to the formal modeling of context, each with specific strengths and weaknesses. [35] [36]

Notes

  1. Semaan, P. (2012). Natural Language Generation: An Overview. Journal of Computer Science & Research (JCSCR)-ISSN, 50-57
  2. Roman V. Yampolskiy. Turing Test as a Defining Feature of AI-Completeness . In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) --In the footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3-17. (Chapter 1). Springer, London. 2013. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf
  3. Van Harmelen, Frank, Vladimir Lifschitz, and Bruce Porter, eds. Handbook of knowledge representation. Vol. 1. Elsevier, 2008.
  4. Macherey, Klaus, Franz Josef Och, and Hermann Ney. "Natural language understanding using statistical machine translation." Seventh European Conference on Speech Communication and Technology. 2001.
  5. Hirschman, Lynette, and Robert Gaizauskas. "Natural language question answering: the view from here." natural language engineering 7.4 (2001): 275-300.
  6. American Association for Artificial Intelligence Brief History of AI
  7. Daniel Bobrow's PhD Thesis Natural Language Input for a Computer Problem Solving System.
  8. Machines Who Think by Pamela McCorduck 2004 ISBN 1-56881-205-1 page 286
  9. Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach, Prentice Hall, ISBN 0-13-790395-2, http://aima.cs.berkeley.edu/, p. 19
  10. Computer Science Logo Style: Beyond Programming by Brian Harvey 1997 ISBN 0-262-58150-7 page 278
  11. Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company. ISBN 0-7167-0463-3, pages 188-189
  12. Roger Schank, 1969, A conceptual dependency parser for natural language Proceedings of the 1969 conference on Computational linguistics, Sång-Säby, Sweden, pages 1-3
  13. Woods, William A (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591–606
  14. Artificial Intelligence: Critical Concepts, Volume 1 by Ronald Chrisley, Sander Begeer 2000 ISBN 0-415-19332-X page 89
  15. Terry Winograd's SHRDLU page at Stanford SHRDLU
  16. Winograd, Terry (1983), Language as a Cognitive Process, Addison–Wesley, Reading, MA.
  17. Larry R. Harris, Research at the Artificial Intelligence corp. ACM SIGART Bulletin, issue 79, January 1982
  18. Inside Case-Based Reasoning by Christopher K. Riesbeck, Roger C. Schank 1989 ISBN 0-89859-767-6 page xiii
  19. In Depth Understanding: A Model of Integrated Process for Narrative Comprehension. Michael G. Dyer. MIT Press. ISBN 0-262-04073-5
  20. Searle, John (23 February 2011). "Watson Doesn't Know It Won on 'Jeopardy!'". Wall Street Journal.
  21. Brandon, John (2016-07-12). "What Natural Language Understanding tech means for chatbots". VentureBeat. Retrieved 2024-02-29.
  22. An approach to hierarchical email categorization by Peifeng Li et al. in Natural Language Processing and Information Systems edited by Zoubida Kedad, Nadira Lammari 2007 ISBN 3-540-73350-7
  23. InfoWorld, Nov 13, 1989, page 144
  24. InfoWorld, April 19, 1984, page 71
  25. Building Working Models of Full Natural-Language Understanding in Limited Pragmatic Domains by James Mason 2010
  26. Mining the Web: Discovering Knowledge from Hypertext Data by Soumen Chakrabarti 2002 ISBN 1-55860-754-4 page 289
  27. G. A. Miller, R. Beckwith, C. D. Fellbaum, D. Gross, K. Miller. 1990. WordNet: An online lexical database. Int. J. Lexicograph. 3, 4, pp. 235-244.
  28. Using Computers in Linguistics: A Practical Guide by John Lawler, Helen Aristar Dry 1998 ISBN 0-415-16792-2 page 209
  29. Naive Semantics for Natural Language Understanding by Kathleen Dahlgren 1988 ISBN 0-89838-287-4
  30. Stochastically-Based Semantic Analysis by Wolfgang Minker, Alex Waibel, Joseph Mariani 1999 ISBN 0-7923-8571-3
  31. Pragmatics and Natural Language Understanding by Georgia M. Green 1996 ISBN 0-8058-2166-X
  32. Wong, Yuk Wah, and Raymond J. Mooney. "Learning for semantic parsing with statistical machine translation." Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics, 2006.
  33. Natural Language Processing for Prolog Programmers by M. Covington, 1994 ISBN 0-13-629478-2
  34. Natural Language Processing in Prolog by Gerald Gazdar, Christopher S. Mellish 1989 ISBN 0-201-18053-7
  35. Understanding Language Understanding by Ashwin Ram, Kenneth Moorman 1999 ISBN 0-262-18192-4 page 111
  36. Formal Aspects of Context by Pierre Bonzon et al. 2000 ISBN 0-7923-6350-7
  37. Programming with Natural Language Is Actually Going to Work—Wolfram Blog
  38. Van Valin, Jr, Robert D. "From NLP to NLU" (PDF).
  39. Ball, John. "multi-lingual NLU by Pat Inc". Pat.ai.

Related Research Articles

Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.

Natural language processing (NLP) is an interdisciplinary subfield of computer science and linguistics. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

Terry Allen Winograd is an American professor of computer science at Stanford University, and co-director of the Stanford Human–Computer Interaction Group. He is known within the philosophy of mind and artificial intelligence fields for his work on natural language using the SHRDLU program.

In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s.

Computational semiotics is an interdisciplinary field that applies, conducts, and draws on research in logic, mathematics, the theory and practice of computation, formal and natural language studies, the cognitive sciences generally, and semiotics proper. The term encompasses both the application of semiotics to computer hardware and software design and, conversely, the use of computation for performing semiotic analysis. The former focuses on what semiotics can bring to computation; the latter on what computation can bring to semiotics.

Roger Carl Schank was an American artificial intelligence theorist, cognitive psychologist, learning scientist, educational reformer, and entrepreneur. Beginning in the late 1960s, he pioneered conceptual dependency theory and case-based reasoning, both of which challenged cognitivist views of memory and reasoning. He began his career teaching at Yale University and Stanford University. In 1989, Schank was granted $30 million in a ten-year commitment to his research and development by Andersen Consulting, through which he founded the Institute for the Learning Sciences (ILS) at Northwestern University in Chicago.

Frame semantics is a theory of linguistic meaning developed by Charles J. Fillmore that extends his earlier case grammar. It relates linguistic semantics to encyclopedic knowledge. The basic idea is that one cannot understand the meaning of a single word without access to all the essential knowledge that relates to that word. For example, one would not be able to understand the word "sell" without knowing anything about the situation of commercial transfer, which also involves, among other things, a seller, a buyer, goods, money, the relation between the money and the goods, the relations between the seller and the goods and the money, the relation between the buyer and the goods and the money and so on. Thus, a word activates, or evokes, a frame of semantic knowledge relating to the specific concept to which it refers.

Jerry R. Hobbs is an American researcher in the fields of computational linguistics, discourse analysis, and artificial intelligence.

Computational semantics is the study of how to automate the process of constructing and reasoning with meaning representations of natural language expressions. It consequently plays an important role in natural-language processing and computational linguistics.

In linguistics, statistical semantics applies the methods of statistics to the problem of determining the meaning of words or phrases, ideally through unsupervised learning, to a degree of precision at least sufficient for the purpose of information retrieval.

Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.

Natural-language user interface is a type of computer human interface where linguistic phenomena such as verbs, phrases and clauses act as UI controls for creating, selecting and modifying data in software applications.

Margaret Masterman was a British linguist and philosopher, most known for her pioneering work in the field of computational linguistics and especially machine translation. She founded the Cambridge Language Research Unit.

Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems.

The history of natural language processing describes the advances of natural language processing. There is some overlap with the history of machine translation, the history of speech recognition, and the history of artificial intelligence.

William Aaron Woods, generally known as Bill Woods, is a researcher in natural language processing, continuous speech understanding, knowledge representation, and knowledge-based search technology. He is currently a Software Engineer at Google.

Text, Speech and Dialogue (TSD) is an annual conference involving topics on natural language processing and computational linguistics. The meeting is held every September alternating in Brno and Plzeň, Czech Republic.

The following outline is provided as an overview of and topical guide to natural-language processing:

Gregory Grefenstette is a French–American researcher and professor in computer science, in particular artificial intelligence and natural language processing. As of 2020, he is the chief scientific officer at Biggerpan, a company developing a predictive contextual engine for the mobile web. Grefenstette is also a senior associate researcher at the Florida Institute for Human and Machine Cognition (IHMC).

Semantic parsing is the task of converting a natural language utterance to a logical form: a machine-understandable representation of its meaning. Semantic parsing can thus be understood as extracting the precise meaning of an utterance. Applications of semantic parsing include machine translation, question answering, ontology induction, automated reasoning, and code generation. The phrase was first used in the 1970s by Yorick Wilks as the basis for machine translation programs working with only semantic representations. Semantic parsing is one of the important tasks in computational linguistics and natural language processing.