Jerry Hobbs

Born: January 25, 1942
Nationality: American
Alma mater: New York University
Awards: Association for Computational Linguistics Lifetime Achievement Award (2013)
Fields: Computer science
Institutions: SRI International Artificial Intelligence Center
Doctoral advisors: Jacob T. Schwartz, Naomi Sager

Jerry R. Hobbs (born January 25, 1942) is an American researcher in the fields of computational linguistics, discourse analysis, and artificial intelligence.

Education

Hobbs earned his doctorate in computer science from New York University in 1974 and has taught at Yale University and the City University of New York.[citation needed]

Career

From 1977 to 2002 he was with the Artificial Intelligence Center at SRI International, Menlo Park, California, where he was a principal scientist and program director of the Natural Language Program. He has written numerous papers in the areas of parsing, syntax, semantic interpretation, information extraction, knowledge representation, encoding commonsense knowledge, discourse analysis, the structure of conversation, and the Semantic Web.[1]

He is the author of the book Literature and Cognition and editor of the book Formal Theories of the Commonsense World. He led SRI's text-understanding research, directing the development of the abduction-based TACITUS system for text understanding and the FASTUS system for rapid extraction of information from text based on finite-state automata. The latter system became the basis for an SRI spinoff, Discern Communications. In September 2002 he took a position as senior computer scientist and research professor at the Information Sciences Institute, University of Southern California. He has also been a consulting professor with the Linguistics Department and the Symbolic Systems Program at Stanford University.

He has served as general editor of the Ablex Series on Artificial Intelligence. He is a past president of the Association for Computational Linguistics and a Fellow of the American Association for Artificial Intelligence. In January 2003 he received an honorary doctorate of philosophy from Uppsala University, Sweden. In August 2013 he received the Association for Computational Linguistics Lifetime Achievement Award.[2]

Works

  1. Literature and Cognition (Lecture Notes, Center for the Study of Language and Information, Jul 9, 1990)
  2. Local Pragmatics (Technical note, SRI International, 1987)
  3. Commonsense Metaphysics and Lexical Semantics (Technical note, SRI International, 1986)
  4. An Algorithm for Generating Quantifier Scopings (Report, Center for the Study of Language and Information, 1986)
  5. Formal Theories of the Commonsense World (Ablex Series in Artificial Intelligence, Vol. 1, Jun 1985, with Robert C. Moore)
  6. On the Coherence and Structure of Discourse (Report, 1985)
  7. The Coherence of Incoherent Discourse (Report, 1985)
  8. Making Computational Sense of Montague's Intensional Logic (Courant computer science report, 1976)
  9. A Metalanguage for Expressing Grammatical Restrictions in Nodal Spans Parsing of Natural Language (Courant computer science report, 1974)

References

  1. "Dr Jerry R Hobbs". SRI International Artificial Intelligence Center. Retrieved February 5, 2012.
  2. Lau, Kary (August 23, 2013). "Jerry Hobbs receives ACL Lifetime Achievement Award". Information Sciences Institute, USC Viterbi School of Engineering. Retrieved September 8, 2013.