Chris Welty

[Photo: Chris Welty in Innsbruck, 2007]
Born: Chris A. Welty
Nationality: American
Alma mater: Rensselaer Polytechnic Institute
Fields: Computer Science
Institutions: Vassar College

Christopher A. Welty is an American computer scientist who works at Google Research in New York. He is best known for his work on ontologies, the Semantic Web, and IBM's Watson. While on sabbatical from Vassar College from 1999 to 2000, he collaborated with Nicola Guarino on OntoClean.[1] He was co-chair of the W3C Rule Interchange Format working group from 2005 to 2009.[2]


Background and education

Welty is a graduate of Rensselaer Polytechnic Institute (RPI), where he worked for the Free Software Foundation on versions 16–18 of GNU Emacs as well as on the formation of NYSERNet during the emergence of the Internet. This synergy of interests made him an early public figure in AI: he moderated the "NL-KR Digest" and the corresponding comp.ai.nlang-know-rep newsgroup (now defunct), which was at the time the widest vehicle for announcements and moderated discussion in the natural-language and knowledge-representation communities. He later became editor in chief of ACM's intelligence magazine (the lowercase title is deliberate),[3] which was published in place of the SIGART Bulletin from 1999 to 2001.

Welty made his first scientific contributions in the early 1990s, when he emerged as a leading figure in the Automated Software Engineering community; the community's online bibliography lists his 1995 paper, from the year he finished his PhD, as one of the best papers of that year.[4] In successive years he went on to serve as the conference's program chair, general chair, and steering committee chair.

His PhD work[5] focused on extending the work of Prem Devanbu at AT&T on LaSSIE[6] with a better-developed ontology. After his PhD, he moved to Vassar College, where his work shifted away from software engineering and towards ontology. In 1998, he published seminal work on the analysis of subjects in library information systems, dispelling the then widely held myth (which is now resurfacing) that subject taxonomies are ontologies.[7]

OntoClean

During 1999–2000, while on sabbatical from Vassar College in Padova, Italy, he formed a productive collaboration with Nicola Guarino to develop OntoClean,[1] a notable and widely recognized contribution in artificial intelligence, specifically in ontologies. According to Thomson ISI, the work on OntoClean was the most cited of academic papers on ontology.[8] OntoClean was important as the first formal methodology for ontology engineering, applying scientific principles to a field whose practice had mostly been an art.
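
OntoClean works by tagging each class in a taxonomy with formal metaproperties such as rigidity, and then checking every subsumption link against constraints on those tags; for example, an anti-rigid class (one whose instances can cease to be instances, such as Student) must never subsume a rigid class (one whose instances are necessarily instances, such as Person). The following is a minimal sketch of that single rigidity check, using illustrative class names and a reduced two-valued rigidity tag rather than the full metaproperty set of the published methodology:

```python
# Minimal sketch of OntoClean's rigidity constraint: an anti-rigid class
# must not subsume (be a superclass of) a rigid class. The class names
# and the two-valued rigidity model are illustrative only.

RIGID, ANTI_RIGID = "+R", "~R"

# Each class is tagged with a rigidity metaproperty.
rigidity = {
    "Person": RIGID,        # every person is necessarily a person
    "Student": ANTI_RIGID,  # a student can cease to be a student
}

# Subsumption links as (subclass, superclass) pairs.
subsumptions = [
    ("Student", "Person"),  # fine: anti-rigid under rigid
    ("Person", "Student"),  # violation: rigid under anti-rigid
]

def check_rigidity(subsumptions, rigidity):
    """Report links where an anti-rigid class subsumes a rigid one."""
    return [(sub, sup) for sub, sup in subsumptions
            if rigidity[sup] == ANTI_RIGID and rigidity[sub] == RIGID]

for sub, sup in check_rigidity(subsumptions, rigidity):
    print(f"OntoClean violation: rigid '{sub}' subsumed by anti-rigid '{sup}'")
```

The full methodology adds further metaproperties (identity, unity, dependence) with analogous constraints, but the pattern is the same: tag the classes, then mechanically audit the taxonomy's subsumption links.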

Semantic Web

Although he was an active participant in the Semantic Web movement from the start, it was only after moving to IBM Research that he formally joined the W3C Web Ontology Language working group, as a co-editor of the OWL Guide.[9]

From 2004 to 2005, at the end of the OWL WG, Welty led the Ontology Engineering and Patterns efforts in the Semantic Web Best Practices WG, helping to edit several important notes on using OWL, as well as the first W3C ontologies for part-whole relations and for time.
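
The part-whole work treats parthood as a transitive relation, so that from piston-part-of-engine and engine-part-of-car a reasoner can conclude piston-part-of-car; the W3C note expresses this as a transitive object property in OWL, which a standard reasoner handles automatically. Below is a small illustrative sketch of the same inference computed directly over asserted part-of pairs, with hypothetical instance names:

```python
# Sketch of the inference licensed by a transitive part-of relation:
# derive implied part-of facts from explicitly stated ones.
# Instance names are hypothetical.

part_of = {
    ("piston", "engine"),
    ("engine", "car"),
    ("wheel", "car"),
}

def transitive_closure(pairs):
    """Repeatedly add (a, d) whenever (a, b) and (b, d) are present."""
    closure = set(pairs)
    changed = True
    while changed:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        changed = not new <= closure
        closure |= new
    return closure

assert ("piston", "car") in transitive_closure(part_of)  # inferred, not stated
```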

From 2005 to 2009, he was co-chair of the W3C Rule Interchange Format (RIF) working group.

In 2007, he gave a keynote talk at the 6th International Semantic Web Conference in Busan, South Korea.

Watson

Welty was one of the developers of Watson, the IBM computer that defeated the best players on the American game show Jeopardy!. He is identified as a member of the "Core Algorithms Team"[10] and has said he is one of the 12 original members of the Watson team.[11] He appeared on the televised broadcasts of the show several times, commenting on the scientific aspects of the challenge and accomplishment, and was interviewed in numerous news broadcasts and publications.[12][13] He hosted the "viewing party" at RPI's EMPAC on all three nights the show aired (14–16 February 2011).[14] He also gave the keynote talk on Watson at the Trentino-region launch of the Semantic Valley initiative.[15]


References

  1. Guarino, N.; Welty, C. (2002). "Evaluating ontological decisions with OntoClean". Communications of the ACM. 45 (2): 61–65. CiteSeerX 10.1.1.11.5832. doi:10.1145/503124.503150. S2CID 12776184.
  2. "Chris Welty". Google Scholar.
  3. Anne P. Wilson. "ACM Announces intelligence Magazine". Archived from the original on 2004-06-02. Retrieved 2007-04-12.
  4. "ASE Conferences Best Papers". Archived from the original on 2011-09-30. Retrieved 2007-04-12.
  5. Chris Welty. "An Integrated Representation for Software Development and Discovery". Archived from the original on 28 April 2007.
  6. Prem Devanbu; Ron Brachman; Peter Selfridge (1991). "LaSSIE: a knowledge-based software information system". Communications of the ACM. 34 (5): 34–49. doi:10.1145/103167.103172. S2CID 60829337.
  7. Chris Welty; Jessica Jenkins. "An ontology for Subject" (PDF). Archived from the original (PDF) on 25 March 2007.
  8. Thomson ISI. "Emerging Research Fronts: Ontologies".
  9. Michael K. Smith; Chris Welty; Deborah McGuinness. "OWL Web Ontology Language Guide".
  10. "IBM Watson Homepage".
  11. Chris Welty. "RCOS Spring 2011 - Chris Welty talks about IBM Watson at RPI". YouTube. Archived from the original on 2021-12-15.
  12. Casey Johnston (16 February 2011). "Creators: Watson has no speed advantage as it crushes humans in Jeopardy".
  13. "Watson vs. Jeopardy! Champs & the Trek Connection". 14 February 2011.
  14. "Jeopardy! The IBM Challenge at RPI". Archived from the original on 2011-02-12. Retrieved 2011-05-08.
  15. "Semantic Valley". Archived from the original on 2020-09-18. Retrieved 2011-05-08.