Knowledge collection from volunteer contributors

Knowledge collection from volunteer contributors (KCVC) is a subfield of knowledge acquisition within artificial intelligence that attempts to drive down the cost of acquiring the knowledge required to support automated reasoning by having members of the public enter knowledge in computer-processable form over the internet. KCVC might be regarded as similar in spirit to Wikipedia, although the intended audience, artificial intelligence systems, differs.

History

The 2005 AAAI Spring Symposium on Knowledge Collection from Volunteer Contributors (KCVC05) may have been the first research meeting on this topic.[1]

The first large-scale KCVC project was probably the Open Mind Common Sense (OMCS) project, initiated by Push Singh and Marvin Minsky at the MIT Media Lab. In this project, volunteers entered words or simple sentences in English in response to prompts or images. Although the resulting knowledge is not formally represented, it is provided to researchers with parses and other meta-information intended to increase its utility. The group later released ConceptNet, which embedded the knowledge contained in the Open Mind Common Sense database in a semantic network.
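The semantic-network form that ConceptNet uses can be pictured as a set of (concept, relation, concept) triples. The following minimal Python sketch stores a few such assertions and indexes them for one-hop lookup; the relation labels IsA, UsedFor, and CapableOf are genuine ConceptNet relations, but the data structure and helper function are illustrative assumptions, not ConceptNet's actual implementation.

```python
# Minimal sketch of a ConceptNet-style semantic network: commonsense
# assertions stored as (start concept, relation, end concept) triples.
# The relation labels IsA/UsedFor/CapableOf appear in ConceptNet itself;
# the storage layout and query helper here are illustrative only.
from collections import defaultdict

assertions = [
    ("dog", "IsA", "pet"),
    ("fork", "UsedFor", "eating"),
    ("dog", "CapableOf", "barking"),
]

# Index outgoing edges by concept for simple one-hop traversal.
edges = defaultdict(list)
for start, relation, end in assertions:
    edges[start].append((relation, end))

def related(concept):
    """Return all (relation, concept) pairs reachable in one hop."""
    return edges[concept]

print(related("dog"))  # [('IsA', 'pet'), ('CapableOf', 'barking')]
```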

In late 2005, Cycorp released a KCVC system called FACTory,[2] which attempts to acquire knowledge in a form directly usable for automated reasoning. It automatically generates English questions from an underlying predicate-calculus representation of candidate assertions, which are produced by automated reading of web pages, by reviewing information previously entered directly in logical form, and by analogy and abduction.
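The core interaction pattern here, rendering a candidate logical assertion as an English yes/no question and tallying volunteer answers, can be sketched in a few lines of Python. Cycorp's actual templates, predicates, and acceptance policy are not described here, so everything below, from the template strings to the vote threshold, is an illustrative assumption rather than FACTory's implementation.

```python
# Hedged sketch of a FACTory-style KCVC review loop: render a candidate
# logical assertion as an English yes/no question, then accept it once
# enough volunteers agree. Templates, predicates, and the acceptance
# policy are invented for illustration.
TEMPLATES = {
    "isa": "Is it true that a {0} is a kind of {1}?",
    "capableOf": "Is it true that a {0} can {1}?",
}

def render_question(predicate, *args):
    """Turn an assertion like ('isa', 'cat', 'mammal') into English."""
    return TEMPLATES[predicate].format(*args)

def accept(votes, threshold=0.8, minimum=5):
    """Accept an assertion once enough volunteers agree (illustrative
    policy: at least `minimum` votes, with a `threshold` fraction of yes)."""
    yes = sum(votes)
    return len(votes) >= minimum and yes / len(votes) >= threshold

print(render_question("isa", "cat", "mammal"))
print(accept([1, 1, 1, 0, 1, 1]))  # True: 6 votes cast, 5/6 clears 0.8
```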

Related Research Articles

Artificial intelligence (AI) is intelligence - perceiving, synthesizing, and inferring information - demonstrated by machines, as opposed to intelligence displayed by animals and humans. Example tasks in which this is done include speech recognition, computer vision, translation between natural languages, and other mappings of inputs. The Oxford English Dictionary (Oxford University Press) defines artificial intelligence as:

the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Cyc

Cyc is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted. This is contrasted with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables semantic reasoners to perform human-like reasoning and be less "brittle" when confronted with novel situations.

Douglas Lenat

Douglas Bruce Lenat is the CEO of Cycorp, Inc. of Austin, Texas, and has been a prominent researcher in artificial intelligence; he was awarded the biennial IJCAI Computers and Thought Award in 1976 for creating the machine learning program AM. He has worked on machine learning, knowledge representation, "cognitive economy", blackboard systems, and what he dubbed in 1984 "ontological engineering". He has also worked on military simulations and numerous projects for US government, military, intelligence, and scientific organizations. In 1980, he published a critique of conventional random-mutation Darwinism. He authored a series of articles in the journal Artificial Intelligence exploring the nature of heuristic rules.

Open Mind Common Sense (OMCS) is an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. It was active from 1999 to 2016.

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic, and search. Symbolic AI used tools such as logic programming, production rules, semantic nets, and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
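To make one of these tools concrete, the sketch below implements a toy forward-chaining production system in Python: rules fire when their conditions are present in working memory, adding conclusions until nothing new can be derived. The rules themselves are invented for illustration and do not come from any particular system.

```python
# Minimal sketch of a forward-chaining production system: each rule is
# (conditions, conclusion); a rule fires when all its conditions are in
# working memory, adding the conclusion. Rules here are invented examples.
RULES = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Fire applicable rules repeatedly until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"bird(tweety)"}, RULES))
# {'bird(tweety)', 'has_wings(tweety)', 'can_fly(tweety)'}
```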

In artificial intelligence (AI), commonsense reasoning is a human-like ability to make presumptions about the type and essence of ordinary situations humans encounter every day. These assumptions include judgments about the nature of physical objects, taxonomic properties, and people's intentions. A device that exhibits commonsense reasoning might be capable of drawing conclusions that are similar to humans' folk psychology and naive physics.

In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. Acquiring and representing it remains an unsolved problem in artificial general intelligence. The first AI program to address commonsense knowledge was John McCarthy's Advice Taker, proposed in 1959.

Alan Bundy

Alan Richard Bundy is a professor at the School of Informatics at the University of Edinburgh, known for his contributions to automated reasoning, especially to proof planning, the use of meta-level reasoning to guide proof search.

The Outline of artificial intelligence provides an overview of and topical guide to the field of artificial intelligence.

Hector Joseph Levesque is a Canadian academic and researcher in artificial intelligence.

The LIDA cognitive architecture is an integrated artificial cognitive system that attempts to model a broad spectrum of cognition in biological systems, from low-level perception/action to high-level reasoning. Developed primarily by Stan Franklin and colleagues at the University of Memphis, the LIDA architecture is empirically grounded in cognitive science and cognitive neuroscience. In addition to providing hypotheses to guide further research, the architecture can support control structures for software agents and robots. Providing plausible explanations for many cognitive processes, the LIDA conceptual model is also intended as a tool with which to think about how minds work.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology: it should not be confused with computer ethics, which focuses on human use of computers, and it should be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

Action model learning

Action model learning is an area of machine learning concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as input for automated planners.
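A minimal sketch of what such an action model looks like, in a STRIPS-like style, appears below: each action carries preconditions plus add and delete effects over a set of ground facts. The class and the predicate names are illustrative assumptions, not any particular planner's input language.

```python
# Sketch of a STRIPS-style action model: an action is applicable when its
# preconditions hold in the current state; applying it deletes some facts
# and adds others. Names and predicates are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

    def applicable(self, state):
        """True when every precondition is present in the state."""
        return self.preconditions <= state

    def apply(self, state):
        """Return the successor state (delete, then add)."""
        return (state - self.delete_effects) | self.add_effects

pick_up = Action(
    name="pick-up(block)",
    preconditions=frozenset({"on-table(block)", "hand-empty"}),
    add_effects=frozenset({"holding(block)"}),
    delete_effects=frozenset({"on-table(block)", "hand-empty"}),
)

state = frozenset({"on-table(block)", "hand-empty"})
if pick_up.applicable(state):
    print(pick_up.apply(state))  # frozenset({'holding(block)'})
```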

The Winograd schema challenge (WSC) is a test of machine intelligence proposed by Hector Levesque, a computer scientist at the University of Toronto. Designed to be an improvement on the Turing test, it is a multiple-choice test that employs questions of a very specific structure: they are instances of what are called Winograd schemas, named after Terry Winograd, professor of computer science at Stanford University. In a canonical example, "The trophy would not fit in the suitcase because it was too big", the test asks whether "it" refers to the trophy or the suitcase; swapping "big" for "small" flips the answer, so resolving the pronoun requires commonsense knowledge rather than surface statistics.

GOFAI is an acronym for "Good Old-Fashioned Artificial Intelligence", coined by the philosopher John Haugeland in his 1985 book, Artificial Intelligence: The Very Idea. Technically, GOFAI refers only to a restricted kind of symbolic AI, namely rule-based or logical agents. This approach was popular in the 1980s, especially as an approach to implementing expert systems, but symbolic AI has since been extended in many ways to better handle uncertain reasoning and more open-ended systems. Some of these extensions include probabilistic reasoning, non-monotonic reasoning, multi-agent systems, and neuro-symbolic systems. Significant contributions of symbolic AI, not encompassed by the GOFAI view, include search algorithms; automated planning and scheduling; constraint-based reasoning; the semantic web; ontologies; knowledge graphs; non-monotonic logic; circumscription; automated theorem proving; and symbolic mathematics. For a more complete list, see the main article on symbolic AI.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

Sheila Ann McIlraith is a Canadian computer scientist whose research topics include artificial intelligence and the Semantic Web. She is a professor of computer science at the University of Toronto.

Michael Genesereth is an American logician and computer scientist, best known for his work on computational logic and applications of that work in enterprise management, computational law, and general game playing. Genesereth is a professor in the Computer Science Department at Stanford University and a professor by courtesy in the Stanford Law School. His 1987 textbook Logical Foundations of Artificial Intelligence remains one of the key references on symbolic artificial intelligence. He is the author of the influential Game Description Language (GDL) and Knowledge Interchange Format (KIF), the latter of which led to the ISO Common Logic standard.

Neuro-symbolic AI integrates neural and symbolic AI architectures to address the complementary strengths and weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling. As argued by Valiant and many others, the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient machine learning models. Gary Marcus argues: "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning." Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol-manipulation in our toolkit. Too much of useful knowledge is abstract to make do without tools that represent and manipulate abstraction, and to date, the only machinery that we know of that can manipulate such abstract knowledge reliably is the apparatus of symbol-manipulation."

References