Knowledge representation and reasoning

Knowledge representation and reasoning (KR, KR², KR&R) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology [1] about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. [2]

In the field of computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. More specifically, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".

Computer-aided diagnosis

Computer-aided detection (CADe) and computer-aided diagnosis (CADx) are systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images for typical appearances and highlight conspicuous sections, such as possible diseases, in order to offer input that supports the decision taken by the professional.

Logic: the study of inference and demonstration

Logic is the systematic study of the forms of valid inference and the most general laws of truth. A valid inference is one in which there is a specific relation of logical support between the assumptions of the inference and its conclusion. In ordinary discourse, inferences may be signified by words such as therefore, hence, ergo, and so on.

Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.

Semantic network: a knowledge representation scheme that uses a directed graph to encode knowledge

A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. It is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields.
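
The idea is easy to make concrete. The following is a minimal sketch in Python of a semantic network as a directed, labeled graph in which is-a edges are followed transitively; the concepts, relation names, and helper functions are invented for illustration rather than taken from any particular system.

    # A minimal sketch of a semantic network as a directed, labeled graph.
    # The concepts, relation names, and helpers are illustrative, not drawn
    # from any particular KR system.

    edges = [
        ("Canary", "is-a", "Bird"),
        ("Bird", "is-a", "Animal"),
        ("Bird", "has-part", "Wing"),
        ("Canary", "color", "Yellow"),
    ]

    def related(concept, relation):
        """Concepts reachable from `concept` by a single `relation` edge."""
        return [o for (s, r, o) in edges if s == concept and r == relation]

    def isa_closure(concept):
        """Follow is-a edges transitively, the classic semantic-net inference."""
        found, frontier = [], [concept]
        while frontier:
            for parent in related(frontier.pop(), "is-a"):
                if parent not in found:
                    found.append(parent)
                    frontier.append(parent)
        return found

    print(isa_closure("Canary"))  # ['Bird', 'Animal']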

A system architecture or systems architecture is the conceptual model that defines the structure, behavior, and other views of a system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about its structures and behaviors.

Frames were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge." A frame is an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations." Frames are the primary data structure used in artificial intelligence frame languages.

The KR conference series was established to share ideas and progress on this challenging field. [3]

History

The earliest work in computerized knowledge representation was focused on general problem solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959. These systems featured data structures for planning and decomposition. The system would begin with a goal, decompose that goal into subgoals, and then set out to construct strategies that could accomplish each subgoal.

General Problem Solver (GPS) is a computer program created in 1959 by Herbert A. Simon, J. C. Shaw, and Allen Newell, intended to work as a universal problem-solver machine. Any problem that can be expressed as a set of well-formed formulas (WFFs) or Horn clauses, and that constitutes a directed graph with one or more sources and sinks, can in principle be solved by GPS. Proofs in the predicate logic and Euclidean geometry problem spaces are prime examples of the domain of applicability of GPS. It was based on Simon and Newell's theoretical work on logic machines. GPS was the first computer program that separated its knowledge of problems from its strategy for how to solve them. GPS was implemented in the third-order programming language IPL.

Allen Newell was a researcher in computer science and cognitive psychology at the RAND Corporation and at Carnegie Mellon University’s School of Computer Science, Tepper School of Business, and Department of Psychology. He contributed to the Information Processing Language (1956) and two of the earliest AI programs, the Logic Theory Machine (1956) and the General Problem Solver (1957). He was awarded the ACM's A.M. Turing Award along with Herbert A. Simon in 1975 for their basic contributions to artificial intelligence and the psychology of human cognition.

Herbert A. Simon: American political scientist, economist, sociologist, and psychologist

Herbert Alexander Simon was an American economist and political scientist whose primary interest was decision-making within organizations; he is best known for the theories of "bounded rationality" and "satisficing". He received the Nobel Prize in Economics in 1978 and the Turing Award in 1975. His research was noted for its interdisciplinary nature and spanned the fields of cognitive science, computer science, public administration, management, and political science. He was at Carnegie Mellon University for most of his career, from 1949 to 2001.

In these early days of AI, general search algorithms such as A* were also developed. However, the amorphous problem definitions for systems such as GPS meant that they worked only for very constrained toy domains (e.g. the "blocks world"). In order to tackle non-toy problems, AI researchers such as Ed Feigenbaum and Frederick Hayes-Roth realized that it was necessary to focus systems on more constrained problems.

The blocks world is one of the most famous planning domains in artificial intelligence. Imagine a set of wooden blocks of various shapes and colors sitting on a table. The goal is to build one or more vertical stacks of blocks. The catch is that only one block may be moved at a time: it may either be placed on the table or placed atop another block. Because of this, any blocks that are, at a given time, under another block cannot be moved. Moreover, some kinds of blocks cannot have other blocks stacked on top of them.
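
The domain is small enough to sketch directly. In the toy Python rendering below (the state encoding and block names are our own, not a standard formulation), a state is a list of stacks, only the top block of each stack may move, and a move places a block on the table or atop another clear block.

    # Toy blocks-world sketch: each stack is a list ordered bottom-to-top,
    # and the state is a list of stacks. Encoding and names are illustrative.

    state = [["A", "B"], ["C"]]  # B sits on A; C sits on the table

    def clear_blocks(state):
        """Only the top block of each stack may be moved."""
        return [stack[-1] for stack in state]

    def move(state, block, dest=None):
        """Move a clear block onto the table (dest=None) or onto a clear block."""
        assert block in clear_blocks(state), f"{block} has something on top"
        new = [s[:] for s in state]
        for s in new:
            if s and s[-1] == block:
                s.pop()
        new = [s for s in new if s]  # drop emptied stacks
        if dest is None:
            new.append([block])      # place on the table
        else:
            assert dest in clear_blocks(new), f"{dest} is not clear"
            next(s for s in new if s[-1] == dest).append(block)
        return new

    print(move(state, "B", "C"))  # [['A'], ['C', 'B']]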

Rick Hayes-Roth: professor, founder of Truth Seal Corp., entrepreneur, and AI Fellow

Frederick Hayes-Roth is an American computer scientist and educator. His principal work focuses on using computing processes to winnow data down to only those information items that are valuable to the receiver, on using technology to deliver those items, and on designing IT systems structured for this task.

It was the failure of these efforts that led to the cognitive revolution in psychology and to the phase of AI focused on knowledge representation that resulted in expert systems in the 1970s and 80s, production systems, frame languages, etc. Rather than general problem solvers, AI changed its focus to expert systems that could match human competence on a specific task, such as medical diagnosis.

The cognitive revolution was an intellectual movement that began in the 1950s as an interdisciplinary study of the mind and its processes, which became known collectively as cognitive science. The relevant areas of interchange were between the fields of psychology, anthropology, and linguistics using approaches developed within the then-nascent fields of artificial intelligence, computer science, and neuroscience. A key goal of early cognitive psychology was to apply the scientific method to the study of human cognition by designing experiments that used computational models of artificial intelligence to systematically test theories about human mental processes in a controlled laboratory setting.

A production system is a computer program, typically used to provide some form of artificial intelligence, that consists primarily of a set of rules about behavior together with the mechanism necessary to follow those rules as the system responds to states of the world. Those rules, termed productions, are a basic representation found useful in automated planning, expert systems, and action selection.
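
As a minimal sketch (the facts and productions below are invented for illustration), such a system can be reduced to a working memory of facts plus a recognize-act loop that fires any production whose conditions are all present:

    # Minimal forward-chaining production system: working memory is a set of
    # facts; each production maps a set of conditions to a new assertion.

    facts = {"patient has fever", "patient has rash"}

    productions = [
        ({"patient has fever", "patient has rash"}, "suspect measles"),
        ({"suspect measles"}, "order serology test"),
    ]

    changed = True
    while changed:  # recognize-act cycle: repeat until no production fires
        changed = False
        for conditions, action in productions:
            if conditions <= facts and action not in facts:
                facts.add(action)  # fire the production
                changed = True

    print(sorted(facts))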

A frame language is a technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.

Expert systems gave us the terminology still in use today, in which AI systems are divided into a knowledge base, containing facts about the world and rules, and an inference engine that applies the rules to the knowledge base in order to answer questions and solve problems. In these early systems the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules. [4]

In addition to expert systems, other researchers developed the concept of frame-based languages in the mid-1980s. A frame is similar to an object class: it is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used in systems geared toward human interaction, e.g. understanding natural language and the social settings in which various default expectations, such as the conventions of ordering food in a restaurant, narrow the search space and allow the system to choose appropriate responses to dynamic situations.
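
A toy Python rendering of the idea (not Minsky's formalism; the slot names and values are invented) shows the characteristic behavior: a slot lookup first checks the frame itself, then falls back to defaults inherited from a more general frame.

    # Toy frame with slots, defaults, and inheritance; illustrative only.

    class Frame:
        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, slots

        def get(self, slot):
            """Look up a slot, falling back to inherited default values."""
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:
                return self.parent.get(slot)
            raise KeyError(slot)

    restaurant = Frame("Restaurant", payment="expected", seating="table")
    fast_food = Frame("FastFood", parent=restaurant, seating="counter")

    print(fast_food.get("seating"))  # 'counter'  (local slot overrides)
    print(fast_food.get("payment"))  # 'expected' (default inherited)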

It wasn't long before the frame communities and the rule-based researchers realized that there was synergy between their approaches. Frames were good for representing the real world, described as classes and subclasses with slots (data values) carrying various constraints on possible values. Rules were good for representing and utilizing complex logic, such as the process of making a medical diagnosis. Integrated systems that combined frames and rules were developed. One of the most powerful and well-known was the 1983 Knowledge Engineering Environment (KEE) from IntelliCorp. KEE had a complete rule engine with forward and backward chaining. It also had a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than in AI, it was quickly embraced by AI researchers as well, in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments. [5]
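
Backward chaining, the other half of such an engine, is easy to sketch: to establish a goal, either find it among the known facts or find a rule that concludes it and recursively establish that rule's premises. The rules and facts below are invented for illustration and are in no way KEE code.

    # Minimal backward-chaining sketch; rules map a conclusion to its premises.

    rules = {
        "give antibiotic": ["infection is bacterial"],
        "infection is bacterial": ["culture is positive", "gram stain found"],
    }
    facts = {"culture is positive", "gram stain found"}

    def prove(goal):
        """A goal holds if it is a known fact or some rule's premises all hold."""
        if goal in facts:
            return True
        premises = rules.get(goal)
        return premises is not None and all(prove(p) for p in premises)

    print(prove("give antibiotic"))  # True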

The integration of frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics, spun off from various research projects. At the same time, there was another strain of research that was less commercially focused, driven by mathematical logic and automated theorem proving. One of the most influential languages in this research was the mid-1980s language KL-ONE. KL-ONE was a frame language with a rigorous semantics: formal definitions for concepts such as the Is-A relation. [6] KL-ONE and the languages it influenced, such as Loom, had an automated reasoning engine based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, reclassify a class as a subclass or superclass of some other class when that relation was not formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an ontology). [7]
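
The subsumption computation at the heart of such a classifier can be illustrated with a deliberately tiny sketch: if each concept is defined by a set of necessary properties, then concept A subsumes concept B whenever A's defining properties are a subset of B's, so subclass relations that were never asserted can be inferred. Real description-logic classifiers are far more sophisticated; the concepts below are invented.

    # Toy classifier: infer unasserted is-a relations from definitions alone.

    definitions = {
        "Person": {"animate"},
        "Parent": {"animate", "has-child"},
        "Mother": {"animate", "has-child", "female"},
    }

    def subsumes(a, b):
        """A subsumes B if everything true of A by definition is true of B."""
        return definitions[a] <= definitions[b]

    for a in definitions:
        for b in definitions:
            if a != b and subsumes(a, b):
                print(f"{b} is-a {a}")  # e.g. "Mother is-a Parent"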

Another area of knowledge representation research was the problem of common-sense reasoning. One of the first lessons learned from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent: basic principles of common-sense physics, causality, intentions, and so on. An example is the frame problem: in an event-driven logic, there need to be axioms stating that things maintain position from one moment to the next unless they are moved by some external force. In order to make a true artificial intelligence agent that can converse with humans using natural language and can process basic statements and questions about the world, it is essential to represent this kind of knowledge. One of the most ambitious programs to tackle this problem was Doug Lenat's Cyc project. Cyc established its own frame language and had large numbers of analysts document various areas of common-sense reasoning in that language. The knowledge recorded in Cyc included common-sense models of time, causality, physics, intentions, and many others. [8]

The starting point for knowledge representation is the knowledge representation hypothesis first formalized by Brian C. Smith in 1985: [9]

Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge.

Currently one of the most active areas of knowledge representation research is the work associated with the Semantic Web. The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large ontologies of concepts. Searching for a concept will be more effective than traditional text-only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. Automatic classification gives developers technology to impose order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. The classifier technology provides the ability to deal with the dynamic environment of the Internet.

Recent projects funded primarily by the Defense Advanced Research Projects Agency (DARPA) have integrated frame languages and classifiers with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capability to define classes, subclasses, and properties of objects. The Web Ontology Language (OWL) provides additional levels of semantics and enables integration with classification engines. [10] [11]

Overview

Knowledge representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world and can be used to solve complex problems.

The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems.

For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical.

Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system. [12]

A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is first-order logic (FOL). There is no more powerful formalism than that used by mathematicians to define general propositions about the world. However, FOL has two drawbacks as a knowledge representation formalism: it is hard to use, and it is impractical to implement in full. First-order logic can be intimidating even for many software developers. Languages that do not have the complete formal power of FOL can still provide close to the same expressive power with a user interface that is more practical for the average developer to understand. The issue of practicality of implementation is that FOL is in some ways too expressive: with FOL it is possible to create statements (e.g. quantification over infinite sets) that would cause a system to never terminate if it attempted to verify them.

Thus, a subset of FOL can be both easier to use and more practical to implement. This was a driving motivation behind rule-based expert systems: IF-THEN rules provide a subset of FOL, but a very useful one that is also very intuitive. The history of most of the early AI knowledge representation formalisms, from databases to semantic nets to theorem provers and production systems, can be viewed as a series of design decisions about whether to emphasize expressive power or computability and efficiency. [13]
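
To make the contrast concrete (the predicates here are invented for illustration), a definite (Horn) clause is exactly an IF-THEN rule in logical dress, while full FOL also admits quantifier and negation patterns that have no such reading:

    % A Horn clause and its IF-THEN reading:
    \forall x\,\bigl(\mathit{Bird}(x) \land \mathit{Healthy}(x) \rightarrow \mathit{CanFly}(x)\bigr)
    % i.e. IF Bird(x) AND Healthy(x) THEN CanFly(x).
    % Full FOL also allows sentences outside this fragment, e.g.:
    \forall x\,\exists y\,\mathit{Ancestor}(y, x) \qquad \neg\forall x\,\exists y\,\mathit{Knows}(x, y)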

In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework: [14] a knowledge representation is a surrogate for the thing itself, a set of ontological commitments, a fragmentary theory of intelligent reasoning, a medium for pragmatically efficient computation, and a medium of human expression.

Knowledge representation and reasoning are a key enabling technology for the Semantic Web. Languages based on the frame model with automatic classification provide a layer of semantics on top of the existing Internet. Rather than searching via text strings as is typical today, it will be possible to define logical queries and find pages that map to those queries. [15] The automated reasoning component in these systems is an engine known as the classifier. Classifiers focus on the subsumption relations in a knowledge base rather than on rules. A classifier can infer new classes and dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing and evolving information space of the Internet. [16]

The Semantic Web integrates concepts from knowledge representation and reasoning with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capabilities to define knowledge-based objects on the Internet with features such as Is-A relations and object properties. The Web Ontology Language (OWL) adds additional semantics and integrates with automatic classification reasoners. [17]
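
The snippet below sketches that layering with the Python rdflib library (assumed to be installed; the example.org vocabulary is invented): it asserts a class, an Is-A (subclass) relation, and a property value, roughly the expressive level that RDF and RDF Schema provide before OWL adds richer semantics.

    # Sketch of the RDF class/property layer using rdflib (pip install rdflib).
    # The example.org terms are invented for illustration.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    g.add((EX.Dog, RDF.type, RDFS.Class))        # define a class
    g.add((EX.Dog, RDFS.subClassOf, EX.Animal))  # an Is-A relation
    g.add((EX.fido, RDF.type, EX.Dog))           # an instance of the class
    g.add((EX.fido, EX.name, Literal("Fido")))   # a property value

    print(g.serialize(format="turtle"))          # rdflib 6+ returns a string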

Characteristics

In 1985, Ron Brachman categorized the core issues for knowledge representation as follows: [18] the primitives underlying the representation; meta-representation; incompleteness; definitions and universals versus facts and defaults; non-monotonic reasoning; expressive adequacy; and reasoning efficiency.

Ontology engineering

In the early years of knowledge-based systems the knowledge bases were fairly small. Knowledge bases that were meant to actually solve real problems, rather than do proof-of-concept demonstrations, needed to focus on well-defined problems. So, for example, not just medical diagnosis as a whole topic, but medical diagnosis of certain kinds of diseases.

As knowledge-based technology scaled up, the need for larger knowledge bases, and for modular knowledge bases that could communicate and integrate with each other, became apparent. This gave rise to the discipline of ontology engineering: designing and building large knowledge bases that could be used by multiple projects. One of the leading research projects in this area was the Cyc project. Cyc was an attempt to build a huge encyclopedic knowledge base that would contain not just expert knowledge but common-sense knowledge. In designing an artificial intelligence agent, it was soon realized that representing common-sense knowledge, knowledge that humans simply take for granted, was essential to make an AI that could interact with humans using natural language. Cyc was meant to address this problem. The language the project defined was known as CycL.

After CycL, a number of ontology languages were developed. Most are declarative languages and are either frame languages or based on first-order logic. Modularity, the ability to define boundaries around specific domains and problem spaces, is essential for these languages because, as stated by Tom Gruber, "Every ontology is a treaty, a social agreement among people with common motive in sharing." There are always many competing and differing views that make any general-purpose ontology impossible. A general-purpose ontology would have to be applicable in any domain, and different areas of knowledge would need to be unified. [22]

There is a long history of work attempting to build ontologies for a variety of task domains, e.g., an ontology for liquids, [23] the lumped element model widely used in representing electronic circuits (e.g., [24] ), as well as ontologies for time, belief, and even programming itself. Each of these offers a way to see some part of the world.

The lumped element model, for instance, suggests that we think of circuits in terms of components with connections between them, with signals flowing instantaneously along the connections. This is a useful view, but not the only possible one. A different ontology arises if we need to attend to the electrodynamics in the device: Here signals propagate at finite speed and an object (like a resistor) that was previously viewed as a single component with an I/O behavior may now have to be thought of as an extended medium through which an electromagnetic wave flows.

Ontologies can of course be written down in a wide variety of languages and notations (e.g., logic, LISP, etc.); the essential information is not the form of that language but the content, i.e., the set of concepts offered as a way of thinking about the world. Simply put, the important part is notions like connections and components, not the choice between writing them as predicates or LISP constructs.

The commitment made in selecting one ontology or another can produce a sharply different view of the task at hand. Consider the difference that arises in selecting the lumped element view of a circuit rather than the electrodynamic view of the same device. As a second example, medical diagnosis viewed in terms of rules (e.g., MYCIN) looks substantially different from the same task viewed in terms of frames (e.g., INTERNIST). Where MYCIN sees the medical world as made up of empirical associations connecting symptom to disease, INTERNIST sees a set of prototypes, in particular prototypical diseases, to be matched against the case at hand.

Related Research Articles

Cyc is the world's longest-lived artificial intelligence project, attempting to assemble a comprehensive ontology and knowledge base that spans the basic concepts and "rules of thumb" about how the world works, with the goal of enabling AI applications to perform human-like reasoning and be less "brittle" when confronted with novel situations that were not preconceived.

Expert system: computer system that emulates the decision-making ability of a human expert

In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.

KL-ONE is a well known knowledge representation system in the tradition of semantic networks and frames; that is, it is a frame language. The system is an attempt to overcome semantic indistinctness in semantic network representations and to explicitly represent conceptual information as a structured inheritance network.

In computer science and information science, an ontology encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains.

Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressivity and reasoning complexity by supporting different sets of mathematical constructors.

The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies. Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects. Ontologies resemble class hierarchies in object-oriented programming but there are several critical differences. Class hierarchies are meant to represent structures used in source code that evolve fairly slowly whereas ontologies are meant to represent information on the Internet and are expected to be evolving almost constantly. Similarly, ontologies are typically far more flexible as they are meant to represent information on the Internet coming from all sorts of heterogeneous data sources. Class hierarchies on the other hand are meant to be fairly static and rely on far less diverse and more structured sources of data such as corporate databases.

Ronald Jay "Ron" Brachman is the director of the Jacobs Technion-Cornell Institute at Cornell Tech. Previously, he was the Chief Scientist of Yahoo! and head of Yahoo! Labs. Prior to that, he was the Associate Head of Yahoo! Labs and Head of Worldwide Labs and Research Operations.

Logic in computer science: academic discipline

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: theoretical foundations and analysis; the use of computer technology to aid logicians; and the use of concepts from logic for computer applications.

A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and refers to many different kinds of systems. The one common theme that unites all knowledge-based systems is an attempt to represent knowledge explicitly, together with a reasoning system that allows them to derive new knowledge. Thus, a knowledge-based system has two distinguishing features: a knowledge base and an inference engine.

Loom is a knowledge representation language developed by researchers in the artificial intelligence research group at the University of Southern California's Information Sciences Institute. The leader of the Loom project and primary architect for Loom was Robert MacGregor. The research was primarily sponsored by the Defense Advanced Research Projects Agency (DARPA).

Deborah Louise McGuinness is an American computer scientist and Professor at Rensselaer Polytechnic Institute where she holds an endowed chair in the Tetherless World Research Constellation. She is working in the field of artificial intelligence, specifically in knowledge representation and reasoning, description logics, the semantic web, explanation, and trust.

Knowledge-based engineering (KBE) is the application of knowledge-based systems technology to the domain of manufacturing design and production. The design process is inherently a knowledge-intensive activity, so a great deal of the emphasis for KBE is on the use of knowledge-based technology to support computer-aided design (CAD); however, knowledge-based techniques can be applied to the entire product lifecycle.

In computer science and artificial intelligence, ontology languages are formal languages used to construct ontologies. They allow the encoding of knowledge about specific domains and often include reasoning rules that support the processing of that knowledge. Ontology languages are usually declarative languages, are almost always generalizations of frame languages, and are commonly based on either first-order logic or on description logic.

F-logic is a knowledge representation and ontology language. F-logic combines the advantages of conceptual modeling with object-oriented, frame-based languages and offers a declarative, compact and simple syntax, as well as the well-defined semantics of a logic-based language.

A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including Pei Wang's non-axiomatic reasoning system, and probabilistic logic networks.

In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

Flora-2 is an open source semantic rule-based system for knowledge representation and reasoning. The language of the system is derived from F-logic, HiLog, and Transaction logic. Being based on F-logic and HiLog implies that object-oriented syntax and higher-order representation are the major features of the system. Flora-2 also supports a form of defeasible reasoning called Logic Programming with Defaults and Argumentation Theories (LPDA). Applications include intelligent agents, the Semantic Web, knowledge-base networking, ontology management, integration of information, security policy analysis, automated database normalization, and more.

A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example the names of classes, subclasses, properties, and restrictions on allowable values. In contrast to rule-based inference engines, which fire IF-THEN triggers when their conditions are met, these classifiers seek to mimic human deductive logic.

Rulelog is an expressive semantic rule-based knowledge representation and reasoning (KRR) language. It underlies knowledge representation languages used in systems such as Flora-2, SILK and others. It extends well-founded declarative logic programs with features for higher-order syntax, frame syntax, defeasibility, general quantified expressions both in the bodies of the rules and their heads, user-defined functions, and restraint bounded rationality.

References

  1. Roger Schank; Robert Abelson (1977). Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures. Lawrence Erlbaum Associates, Inc.
  2. "Knowledge Representation in Neural Networks - deepMinds". deepMinds. 2018-08-16. Retrieved 2018-08-16.
  3. "Principles of Knowledge Representation and Reasoning". kr.org. KR Inc. Retrieved 22 November 2017.
  4. Hayes-Roth, Frederick; Donald Waterman; Douglas Lenat (1983). Building Expert Systems. Addison-Wesley. ISBN 978-0-201-10686-2.
  5. Mettrey, William (1987). "An Assessment of Tools for Building Large Knowledge-Based Systems". AI Magazine. 8 (4). Archived from the original on 2013-11-10. Retrieved 2013-12-24.
  6. Brachman, Ron (1978). "A Structural Paradigm for Representing Knowledge" (PDF). Bolt, Beranek and Newman Technical Report (3605).
  7. MacGregor, Robert (June 1991). "Using a description classifier to enhance knowledge representation". IEEE Expert. 6 (3): 41–46. doi:10.1109/64.87683. Retrieved 10 November 2013.
  8. Lenat, Doug; R. V. Guha (January 1990). Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison-Wesley. ISBN 978-0201517521.
  9. Smith, Brian C. (1985). "Prologue to Reflections and Semantics in a Procedural Language". In Ronald Brachman and Hector J. Levesque. Readings in Knowledge Representation. Morgan Kaufmann. pp. 31–40. ISBN 978-0-934613-01-9.
  10. Berners-Lee, Tim; James Hendler; Ora Lassila (May 17, 2001). "The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities". Scientific American. 284 (5): 34–43. doi:10.1038/scientificamerican0501-34. Archived from the original on April 24, 2013.
  11. Knublauch, Holger; Oberle, Daniel; Tetlow, Phil; Wallace, Evan (2006-03-09). "A Semantic Web Primer for Object-Oriented Software Developers". W3C. Retrieved 2008-07-30.
  12. Hayes-Roth, Frederick; Donald Waterman; Douglas Lenat (1983). Building Expert Systems. Addison-Wesley. pp. 6–7. ISBN 978-0-201-10686-2.
  13. Levesque, Hector; Ronald Brachman (1985). "A Fundamental Tradeoff in Knowledge Representation and Reasoning". In Ronald Brachman and Hector J. Levesque. Readings in Knowledge Representation. Morgan Kaufmann. p. 49. ISBN 978-0-934613-01-9. The good news in reducing KR service to theorem proving is that we now have a very clear, very specific notion of what the KR system should do; the bad news is that it is also clear that the services can not be provided... deciding whether or not a sentence in FOL is a theorem... is unsolvable.
  14. Davis, Randall; Howard Shrobe; Peter Szolovits (Spring 1993). "What Is a Knowledge Representation?". AI Magazine. 14 (1): 17–33.
  15. Berners-Lee, Tim; James Hendler; Ora Lassila (May 17, 2001). "The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities". Scientific American. 284 (5): 34–43. doi:10.1038/scientificamerican0501-34. Archived from the original on April 24, 2013.
  16. Macgregor, Robert (August 13, 1999). "Retrospective on Loom". isi.edu. Information Sciences Institute. Archived from the original on 25 October 2013. Retrieved 10 December 2013.
  17. Knublauch, Holger; Oberle, Daniel; Tetlow, Phil; Wallace, Evan (2006-03-09). "A Semantic Web Primer for Object-Oriented Software Developers". W3C. Retrieved 2008-07-30.
  18. Brachman, Ron (1985). "Introduction". In Ronald Brachman and Hector J. Levesque. Readings in Knowledge Representation. Morgan Kaufmann. pp. XVI–XVII. ISBN 978-0-934613-01-9.
  19. Bih, Joseph (2006). "Paradigm Shift: An Introduction to Fuzzy Logic" (PDF). IEEE Potentials. Retrieved 24 December 2013.
  20. Zlatarva, Nellie (1992). "Truth Maintenance Systems and their Application for Verifying Expert System Knowledge Bases". Artificial Intelligence Review. 6: 67–110. doi:10.1007/bf00155580.
  21. Levesque, Hector; Ronald Brachman (1985). "A Fundamental Tradeoff in Knowledge Representation and Reasoning". In Ronald Brachman and Hector J. Levesque. Readings in Knowledge Representation. Morgan Kaufmann. pp. 41–70. ISBN 978-0-934613-01-9.
  22. Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. pp. 437–439. ISBN 0-13-604259-7.
  23. Hayes P, Naive physics I: Ontology for liquids. University of Essex report, 1978, Essex, UK.
  24. Davis R, Shrobe H E, Representing Structure and Behavior of Digital Hardware, IEEE Computer, Special Issue on Knowledge Representation, 16(10):75-82.
