Knowledge level

In artificial intelligence, knowledge-based agents draw on a pool of logical sentences to infer conclusions about the world. At the knowledge level, we need only specify what the agent knows and what its goals are: a logical abstraction separate from the details of implementation.

Allen Newell first introduced the notion of the knowledge level in the early 1980s as a way to rationalize an agent's behavior. The agent takes actions based on the knowledge it possesses, in an attempt to reach specific goals, and it chooses those actions according to the principle of rationality.

Beneath the knowledge level resides the symbol level. Whereas the knowledge level is world oriented, in that it concerns the environment in which the agent operates, the symbol level is system oriented, in that it concerns the mechanisms the agent has available for acting. The knowledge level rationalizes the agent's behavior, while the symbol level mechanizes it.

For example, in a computer program the knowledge level consists of the information contained in its data structures that the program uses to perform its actions; the symbol level consists of the program's algorithms, the data structures themselves, and so on.
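
The sketch below is a minimal illustration of the distinction (the facts, goal, and reachable routine are invented for this example, not taken from Newell). At the knowledge level the program is described by what it knows and wants; the tuple representation and the forward-chaining loop are symbol-level detail.

```python
# Knowledge level: what the agent knows and what it wants, independent of how
# either is encoded. (The facts and goal here are purely illustrative.)
facts = {("connected", "A", "B"), ("connected", "B", "C")}
goal = ("reachable", "A", "C")

# Symbol level: a concrete representation (sets of tuples) plus an algorithm
# (naive forward chaining over a transitivity rule) that mechanizes inference.
def reachable(facts):
    known = {("reachable", x, y) for (_, x, y) in facts}
    changed = True
    while changed:
        changed = False
        for (_, x, y) in list(known):
            for (_, y2, z) in list(known):
                if y == y2 and ("reachable", x, z) not in known:
                    known.add(("reachable", x, z))
                    changed = True
    return known

print(goal in reachable(facts))  # True: the goal follows from what is known
```

At the knowledge level one would simply say the program knows that A connects to B and B to C, and that it wants to establish whether C is reachable from A; everything else in the sketch belongs to the symbol level.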

Related Research Articles

Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

Neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s. In the 1990s and the 21st century, AI research adopted "neat" approaches almost exclusively, and these have proven to be the most successful.

The principle of rationality was coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism. According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational analysis.

In knowledge-based systems, agents choose actions based on the principle of rationality to move closer to a desired goal. The agent is able to make decisions based on knowledge it has about the world. But for the agent to actually change its state, it must use whatever means it has available. This level of description for the agent's behavior is the symbol level. The term was coined by Allen Newell in 1982.

Soar is a cognitive architecture, originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University. It is now maintained and developed by John Laird's research group at the University of Michigan.

<span class="mw-page-title-main">Logic in computer science</span> Academic discipline

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas.

Unified Theories of Cognition is a 1990 book by Allen Newell. Newell argues for the need of a set of general assumptions for cognitive models that account for all of cognition: a unified theory of cognition, or cognitive architecture. The research started by Newell on unified theories of cognition represents a crucial point of divergence from the vision of his long-term collaborator and AI pioneer Herbert Simon concerning the future of artificial intelligence research. Antonio Lieto recently drew attention to this discrepancy, pointing out that Herbert Simon decided to focus on the construction of single simulative programs, which he considered a sufficient means to enable the generalisation of “unifying” theories of cognition. Newell, on the other hand, did not consider the construction of single simulative microtheories a sufficient means to that end; instead, he began the enterprise of studying and developing integrated, multi-tasking intelligence via cognitive architectures, an effort that led to the development of Soar.

A physical symbol system takes physical patterns (symbols), combines them into structures (expressions), and manipulates them to produce new expressions.
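
As a rough illustration (the symbols, expressions, and the derive_above operation below are invented for this sketch, not part of Newell and Simon's formulation), a small program that builds and rewrites tuple-structured expressions already exhibits these ingredients:

```python
# Symbols are atomic tokens; expressions are structures built from symbols.
Symbol = str
Expression = tuple  # e.g. ("on", "block_a", "block_b")

def compose(*symbols: Symbol) -> Expression:
    """Combine symbols into a structured expression."""
    return tuple(symbols)

def derive_above(expr: Expression) -> Expression:
    """Manipulate an expression to produce a new one: if a is on b, a is above b."""
    relation, a, b = expr
    return ("above", a, b) if relation == "on" else expr

fact = compose("on", "block_a", "block_b")
print(derive_above(fact))  # ('above', 'block_a', 'block_b')
```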

<span class="mw-page-title-main">Intelligent agent</span> Software agent which acts autonomously

In artificial intelligence, an intelligent agent (IA) is anything that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance through learning or by acquiring knowledge. Intelligent agents may be simple or complex: a thermostat is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.

<span class="mw-page-title-main">History of artificial intelligence</span> Overview of the history of artificial intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

Cognitive robotics, or cognitive technology, is a subfield of robotics concerned with endowing a robot with intelligent behavior by providing it with a processing architecture that allows it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embodied embedded cognition, drawing on fields such as robotic process automation, artificial intelligence, machine learning, deep learning, optical character recognition, image processing, process mining, analytics, software development and system integration.

A modeling perspective in information systems is a particular way to represent pre-selected aspects of a system. Any perspective has a different focus, conceptualization, dedication and visualization of what the model is representing.

A "production system " is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior but it also includes the mechanism necessary to follow those rules as the system responds to states of the world. Those rules, termed productions, are a basic representation found useful in automated planning, expert systems and action selection.

<span class="mw-page-title-main">Outline of thought</span> Overview of and topical guide to thought

The outline of thought is an overview of and topical guide to thought (thinking).

Nouvelle artificial intelligence (AI) is an approach to artificial intelligence pioneered in the 1980s by Rodney Brooks, who was then part of the MIT artificial intelligence laboratory. Nouvelle AI differs from classical AI by aiming to produce robots with intelligence levels similar to insects. Researchers believe that intelligence can emerge organically from simple behaviors as these intelligences interact with the "real world", instead of relying on the constructed worlds that symbolic AIs typically need to have programmed into them.

In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom up" by focussing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills.

A deliberative agent is a kind of software agent used mainly in multi-agent system simulations. According to Wooldridge's definition, a deliberative agent is "one that possesses an explicitly represented, symbolic model of the world, and in which decisions are made via symbolic reasoning".

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
