In computer science, a knowledge base (KB) is a set of sentences, each expressed in a knowledge representation language, with interfaces to tell new sentences and to ask questions about what is known, where either of these interfaces might use inference.[1] It is a technology used to store the complex structured data that a computer system uses. The term was first used in connection with expert systems, the first knowledge-based systems.
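To make the tell/ask interface concrete, the following Python sketch shows a minimal knowledge base whose ask operation uses naive forward-chaining inference; the class name and the fact/rule format are illustrative assumptions, not a standard representation language:

```python
# A minimal sketch of a knowledge base with TELL/ASK interfaces.
# The sentence format (plain facts and simple if-then rules) is an
# illustrative assumption, not a standard representation language.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()
        self.rules = []          # (premise, conclusion) pairs

    def tell(self, sentence):
        """Add a new sentence: either a rule (as a tuple) or a fact."""
        if isinstance(sentence, tuple):   # rule: (premise, conclusion)
            self.rules.append(sentence)
        else:
            self.facts.add(sentence)

    def ask(self, query):
        """Answer a query, using inference over the stored rules."""
        inferred = set(self.facts)
        changed = True
        while changed:                    # naive forward chaining
            changed = False
            for premise, conclusion in self.rules:
                if premise in inferred and conclusion not in inferred:
                    inferred.add(conclusion)
                    changed = True
        return query in inferred

kb = KnowledgeBase()
kb.tell("it is raining")
kb.tell(("it is raining", "the ground is wet"))
print(kb.ask("the ground is wet"))  # True: inferred rather than stored
```

Note that the answer to the ask call was never stored explicitly; it was derived by inference, which is what distinguishes this interface from a simple lookup.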
The original use of the term knowledge base was to describe one of the two sub-systems of an expert system. A knowledge-based system consists of a knowledge base representing facts about the world and of ways of reasoning about those facts to deduce new facts or highlight inconsistencies.[2]
The term "knowledge-base" was coined to distinguish this form of knowledge store from the more common and widely used term database . During the 1970s, virtually all large management information systems stored their data in some type of hierarchical or relational database. At this point in the history of information technology, the distinction between a database and a knowledge-base was clear and unambiguous.
A database had the following properties:
- Flat data: data was represented in a tabular format, usually with strings or numbers in each field.
- Multiple users: a conventional database needed to support more than one user or system logged into the same data at the same time.
- Transactions: a database needed to maintain integrity and consistency among data accessed by concurrent users, the so-called ACID properties (atomicity, consistency, isolation, durability).
- Large, long-lived data: a corporate database needed to store hundreds of thousands or more rows of data reliably and persistently.
The first knowledge-based systems had data needs that were the opposite of these database requirements. An expert system requires structured data: not just tables with numbers and strings, but pointers to other objects that in turn have additional pointers. The ideal representation for a knowledge base is an object model (often called an ontology in artificial intelligence literature) with classes, subclasses, and instances.
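As a rough illustration of such an object model, the following Python sketch shows a class, a subclass, and instances connected by pointers; the class names and attributes are invented for the example:

```python
# Illustrative sketch of an object model (ontology) with classes,
# subclasses, and instances; the names here are invented for the example.

class Animal:
    def __init__(self, name):
        self.name = name

class Human(Animal):                      # subclass: every Human is an Animal
    def __init__(self, name, physician=None):
        super().__init__(name)
        self.physician = physician        # pointer to another object

alice = Human("Alice")
bob = Human("Bob", physician=alice)       # instances linked by pointers

# Unlike a flat table row, an instance can be traversed object to object:
print(bob.physician.name)                 # "Alice"
print(isinstance(bob, Animal))            # True: subclass relations support reasoning
```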
Early expert systems also had little need for multiple users or the complexity that comes with requiring transactional properties on data. The data in early expert systems was used to arrive at a specific answer, such as a medical diagnosis, the design of a molecule, or a response to an emergency.[2] Once the solution to the problem was known, there was not a critical demand to store large amounts of data back to a permanent memory store. More precisely, given the technology available, researchers compromised and did without these capabilities because they were beyond what could be achieved at the time, and because useful solutions to non-trivial problems could be developed without them. Even from the beginning, the more astute researchers realized the potential benefits of being able to store, analyze, and reuse knowledge. For example, see the discussion of Corporate Memory in the earliest work of the Knowledge-Based Software Assistant program by Cordell Green et al.[3]
The volume requirements were also different for a knowledge base compared to a conventional database. The knowledge base needed to know facts about the world. For example, to represent the statement that "All humans are mortal", a database typically could not represent this general knowledge, but would instead need to store thousands of rows representing information about specific humans. Representing that all humans are mortal, and being able to reason that any given human is therefore mortal, is the work of a knowledge base. Representing that George, Mary, Sam, Jenna, Mike,... and hundreds of thousands of other customers are all humans with specific ages, sexes, addresses, etc. is the work of a database.[4][5]
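A sketch of this contrast in Python, with invented names and a deliberately simplified rule, might look like the following:

```python
# Sketch contrasting the two kinds of storage described above.
# The field names and the rule are illustrative assumptions.

# Database-style: one row per specific human (scales with the customer count).
customers = [
    {"name": "George", "age": 42, "species": "human"},
    {"name": "Mary",   "age": 35, "species": "human"},
    # ... hundreds of thousands more rows
]

# Knowledge-base-style: one general rule covers every human, past or future.
def is_mortal(entity):
    """All humans are mortal: a single rule, not one fact per person."""
    return entity.get("species") == "human"

print(all(is_mortal(c) for c in customers))  # True for any number of rows
```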
As expert systems moved from being prototypes to systems deployed in corporate environments, the requirements for their data storage rapidly started to overlap with standard database requirements for multiple, distributed users with support for transactions. Initially, the demand could be seen in two different but competitive markets. From the AI and object-oriented communities, object-oriented databases such as Versant emerged: systems designed from the ground up to support object-oriented capabilities while also providing standard database services. On the other hand, large database vendors such as Oracle added capabilities to their products that supported knowledge-base requirements such as class-subclass relations and rules.
The next evolution for the term "knowledge-base" was the Internet. With the rise of the Internet, documents, hypertext, and multimedia support were now critical for any corporate database. It was no longer enough to support large tables of data or relatively small objects that lived primarily in computer memory. Support for corporate web sites required persistence and transactions for documents. This created a whole new discipline known as Web Content Management.
The other driver for document support was the rise of knowledge management products such as HCL Notes (formerly Lotus Notes). Knowledge management predated the Internet, but the Internet created great synergy between the two areas. Knowledge management products adopted the term "knowledge base" to describe their repositories, but with a significantly different meaning. In the case of previous knowledge-based systems, the knowledge was primarily for the use of an automated system, to reason about and draw conclusions about the world. With knowledge management products, the knowledge was primarily meant for humans, for example to serve as a repository of manuals, procedures, policies, best practices, reusable designs and code, etc. In both cases the distinctions between the uses and kinds of systems were ill-defined. As the technology scaled up, it was rare to find a system that could cleanly be classified as either knowledge-based in the sense of an expert system that performed automated reasoning, or knowledge-based in the sense of knowledge management that provided knowledge in the form of documents and media that could be leveraged by humans.[6]
In computing, a data warehouse, also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis and is a core component of business intelligence. Data warehouses are central repositories of data integrated from disparate sources. They store current and historical data organized so as to make it easy to create reports, run queries, and derive insights from the data. Unlike operational databases, they are intended to be used by analysts and managers to help make organizational decisions.
In computing, a database is an organized collection of data or a type of data store based on the use of a database management system (DBMS), the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. Expert systems were among the first truly successful forms of AI software. They were created in the 1970s and proliferated in the 1980s, when they were widely regarded as the future of AI, before the advent of successful artificial neural networks. An expert system is divided into two subsystems: 1) a knowledge base, which represents facts and rules; and 2) an inference engine, which applies the rules to the known facts to deduce new facts, and can include explaining and debugging abilities.
Knowledge representation and reasoning is a field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or having a natural-language dialog. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.
The Semantic Web, sometimes known as Web 3.0, is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.
A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner.
Business intelligence (BI) consists of strategies, methodologies, and technologies used by enterprises for data analysis and management of business information. Common functions of BI technologies include reporting, online analytical processing, analytics, dashboard development, data mining, process mining, complex event processing, business performance management, benchmarking, text mining, predictive analytics, and prescriptive analytics.
In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate, as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.
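The following Python sketch illustrates both modes over the same toy rule set; the rule representation (a set of premises paired with a conclusion) is an assumption made for the example, not a standard format:

```python
# Sketch of the two inference modes over simple if-then rules.
# Each rule maps a set of premises to a conclusion; this representation
# is an assumption for the example, not a standard one.

RULES = [
    ({"human"}, "mammal"),
    ({"mammal"}, "mortal"),
]

def forward_chain(facts):
    """Start from known facts and assert everything derivable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Start from the goal and work backward to known facts."""
    if goal in facts:
        return True
    return any(
        all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
        if conclusion == goal
    )

print(forward_chain({"human"}))             # {'human', 'mammal', 'mortal'}
print(backward_chain("mortal", {"human"}))  # True
```

Forward chaining here derives every reachable conclusion, while backward chaining only explores the rules needed to prove the one goal asked about; the trade-off between the two is a standard design choice in inference engines.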
IDEF, initially an abbreviation of ICAM Definition and renamed in 1999 as Integration Definition, is a family of modeling languages in the field of systems and software engineering. They cover a wide range of uses from functional modeling to data, simulation, object-oriented analysis and design, and knowledge acquisition. These definition languages were developed under funding from the U.S. Air Force and, although still most commonly used by the Air Force and other military and United States Department of Defense (DoD) agencies, are in the public domain.
In computer science, a software agent is a computer program that acts for a user or another program in a relationship of agency.
A decision support system (DSS) is an information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e., unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.
Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques. It may be applied as part of the broader model-driven engineering (MDE) concept.
A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. Knowledge-based systems were the focus of early artificial intelligence researchers in the 1980s. The term can refer to a broad range of systems. However, all knowledge-based systems have two defining components: an attempt to represent knowledge explicitly, called a knowledge base, and a reasoning system that allows them to derive new knowledge, known as an inference engine.
Knowledge-based engineering (KBE) is the application of knowledge-based systems technology to the domain of manufacturing design and production. The design process is inherently a knowledge-intensive activity, so a great deal of the emphasis for KBE is on the use of knowledge-based technology to support computer-aided design (CAD); however, knowledge-based techniques can be applied to the entire product lifecycle.
Ontotext is a software company that produces software relating to data management. Its main products are GraphDB, an RDF database; and Ontotext Platform, a general data management platform based on knowledge graphs. It was founded in 2000 in Bulgaria, and now has offices internationally. Together with the BBC, Ontotext developed one of the early large-scale industrial semantic applications, Dynamic Semantic Publishing, starting in 2010.
A frame is an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations".
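A minimal Python sketch of the idea, assuming a simple slot-and-default representation with inheritance from a parent frame (the frame and slot names are invented for the example):

```python
# Sketch of a frame as a slot-filler structure with inherited defaults,
# in the spirit of the "stereotyped situation" idea; frame and slot
# names here are invented for the example.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        """Look up a slot, falling back to the parent frame's default."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A stereotyped situation with default expectations...
restaurant = Frame("restaurant", has_menu=True, payment="after meal")
# ...and a specialization that overrides one default.
fast_food = Frame("fast-food", parent=restaurant, payment="before meal")

print(fast_food.get("has_menu"))  # True (inherited default)
print(fast_food.get("payment"))   # "before meal" (overridden)
```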
ApexKB is a discontinued free and open-source script for collaborative search and knowledge management. It is powered by a shared enterprise bookmarking engine that is a fork of KnowledgebasePublisher, and was publicly announced on 29 September 2008. A stable version, then distributed under the name Jumper, was publicly released under the GNU General Public License and made available on SourceForge on 26 March 2009.
A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law. Legal expert systems employ a rule base or knowledge base and an inference engine to accumulate, reference and produce expert knowledge on specific subjects within the legal domain.
The Knowledge Based Software Assistant (KBSA) was a research program funded by the United States Air Force. The goal of the program was to apply concepts from artificial intelligence to the problem of designing and implementing computer software. Software would be described by models in very high-level languages (essentially equivalent to first-order logic), and then transformation rules would transform the specification into efficient code. The Air Force hoped to be able to generate the software to control weapons systems and other command and control systems using this method. As software was becoming ever more critical to USAF weapons systems, it was realized that improving the quality and productivity of the software development process could have significant benefits for the military, as well as for information technology in other major US industries.
Your database is that patient's record, including history... vital signs, drugs given,... The knowledge base... is what you learned in medical school... it consists of facts, predicates, and beliefs...