KAON (Karlsruhe ontology) is an ontology infrastructure developed by the University of Karlsruhe and the Research Center for Information Technologies in Karlsruhe. Its first incarnation was developed in 2002 and supported an enhanced version of RDF ontologies. Several tools, such as the graphical ontology editor OIModeler and the KAON Server, were based on KAON.
KAON has ontology learning companion tools that take non-annotated natural-language text as input: TextToOnto (KAON-based) and Text2Onto (KAON2-based). Text2Onto is based on the Probabilistic Ontology Model (POM). [1]
In 2005, the first version of KAON2 was released, offering fast reasoning support for OWL ontologies. KAON2 is not backward-compatible with KAON. KAON2 is developed as a joint effort of the Information Process Engineering (IPE) at the Research Center for Information Technologies (FZI), the Institute of Applied Informatics and Formal Description Methods (AIFB) at the University of Karlsruhe, and the Information Management Group (IMG) at the University of Manchester. [2]
KAON, TextToOnto, and Text2Onto are open source and based on Java. KAON2 is not open source, [3] but the executable can be downloaded from the KAON2 site.
The Semantic Web, sometimes known as Web 3.0, is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.
In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.
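The definition above can be made concrete with a toy sketch: a small subject area represented as a term hierarchy plus typed relations, with a subsumption check over the hierarchy. All concept and property names here are illustrative assumptions, not drawn from any published ontology.

```python
# A toy ontology for a "vehicle" domain: terms, a subclass (is-a)
# hierarchy, and typed relations between concepts.
subclass_of = {
    "Car": "Vehicle",
    "Truck": "Vehicle",
    "Vehicle": "Thing",
    "Engine": "Thing",
}

# Relations pair a property name with its domain and range concepts.
relations = {
    "hasPart": ("Vehicle", "Engine"),
}

def is_a(concept, ancestor):
    """Walk the subclass hierarchy to test subsumption."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = subclass_of.get(concept)
    return False

print(is_a("Car", "Thing"))  # a Car is transitively a Thing -> True
```

In practice such structures are written in a standard ontology language such as OWL or RDF Schema rather than ad-hoc data structures, but the same term/hierarchy/relation anatomy carries over.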
Information science is an academic field which is primarily concerned with analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information. Practitioners within and outside the field study the application and the usage of knowledge in organizations in addition to the interaction between people, organizations, and any existing information systems with the aim of creating, replacing, improving, or understanding the information systems.
Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressive power and reasoning complexity by supporting different sets of mathematical constructors.
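The set-theoretic semantics behind DL constructors can be illustrated over a finite interpretation; the sketch below evaluates concept conjunction (C ⊓ D) and existential restriction (∃R.C) as plain set operations. The domain, concepts, and role names are hypothetical examples.

```python
# A finite interpretation: a domain of individuals, concepts as
# subsets of the domain, and a role as a set of pairs.
domain = {"ann", "bob", "paper1"}
Person = {"ann", "bob"}
Publication = {"paper1"}
authorOf = {("ann", "paper1")}   # role R as a binary relation

def conj(C, D):
    """Concept conjunction C ⊓ D: set intersection."""
    return C & D

def exists(R, C):
    """Existential restriction ∃R.C: all x with some R-successor in C."""
    return {x for (x, y) in R if y in C}

# "Author" defined as Person ⊓ ∃authorOf.Publication
Author = conj(Person, exists(authorOf, Publication))
print(Author)  # {'ann'}
```

Reasoners generalize this idea from one fixed interpretation to questions about all possible interpretations, which is where decidability and complexity trade-offs arise.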
A concept map or conceptual diagram is a diagram that depicts suggested relationships between concepts. Concept maps may be used by instructional designers, engineers, technical writers, and others to organize and structure knowledge.
The Unified Medical Language System (UMLS) is a compendium of many controlled vocabularies in the biomedical sciences. It provides a mapping structure among these vocabularies and thus allows one to translate among the various terminology systems; it may also be viewed as a comprehensive thesaurus and ontology of biomedical concepts. UMLS further provides facilities for natural language processing. It is intended to be used mainly by developers of systems in medical informatics.
In information science, an upper ontology is an ontology that consists of very general terms that are common across all domains. An important function of an upper ontology is to support broad semantic interoperability among a large number of domain-specific ontologies by providing a common starting point for the formulation of definitions. Terms in the domain ontology are ranked under the terms in the upper ontology, e.g., the upper ontology classes are superclasses or supersets of all the classes in the domain ontologies.
Semantic MediaWiki (SMW) is an extension to MediaWiki that allows for annotating semantic data within wiki pages, thus turning a wiki that incorporates the extension into a semantic wiki. Data that has been encoded can be used in semantic searches, used for aggregation of pages, displayed in formats like maps, calendars and graphs, and exported to the outside world via formats like RDF and CSV.
Rudi Studer is a German computer scientist and professor emeritus at KIT, Germany. He served as head of the knowledge management research group at the Institute AIFB and as one of the directors of the Karlsruhe Service Research Institute (KSRI). He is a former president of the Semantic Web Science Association, an STI International Fellow, and a member of numerous programme committees and editorial boards. He was one of the inaugural editors-in-chief of the Journal of Web Semantics, a position he held until 2007. He is a co-author of the "Semantic Wikipedia" proposal which led to the development of Wikidata.
Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.
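A minimal frame system in the spirit of Minsky's proposal can be sketched as follows: each frame has named slots, and a frame may inherit slot values from a more general frame (its "a-kind-of" parent). The frame and slot names are illustrative.

```python
class Frame:
    """A frame: named slots plus an optional parent frame to inherit from."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot locally, then fall back to the parent frame."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A "stereotyped situation": rooms usually have four walls and a ceiling;
# a kitchen inherits those defaults and adds slots of its own.
room = Frame("Room", walls=4, has_ceiling=True)
kitchen = Frame("Kitchen", parent=room, appliances=["stove", "sink"])

print(kitchen.get("walls"))  # inherited default from Room -> 4
```

The default-plus-override lookup is the essential frame mechanism; full frame languages add facets, procedural attachments, and multiple inheritance on top of it.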
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.
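A very rough sketch of the first step of ontology learning is mining candidate domain terms from raw text by frequency after stop-word removal. Real systems such as Text2Onto layer linguistic analysis and statistical relevance measures on top of this; the stop-word list and example text below are illustrative.

```python
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "is", "are", "to", "in", "with"}

def candidate_terms(text, top=2):
    """Return the `top` most frequent non-stop-word tokens as term candidates."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOP)
    return [term for term, _ in counts.most_common(top)]

text = ("The engine powers the car. The car has an engine. "
        "A truck is a car with a large engine.")
print(candidate_terms(text))  # ['engine', 'car']
```

Later stages would then look for taxonomic relations between such candidates (e.g., that a truck is a kind of car) and encode the result in an ontology language.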
Data preprocessing can refer to manipulation, filtration or augmentation of data before it is analyzed, and is often an important step in the data mining process. Data collection methods are often loosely controlled, resulting in out-of-range values, impossible data combinations, and missing values, amongst other issues.
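A small preprocessing sketch addressing the issues named above, out-of-range values and missing values, might look like the following; the field names and validity ranges are illustrative assumptions.

```python
records = [
    {"age": 34, "height_cm": 172},
    {"age": -5, "height_cm": 160},    # out-of-range age
    {"age": 51, "height_cm": None},   # missing value
]

def clean(rows):
    """Keep only rows whose fields are present and within plausible ranges."""
    return [
        r for r in rows
        if r["age"] is not None and 0 <= r["age"] <= 120
        and r["height_cm"] is not None and 30 <= r["height_cm"] <= 250
    ]

print(len(clean(records)))  # 1 valid record survives
```

Whether invalid rows are dropped, flagged, or imputed is a per-application decision; dropping is shown here only because it is the simplest policy.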
Dieter Fensel is a German researcher in the field of formal languages and the semantic web. He is University Professor at the University of Innsbruck, where he directs the Semantic Technologies Institute Innsbruck, a research center associated with the university.
In computer science, information science and systems engineering, ontology engineering is a field which studies the methods and methodologies for building ontologies, which encompasses a representation, formal naming and definition of the categories, properties and relations between the concepts, data and entities of a given domain of interest. In a broader sense, this field also includes a knowledge construction of the domain using formal ontology representations such as OWL/RDF. A large-scale representation of abstract concepts such as actions, time, physical objects and beliefs would be an example of ontological engineering. Ontology engineering is one of the areas of applied ontology, and can be seen as an application of philosophical ontology. Core ideas and objectives of ontology engineering are also central in conceptual modeling.
Knowledge extraction is the creation of knowledge from structured and unstructured sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge or the generation of a schema based on the source data.
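One knowledge-extraction step, lifting a structured source into subject-predicate-object triples a reasoner could consume, can be sketched as below. The relational-style row, entity type, and property names are hypothetical.

```python
row = {"id": "emp42", "name": "Ada", "manager": "emp7"}

def row_to_triples(row, entity_type="Employee"):
    """Map one relational-style row to a list of (s, p, o) triples."""
    subject = row["id"]
    triples = [(subject, "rdf:type", entity_type)]
    for key, value in row.items():
        if key != "id":
            triples.append((subject, key, value))
    return triples

for t in row_to_triples(row):
    print(t)
```

Going "beyond structured information", in the sense used above, would mean additionally linking `emp7` to an existing formal vocabulary or inferring the schema these properties belong to, rather than stopping at the triples themselves.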
The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management. It has a layered architecture that combines existing open source software with additional open source components developed specifically to provide a complete Web application framework. OSF is made available under the Apache 2 license.
Pascal Hitzler is a German American computer scientist specializing in the Semantic Web and artificial intelligence. He holds the endowed Lloyd T. Smith Creativity in Engineering Chair, is one of the directors of the Institute for Digital Agriculture and Advanced Analytics (ID3A) and director of the Center for Artificial Intelligence and Data Science (CAIDS) at Kansas State University, and is the founding Editor-in-Chief of the Semantic Web journal and the IOS Press book series Studies on the Semantic Web.
Ali Sunyaev is a professor of computer science and director of the Institute of Applied Informatics and Formal Description Methods at the Karlsruhe Institute of Technology (KIT).
In knowledge representation and reasoning, a knowledge graph is a knowledge base that uses a graph-structured data model or topology to represent and operate on data. Knowledge graphs are often used to store interlinked descriptions of entities – objects, events, situations or abstract concepts – while also encoding the semantics or relationships underlying these entities.
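A minimal graph-structured store in this sense keeps entities as nodes and labelled edges as triples, with lookups that follow a named relationship. The sketch below uses entity names from this article purely as sample data.

```python
# Triples encode labelled edges between entities.
triples = {
    ("KAON", "developed_by", "University of Karlsruhe"),
    ("KAON2", "successor_of", "KAON"),
    ("KAON2", "supports", "OWL"),
}

def objects(subject, predicate):
    """Return all objects linked to `subject` via `predicate`."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

print(objects("KAON2", "successor_of"))  # {'KAON'}
```

Production knowledge graphs add indexing, a query language such as SPARQL, and schema or ontology layers over exactly this triple-shaped core.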