Semantic Research

Semantic AI, Inc.
Formerly: Semantic Research, Inc.
Company type: Private
Industry: Software company
Founded: 2001
Headquarters: San Diego, CA
Products: Cortex EIP, Semantica Pro
Number of employees: Small business
Website: www.semantic-ai.com

Semantic AI (formerly Semantic Research, Inc.) is a privately held software company headquartered in San Diego, California, with offices in the National Capital Region. Semantic AI is a Delaware C-corporation that offers patented, graph-based knowledge discovery, analysis, and visualization software. [1] [2] Its original product is a link analysis application called Semantica Pro, and it introduced a web-based analytical environment called the Cortex Enterprise Intelligence Platform, or Cortex EIP.


History

The SEMANTICA platform was originally conceived as a way to help biology students learn and retain knowledge about complex organic structures. Joe Faletti, Kathleen Fisher, and several colleagues in the University of California system created SemNet, a computer program used to draw a network of "concepts" connected to each other by "relations". [3] The approach drew on work from the late 1960s, in which Ross Quillian and Allan Collins used semantic networks to describe the organization of human semantic memory, or memory for inter-related word concepts. [4] [5] Using SemNet, students could build complex networks from simple components.


Related Research Articles

Semantic network

A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples.
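
For illustration, here is a minimal Python sketch of a semantic network held as subject-relation-object triples; the biology-flavored concepts and the helper function are invented for this example, not taken from any particular product.

```python
# A semantic network as a list of (subject, relation, object) triples.
triples = [
    ("mitochondrion", "is part of", "cell"),
    ("mitochondrion", "produces", "ATP"),
    ("chloroplast", "is part of", "plant cell"),
    ("ATP", "is a", "molecule"),
]

def neighbors(concept, triples):
    """Return every (relation, other concept) edge touching a concept."""
    edges = [(r, o) for s, r, o in triples if s == concept]
    edges += [(r, s) for s, r, o in triples if o == concept]
    return edges

print(neighbors("mitochondrion", triples))
# [('is part of', 'cell'), ('produces', 'ATP')]
```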

In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.

In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents a subset of the objects in the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets that was developed by Garrett Birkhoff and others in the 1930s.
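
As a rough illustration of the idea, the following Python sketch derives the formal concepts of a tiny, invented object/attribute table: for each group of objects it takes the attributes they share (the intent) and then the full set of objects having those attributes (the extent).

```python
from itertools import combinations

# Invented toy context: objects and their attributes.
context = {
    "frog":   {"aquatic", "vertebrate"},
    "trout":  {"aquatic", "vertebrate", "has_fins"},
    "sponge": {"aquatic"},
}

def common_attributes(objs):
    sets = [context[o] for o in objs]
    # The empty group shares every attribute in the context.
    return set.intersection(*sets) if sets else {a for s in context.values() for a in s}

def objects_with(attrs):
    return {o for o, a in context.items() if attrs <= a}

concepts = set()
objs = list(context)
for r in range(len(objs) + 1):
    for group in combinations(objs, r):
        intent = common_attributes(group)   # attributes shared by the group
        extent = objects_with(intent)       # all objects carrying those attributes
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
# ['trout'] with all three attributes, ['frog', 'trout'] with
# {aquatic, vertebrate}, and all objects with {aquatic}.
```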

Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005) we can distinguish between three different perspectives of text mining: information extraction, data mining, and a knowledge discovery in databases (KDD) process. Text mining usually involves the process of structuring the input text, deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling.
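
One of the listed tasks, text categorization, can be sketched with scikit-learn. The toy corpus and labels below are invented, and a real pipeline would need far more data and preprocessing; this only shows the general shape of such a system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "quarterly revenue and profit rose sharply",
    "the striker scored twice in the final match",
    "shares fell after the earnings report",
    "the team won the championship game",
]
labels = ["finance", "sports", "finance", "sports"]

# TF-IDF turns each document into a weighted term vector; the classifier
# then learns which terms indicate which category.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["profit margins improved this quarter"]))
# expected to print ['finance'] on this toy corpus
```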

Social network analysis

Social network analysis (SNA) is the process of investigating social structures through the use of networks and graph theory. It characterizes networked structures in terms of nodes and the ties, edges, or links that connect them. Examples of social structures commonly visualized through social network analysis include social media networks, meme spread, information circulation, friendship and acquaintance networks, peer learner networks, business networks, knowledge networks, difficult working relationships, collaboration graphs, kinship, disease transmission, and sexual relationships. These networks are often visualized through sociograms in which nodes are represented as points and ties are represented as lines. These visualizations provide a means of qualitatively assessing networks by varying the visual representation of their nodes and edges to reflect attributes of interest.
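
A small illustration using the networkx library (the friendship ties are invented): build a graph of people and ties, then compute degree centrality, one common SNA measure.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Caro"), ("Ana", "Dev"),
    ("Ben", "Caro"), ("Dev", "Eli"),
])

# Degree centrality: the fraction of the network each person is directly tied to.
centrality = nx.degree_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```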

Semantic similarity is a metric defined over a set of documents or terms, where the idea of distance between items is based on the likeness of their meaning or semantic content as opposed to lexicographical similarity. Semantic similarity measures are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts or instances, through a numerical description obtained according to the comparison of information supporting their meaning or describing their nature. The term semantic similarity is often confused with semantic relatedness. Semantic relatedness includes any relation between two terms, while semantic similarity only includes "is a" relations. For example, "car" is similar to "bus", but is also related to "road" and "driving".
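
The distinction can be illustrated with a toy "is a" taxonomy and a simple path-based similarity score, both invented for this sketch; relatedness such as car-road is deliberately not captured by the score.

```python
taxonomy = {            # child -> parent ("is a" edges)
    "car": "motor vehicle",
    "bus": "motor vehicle",
    "motor vehicle": "vehicle",
    "bicycle": "vehicle",
    "vehicle": "artifact",
    "road": "artifact",
}

def path_to_root(term):
    path = [term]
    while path[-1] in taxonomy:
        path.append(taxonomy[path[-1]])
    return path

def similarity(a, b):
    """1 / (1 + number of 'is a' edges separating a and b)."""
    pa, pb = path_to_root(a), path_to_root(b)
    shared = next(x for x in pa if x in pb)   # lowest common ancestor
    distance = pa.index(shared) + pb.index(shared)
    return 1 / (1 + distance)

print(similarity("car", "bus"))    # 0.33: both are motor vehicles
print(similarity("car", "road"))   # 0.20: they only meet at "artifact"
```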

Knowledge-based engineering (KBE) is the application of knowledge-based systems technology to the domain of manufacturing design and production. The design process is inherently a knowledge-intensive activity, so much of the emphasis in KBE is on using knowledge-based technology to support computer-aided design (CAD); however, knowledge-based techniques can be applied to the entire product lifecycle.

Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.

In network theory, link analysis is a data-analysis technique used to evaluate relationships between nodes. Relationships may be identified among various types of nodes, including organizations, people and transactions. Link analysis has been used for investigation of criminal activity, computer security analysis, search engine optimization, market research, medical research, and art.
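
A minimal sketch of the idea in Python, with invented transaction records: starting from a flagged account, walk the person-account links to surface indirectly connected parties.

```python
from collections import defaultdict

transactions = [
    ("alice", "acct_1"), ("bob", "acct_1"),   # alice and bob share acct_1
    ("bob", "acct_2"), ("carol", "acct_2"),
    ("dave", "acct_3"),
]

# Build an undirected link index between people and accounts.
links = defaultdict(set)
for person, account in transactions:
    links[person].add(account)
    links[account].add(person)

def connected_to(start):
    """Walk the person-account links outward from a starting node."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt in links[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return seen - {start}

print(connected_to("acct_1"))
# alice, bob, acct_2 and carol are all reachable; dave is not
```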

Knowledge extraction is the creation of knowledge from structured and unstructured sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge or the generation of a schema based on the source data.

LIONsolver is an integrated software package for data mining, business intelligence, analytics, and modeling, supporting a reactive business intelligence approach. A non-profit version is also available as LIONoso.

The fields of marketing and artificial intelligence converge in systems which assist in areas such as market forecasting, and automation of processes and decision making, along with increased efficiency of tasks which would usually be performed by humans. The science behind these systems can be explained through neural networks and expert systems, computer programs that process input and provide valuable output for marketers.

The following outline is provided as an overview of and topical guide to natural-language processing:

Morten Middelfart is an American serial entrepreneur, inventor, and technologist. He is known for inventing the Lumina Analytics Radiance AI platform, as well as the TARGIT software for business intelligence and analytics. Middelfart is currently the founder/Chief Data Scientist of Lumina Analytics, Advisory CIO of Genomic Expression and founder of Social Quant. Middelfart holds seven U.S. patents for his work in business intelligence and analytics software.

OpenNN is a software library written in the C++ programming language which implements neural networks, a main area of deep learning research. The library is open-source, licensed under the GNU Lesser General Public License.

Semantic Scholar is a research tool for scientific literature powered by artificial intelligence. It is developed at the Allen Institute for AI and was publicly released in November 2015. Semantic Scholar uses modern techniques in natural language processing to support the research process, for example by providing automatically generated summaries of scholarly papers. The Semantic Scholar team is actively researching the use of artificial intelligence in natural language processing, machine learning, human–computer interaction, and information retrieval.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

The following outline is provided as an overview of and topical guide to machine learning:

Knowledge graph

In knowledge representation and reasoning, a knowledge graph is a knowledge base that uses a graph-structured data model or topology to represent and operate on data. Knowledge graphs are often used to store interlinked descriptions of entities – objects, events, situations or abstract concepts – while also encoding the semantics or relationships underlying these entities.
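
A small sketch using the rdflib library: store a few triples about the company described above and query them with SPARQL. The http://example.org/ namespace and the property names are invented for illustration; the facts (San Diego, founded 2001) come from the article.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Entities and the relationships ("edges") that link them.
g.add((EX.SanDiego, EX.locatedIn, EX.California))
g.add((EX.SemanticAI, EX.headquarteredIn, EX.SanDiego))
g.add((EX.SemanticAI, EX.founded, Literal(2001)))

# SPARQL query: where is the company headquartered, and in which state?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?city ?state WHERE {
        ex:SemanticAI ex:headquarteredIn ?city .
        ?city ex:locatedIn ?state .
    }
""")
for city, state in results:
    print(city, state)
```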

References

  1. US patent 8,700,555, Murphy et al., "Systems and methods for pairing of a semantic network and a knowledge sharing repository", issued April 15, 2014.
  2. US patent 9,298,702, Faletti et al., "Systems and methods for pairing of a semantic network and a natural language processing information extraction system", issued March 29, 2016.
  3. "Online SemNet Tutorial". Retrieved 17 May 2017.
  4. Fisher, Kathleen M. (1992). Kommers, P.A.M.; Jonassen, D.H.; Mayes, J.T.; Ferreira, A. (eds.). "SemNet: A Tool for Personal Knowledge Construction". Cognitive Tools for Learning. NATO ASI Series (Series F: Computer and Systems Sciences). 81: 63–75. doi:10.1007/978-3-642-77222-1_5. ISBN 978-3-642-77224-5.
  5. Gorodetsky, Malka; Fisher, Kathleen M.; Wyman, Barbara (1994). "Generating connections and learning with SemNet, a tool for constructing knowledge networks". Journal of Science Education and Technology. 3 (3): 137–144. Bibcode:1994JSEdT...3..137G. doi:10.1007/BF01575176. S2CID 62635310.