Teragram Corporation

Industry: software company
Founded: 1997
Founders: Emmanuel Roche, Yves Schabes
Headquarters: Cary, North Carolina
Area served: North Carolina, Massachusetts
Services: computational linguistics, multilingual natural language processing
Parent: SAS Institute (2008)

Teragram Corporation is a wholly owned subsidiary of SAS Institute, which is headquartered in Cary, North Carolina, USA. Teragram itself is based in Cambridge, Massachusetts, and specializes in the application of computational linguistics to multilingual natural language processing.
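
As an illustration of the kind of problem multilingual natural language processing addresses, a minimal sketch of language identification by stopword overlap follows. This is not Teragram's technology; the tiny stopword lists and example sentences are invented for the demonstration.

```python
# Minimal, illustrative language identification by stopword overlap.
# The stopword lists below are invented for this example; production
# systems use statistical models trained on far larger resources.

STOPWORDS = {
    "en": {"the", "and", "of", "to", "is", "in"},
    "fr": {"le", "la", "et", "de", "est", "les"},
    "de": {"der", "die", "und", "das", "ist", "ein"},
}

def identify_language(text: str) -> str:
    """Return the language code whose stopwords best overlap the text."""
    tokens = set(text.lower().split())
    scores = {lang: len(tokens & words) for lang, words in STOPWORDS.items()}
    return max(scores, key=scores.get)

print(identify_language("the cat is in the garden"))    # -> en
print(identify_language("le chat est dans le jardin"))  # -> fr
```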

Teragram's technology is licensed to public search engines such as Ask.com and Yahoo!, to media companies including The New York Times and the Tribune Company, and to original equipment manufacturer (OEM) customers such as Fast Search & Transfer and Verity.

Teragram was founded by Emmanuel Roche and Yves Schabes in 1997 and acquired by SAS Institute in 2008.[1]

Its major competitor is Inxight.[2]

Notes

  1. Kris Kanaracus, "SAS buys natural language processing vendor Teragram", InfoWorld, May 17, 2008.
  2. Roland Wang, "Enterprise Search: The Next Frontier", Dr. Dobb's Journal, December 1, 2004: "Linguistic modules from InXight and Teragram are the two most widely used in enterprise search engines."


Related Research Articles

A search engine is an information retrieval system designed to help find information stored on a computer system. The search results are usually presented in a list and are commonly called hits. Search engines help to minimize the time required to find information and the amount of information which must be consulted, akin to other techniques for managing information overload.

Search engine optimization (SEO) is the process of improving the quality and quantity of website traffic to a website or a web page from search engines. SEO targets unpaid traffic rather than direct traffic or paid traffic. Unpaid traffic may originate from different kinds of searches, including image search, video search, academic search, news search, and industry-specific vertical search engines.

SAS Institute

SAS Institute is an American multinational developer of analytics software based in Cary, North Carolina. SAS develops and markets a suite of analytics software, which helps access, manage, analyze and report on data to aid in decision-making. The company is the world's largest privately held software business and its software is used by most of the Fortune 500.

SAS (software)

SAS is a statistical software suite developed by SAS Institute for data management, advanced analytics, multivariate analysis, business intelligence, criminal investigation, and predictive analytics.

In text retrieval, full-text search, sometimes referred to as free-text search, refers to techniques for searching a single computer-stored document or a collection in a full-text database. Full-text search is distinguished from searches based on metadata or on parts of the original texts represented in databases.
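
The contrast can be made concrete with a minimal sketch; the documents, fields, and query below are invented for the example, and real engines add tokenization, ranking, and indexing on top of this.

```python
# Naive full-text search: scan the entire body of every document.
# Contrast with metadata search, which inspects only fields like "title".
documents = [
    {"title": "Q3 report", "body": "Revenue grew in the Boston market."},
    {"title": "Boston memo", "body": "Staffing plans for next quarter."},
]

def full_text_search(query: str, docs: list[dict]) -> list[dict]:
    """Return documents whose full body text contains the query term."""
    return [d for d in docs if query.lower() in d["body"].lower()]

def metadata_search(query: str, docs: list[dict]) -> list[dict]:
    """Return documents whose title metadata contains the query term."""
    return [d for d in docs if query.lower() in d["title"].lower()]

print([d["title"] for d in full_text_search("boston", documents)])  # ['Q3 report']
print([d["title"] for d in metadata_search("boston", documents)])   # ['Boston memo']
```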

An applicant tracking system (ATS) is a software application that enables the electronic handling of recruitment and hiring needs. An ATS can be implemented or accessed online at enterprise or small-business levels, depending on the needs of the organization; free and open-source ATS software is also available. An ATS is very similar to a customer relationship management (CRM) system, but is designed for recruitment tracking purposes. In many cases an ATS filters applications automatically based on given criteria such as keywords, skills, former employers, years of experience, and schools attended. This has caused many applicants to adopt résumé optimization techniques similar to those used in search engine optimization when creating and formatting their résumés.
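
The automatic keyword filtering described above can be sketched as follows; the criteria, threshold, and résumé texts are hypothetical, and real systems weigh many more signals.

```python
# Illustrative ATS-style keyword screen: an application passes if it
# mentions at least `min_hits` of the required keywords.
REQUIRED_KEYWORDS = {"python", "sql", "nlp"}  # hypothetical job criteria

def passes_screen(resume_text: str, min_hits: int = 2) -> bool:
    """Return True if the resume mentions enough required keywords."""
    words = set(resume_text.lower().split())
    return len(words & REQUIRED_KEYWORDS) >= min_hits

print(passes_screen("Built NLP pipelines in Python"))         # True
print(passes_screen("Managed a retail team for five years"))  # False
```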

Unstructured data is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated in documents.

Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing.
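
What an index stores can be illustrated with a toy in-memory inverted index; real engines layer compression, ranking, and persistence on top, and the documents here are invented for the example.

```python
from collections import defaultdict

# Toy inverted index: map each term to the IDs of documents containing it,
# so lookups avoid rescanning every document.
def build_index(docs: dict[int, str]) -> dict[str, set[int]]:
    index: dict[str, set[int]] = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():  # trivial whitespace tokenizer
            index[term].add(doc_id)
    return index

docs = {1: "natural language processing", 2: "language of search engines"}
index = build_index(docs)
print(sorted(index["language"]))  # [1, 2]
```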

World Programming System

The World Programming System, also known as WPS Analytics or WPS, is a software product developed by World Programming.

Apache Solr

Solr is an open-source enterprise-search platform, written in Java. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document handling. Providing distributed search and index replication, Solr is designed for scalability and fault tolerance. Solr is widely used for enterprise search and analytics use cases and has an active development community and regular releases.

Powerset was an American company based in San Francisco, California, that, in 2006, was developing a natural language search engine for the Internet. On July 1, 2008, Powerset was acquired by Microsoft for an estimated $100 million.

MetaCarta is a software company that developed one of the first search engines to use a map to find unstructured documents. The product uses natural language processing to georeference text for customers in defense, intelligence, homeland security, law enforcement, oil and gas, and publishing. The company was founded in 1999 and was acquired by Nokia in 2010. Nokia subsequently spun out the enterprise products division and the MetaCarta brand to Qbase, since renamed Finch.

A concept search is an automated information retrieval method that is used to search electronically stored unstructured text for information that is conceptually similar to the information provided in a search query. In other words, the ideas expressed in the information retrieved in response to a concept search query are relevant to the ideas contained in the text of the query.
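
The mechanics can be sketched with word-count vectors and cosine similarity standing in for real semantic models; the documents and query are invented, and because this toy matches exact tokens it only approximates the concept-level matching described above.

```python
import math
from collections import Counter

# Toy "concept" search: rank documents by cosine similarity of word-count
# vectors. Real systems use semantic representations (e.g., embeddings)
# so that related terms match even without shared words.
def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = ["hiring plans for engineers", "quarterly revenue results",
        "recruiting new engineers"]
query_vec = Counter("engineers recruiting".split())
ranked = sorted(docs, key=lambda d: cosine(query_vec, Counter(d.split())),
                reverse=True)
print(ranked[0])  # "recruiting new engineers" ranks first
```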

A natural-language user interface is a type of human-computer interface in which linguistic phenomena such as verbs, phrases, and clauses act as UI controls for creating, selecting, and modifying data in software applications.

The Ubiquitous Knowledge Processing Lab is a research lab at the Department of Computer Science at the Technische Universität Darmstadt. It was founded in 2006 by Iryna Gurevych.

In-database processing, sometimes referred to as in-database analytics, refers to the integration of data analytics into data warehousing functionality. Today, many large databases, such as those used for credit card fraud detection and investment bank risk management, use this technology because it provides significant performance improvements over traditional methods.
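
The principle can be illustrated with Python's built-in sqlite3 module standing in for an analytic database; the table and figures are invented for the example. The aggregate is computed inside the database engine, so only a small summary result crosses the application boundary.

```python
import sqlite3

# Illustrative in-database processing: push the computation to the
# database rather than fetching every row into the application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (account TEXT, amount REAL)")
conn.executemany("INSERT INTO txn VALUES (?, ?)",
                 [("a", 120.0), ("a", 80.0), ("b", 300.0)])

# The database computes the per-account average; the application
# receives only two summary rows.
for account, avg in conn.execute(
        "SELECT account, AVG(amount) FROM txn GROUP BY account"):
    print(account, avg)  # a 100.0 / b 300.0
```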

The outline of natural language processing is an overview of and topical guide to the field.

Sketch Engine

Sketch Engine is a corpus manager and text analysis software developed by Lexical Computing Limited since 2003. Its purpose is to enable people studying language behaviour to search large text collections according to complex and linguistically motivated queries. Sketch Engine gained its name after one of the key features, word sketches: one-page, automatic, corpus-derived summaries of a word's grammatical and collocational behaviour. Currently, it supports and provides corpora in 90+ languages.

Semantic Scholar is an artificial-intelligence-backed search engine for academic publications developed at the Allen Institute for AI and publicly released in November 2015. It uses advances in natural language processing to provide summaries of scholarly papers. The Semantic Scholar team is actively researching the use of artificial intelligence in natural language processing, machine learning, human-computer interaction, and information retrieval.