Basis Technology

BasisTech
Company type: Private
Industry: Information technology, information access, digital forensics, transliteration
Founded: 1995
Headquarters: Somerville, Massachusetts, United States
Area served: Americas, Europe, Asia
Key people: Carl Hoffman (CEO, Co-Founder); Steven Cohen (EVP/COO, Co-Founder); Brian Carrier (CTO and GM Cyber Forensics); Simson Garfinkel (Chief Scientist); Junichi Hasegawa (VP Asia)
Products: KonaSearch, Cyber Triage, Autopsy, Sleuth Kit
Subsidiaries: BasisTech GK
Websites: http://www.basistech.com, http://www.konasearch.com, http://www.autopsy.com, http://www.cybertriage.com

BasisTech is a software company specializing in applying artificial intelligence techniques to understanding documents and unstructured data written in different languages. It is headquartered in Somerville, Massachusetts, with a subsidiary office in Tokyo. Its legal name is BasisTech LLC.

The company was founded in 1995 by graduates of the Massachusetts Institute of Technology to apply artificial intelligence techniques for natural language processing, helping computer systems understand written human language. Its software focuses on analyzing freeform text so that applications can better interpret the meaning of the words; for example, it can identify tokens, parts of speech, and lemmas. [1] The tools can also recognize different forms of names and phrases. The name of a person, say Albert P. Jones, can appear in many different ways: some texts will call him "Al Jones", others "Mr. Jones", and others "Albert Paul Jons". [2]
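
As a rough illustration of this kind of analysis, the sketch below uses the open-source spaCy library (not BasisTech's own software) to print tokens with their parts of speech and lemmas, plus a naive string-similarity score to hint at the name-variant problem; the sentence and names are invented examples.

    # Token, part-of-speech, and lemma analysis with spaCy; requires the
    # en_core_web_sm model to be installed.
    import spacy
    from difflib import SequenceMatcher

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Mr. Jones wrote the reports himself.")

    for token in doc:
        print(token.text, token.pos_, token.lemma_)   # e.g. "reports NOUN report"

    # Naive similarity between two name variants; production name matching
    # relies on much richer linguistic and phonetic evidence.
    print(SequenceMatcher(None, "Albert P. Jones", "Al Jones").ratio())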

The software also performs entity extraction, that is, finding words in text that refer to people, places, and organizations, for uses such as due diligence, intelligence, and metadata tagging. [3]
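
A minimal sketch of entity extraction, again using spaCy's pretrained named-entity recognizer rather than BasisTech's commercial extractor; the example sentence restates facts given elsewhere in this article.

    # Entity extraction sketch: tag spans that refer to people, places,
    # organizations, and dates.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Carl Hoffman co-founded BasisTech in Somerville, Massachusetts in 1995.")

    for ent in doc.ents:
        print(ent.text, ent.label_)   # e.g. "Carl Hoffman PERSON", "1995 DATE"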

The company is best known for its Rosette product, which uses natural language processing techniques to improve information retrieval, text mining, search engines, and other applications. The tool enables search engines to search in multiple languages [4] and to match identities and dates. [5] Rosette was sold to Babel Street in 2022. [6]

BasisTech software is also used by forensic analysts to search through files for words, tokens, phrases, or numbers that may be important to investigators. [7] The company also provides software (Cyber Triage) that helps organizations respond to cyberattacks. [8]
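
The sketch below illustrates that kind of keyword and pattern search over a directory of extracted files; the directory name and patterns are placeholders rather than part of any BasisTech product.

    # Search extracted files for keywords and number-like patterns of
    # potential interest to an investigator.
    import re
    from pathlib import Path

    PATTERNS = [re.compile(p) for p in (r"invoice", r"\b\d{3}-\d{2}-\d{4}\b")]  # keyword + SSN-like number

    for path in Path("evidence_export").rglob("*"):   # placeholder directory of extracted files
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in PATTERNS:
            for match in pattern.finditer(text):
                print(f"{path}: offset {match.start()}: {match.group()!r}")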

Rosette

Rosette is available as a cloud deployment (public or on-premises) or as a Java SDK. [9] Rosette provides a variety of natural language processing tools for unstructured text: language identification, base linguistics, entity extraction, name matching, name translation, sentiment analysis, semantic similarity, relationship extraction, topic extraction, categorization, and Arabic chat translation. [10] It can be integrated into applications to enhance financial compliance onboarding, [11] communication surveillance compliance, [12] social media monitoring, [13] cyber threat intelligence, [14] and customer feedback analysis. [15]
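
A hypothetical call to a cloud entity-extraction endpoint of this kind is sketched below; the URL, header name, and response fields are assumptions for illustration only and are not taken from Rosette's documentation.

    # Hypothetical REST call to a cloud NLP entity-extraction endpoint.
    # The endpoint URL, header name, and response shape are placeholders.
    import requests

    API_KEY = "your-api-key"                                  # placeholder credential
    ENDPOINT = "https://api.example.com/rest/v1/entities"     # assumed endpoint

    resp = requests.post(
        ENDPOINT,
        headers={"X-API-Key": API_KEY},
        json={"content": "BasisTech was founded in 1995 in Massachusetts."},
        timeout=10,
    )
    resp.raise_for_status()
    for entity in resp.json().get("entities", []):
        print(entity)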

The Rosette Linguistics Platform is composed of these modules:

Rosette is used both by United States government offices to support translation and by major Internet infrastructure firms such as search engines. [20] [21]

Digital forensics

BasisTech develops the open-source digital forensics tools The Sleuth Kit and Autopsy, which help identify and extract clues from data storage devices such as hard disks and flash cards, as well as from devices such as smartphones and iPods. The open-source licensing model allows them to be used as the foundation for larger projects, such as a Hadoop-based tool for massively parallel forensic analysis of very large data collections.
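
As an illustration of building on The Sleuth Kit programmatically, the sketch below lists the entries in the root directory of a raw disk image using pytsk3, the Python bindings for The Sleuth Kit; the image file name is a placeholder.

    # List the root directory of a raw disk image with pytsk3
    # (Python bindings for The Sleuth Kit).
    import pytsk3

    img = pytsk3.Img_Info("disk.dd")      # raw image acquired from a drive (placeholder name)
    fs = pytsk3.FS_Info(img)              # detect and open the file system
    root = fs.open_dir(path="/")

    for entry in root:
        name = entry.info.name.name.decode(errors="ignore")
        if name not in (".", ".."):
            print(name)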

The digital forensics tool set is used to analyze file systems, new media types, new file types, and file system metadata. The tools can search for particular patterns in files, allowing them to target significant files or usage profiles. They can, for instance, identify common files using hash functions and deconstruct the data structures of important operating system log files.
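
A minimal sketch of the hash-based known-file check mentioned above, assuming a reference set of digests (for example, an NSRL-style hash list); the directory and hash values are placeholders.

    # Flag files whose digests appear in a reference set of known hashes.
    import hashlib
    from pathlib import Path

    KNOWN_HASHES = {
        "d41d8cd98f00b204e9800998ecf8427e",   # MD5 of an empty file, as a stand-in entry
    }

    for path in Path("extracted_files").rglob("*"):   # placeholder directory
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            label = "known" if digest in KNOWN_HASHES else "unknown"
            print(label, digest, path)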

The tools are designed to be customizable through an open plugin architecture. Basis Technology helps manage a large and diverse community of developers who use the tools in investigations.

KonaSearch

In June 2019, BasisTech acquired KonaSearch, [22] a startup specializing in search for Salesforce.com and other office database repositories, which can automate the search step of business workflows. [23]

Related Research Articles

Natural language processing (NLP) is an interdisciplinary subfield of computer science and information retrieval. It is primarily concerned with giving computers the ability to support and manipulate human language. It involves processing natural language datasets, such as text corpora or speech corpora, using either rule-based or probabilistic machine learning approaches. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. To this end, natural language processing often borrows ideas from theoretical linguistics. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo or from subtitle text superimposed on an image.
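
A minimal OCR call using the open-source Tesseract engine through pytesseract (the image file name is illustrative and the Tesseract binary must be installed):

    # Convert an image of text into machine-encoded text with Tesseract.
    from PIL import Image
    import pytesseract

    text = pytesseract.image_to_string(Image.open("scanned_page.png"))
    print(text)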

Idiolect is an individual's unique use of language, including speech. This unique usage encompasses vocabulary, grammar, and pronunciation. This differs from a dialect, a common set of linguistic characteristics shared among a group of people.

Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005) we can distinguish between three different perspectives of text mining: information extraction, data mining, and a knowledge discovery in databases (KDD) process. Text mining usually involves the process of structuring the input text, deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling.

Sentiment analysis is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to voice of the customer materials such as reviews and survey responses, online and social media, and healthcare materials for applications that range from marketing to customer service to clinical medicine. With the rise of deep language models, such as RoBERTa, also more difficult data domains can be analyzed, e.g., news texts where authors typically express their opinion/sentiment less explicitly.
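
A small lexicon-based example using NLTK's VADER analyzer, one of many possible approaches (transformer models such as RoBERTa are increasingly common):

    # Rule/lexicon-based sentiment scoring with NLTK's VADER analyzer.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)    # one-time lexicon download
    sia = SentimentIntensityAnalyzer()
    print(sia.polarity_scores("The support team was fantastic, but shipping was slow."))
    # prints neg/neu/pos proportions and a compound score in [-1, 1]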

Search engine indexing is the collecting, parsing, and storing of data to facilitate fast and accurate information retrieval. Index design incorporates interdisciplinary concepts from linguistics, cognitive psychology, mathematics, informatics, and computer science. An alternate name for the process, in the context of search engines designed to find web pages on the Internet, is web indexing.
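
At its core, indexing builds an inverted index that maps each term to the documents containing it, as in this toy sketch:

    # Toy inverted index: term -> set of document ids.
    from collections import defaultdict

    docs = {
        1: "multilingual search improves information retrieval",
        2: "digital forensics tools search disk images",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    print(sorted(index["search"]))   # -> [1, 2]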

General Architecture for Text Engineering or GATE is a Java suite of tools originally developed at the University of Sheffield beginning in 1995 and now used worldwide by a wide community of scientists, companies, teachers and students for many natural language processing tasks, including information extraction in many languages.

Audio mining is a technique by which the content of an audio signal can be automatically analyzed and searched. It is most commonly used in the field of automatic speech recognition, where the analysis tries to identify any speech within the audio. The term ‘audio mining’ is sometimes used interchangeably with audio indexing, phonetic searching, phonetic indexing, speech indexing, audio analytics, speech analytics, word spotting, and information retrieval. Audio indexing, however, is mostly used to describe the pre-process of audio mining, in which the audio file is broken down into a searchable index of words.

Mobile device forensics is a branch of digital forensics relating to recovery of digital evidence or data from a mobile device under forensically sound conditions. The phrase mobile device usually refers to mobile phones; however, it can also relate to any digital device that has both internal memory and communication ability, including PDA devices, GPS devices and tablet computers.

Lexalytics, Inc. provides sentiment and intent analysis to an array of companies using SaaS and cloud-based technology. Salience 6, the engine behind Lexalytics, was built as an on-premises, multi-lingual text analysis engine. It is leased to other companies who use it to power filtering and reputation management programs. In July 2015, Lexalytics acquired Semantria to be used as a cloud option for its technology. In September 2021, Lexalytics was acquired by CX company InMoment.

General Sentiment, Inc. was a Long Island-based social media and news media analytics company.

The following is provided as an overview of and topical guide to databases:

The following outline is provided as an overview of and topical guide to natural-language processing:

NetOwl is a suite of multilingual text and identity analytics products that analyze big data in the form of text data – reports, web, social media, etc. – as well as structured entity data about people, organizations, places, and things.

Sketch Engine is a corpus manager and text analysis software developed by Lexical Computing CZ s.r.o. since 2003. Its purpose is to enable people studying language behaviour to search large text collections according to complex and linguistically motivated queries. Sketch Engine gained its name after one of the key features, word sketches: one-page, automatic, corpus-derived summaries of a word's grammatical and collocational behaviour. Currently, it supports and provides corpora in 90+ languages.

Apache Tika is a content detection and analysis framework, written in Java, stewarded at the Apache Software Foundation. It detects and extracts metadata and text from over a thousand different file types and, as well as providing a Java library, has server and command-line editions suitable for use from other programming languages.
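
A minimal example using the tika Python wrapper, which drives a local Tika server behind the scenes (the file name is illustrative):

    # Extract metadata and text from a document with Apache Tika.
    from tika import parser

    parsed = parser.from_file("report.pdf")
    print(parsed["metadata"])
    print((parsed["content"] or "")[:500])   # content may be None for some files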

Diffeo, Inc., is a software company that developed a collaborative intelligence text mining product for defense, intelligence and financial services customers.

Ontotext GraphDB is a graph database and knowledge discovery tool compliant with RDF and SPARQL and available as a high-availability cluster. Ontotext GraphDB is used in various European research projects.

Norconex Web Crawler is a free and open-source web crawling and web scraping software written in Java and released under an Apache License. It can export data to many repositories such as Apache Solr, Elasticsearch, Microsoft Azure Cognitive Search, Amazon CloudSearch, and more.

References

  1. "Base Linguistics".
  2. "Name Indexer - Name Match".
  3. "Entity Extractor - Entity Recognition".
  4. "Elasticsearch Plugins - Elasticsearch Enrichment".
  5. "Elasticsearch Plugins - Elasticsearch Enrichment".
  6. "Babel Street Closes Highly Successful 2022 with Rosette Acquisition". www.businesswire.com. 2023-01-10. Retrieved 2024-04-11.
  7. "Custom Solutions for Digital Forensics".
  8. "About".
  9. "Base Linguistics".
  10. "Rosette Text Analytics".
  11. "Uphold".
  12. "Société Générale".
  13. "Sensika".
  14. "A Game-Changing Threat Intelligence Platform".
  15. "Understand, Measure, and Act on Consumer Feedback".
  16. Erard, Michael (March 1, 2004). "Translation in the Era of Terror". Technology Review.
  17. Boyd, Clark (January 14, 2004). "Language tools for fight on terror". BBC News.
  18. Weiss, Todd R. (March 10, 2003). "Language analysis software aids U.S. Web search for terrorist activity". Computerworld.
  19. Profile in Boston Business Journal
  20. Hollmer, Mark (March 21, 2003). "Basis Technology turns its focus to government security". Boston Business Journal.
  21. Baker, Loren (November 30, 2004). "MSN Search Engine Uses Basis Technology for Natural Language Processing". Search Engine Journal.
  22. "Basis Technology Brings Deep Search to Salesforce".
  23. "About Us".