Global Autonomous Language Exploitation (GALE)


The Global Autonomous Language Exploitation (GALE) program was funded by DARPA starting in 2005 to develop technologies for automatically extracting information from multilingual newscasts, documents, and other forms of communication. The program encompassed three main challenges: automatic speech recognition, machine translation, and information retrieval.[1] The program focused on recognizing speech in Mandarin and Arabic and translating it into English.
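The cascade implied by those three challenges (speech recognition feeding machine translation, whose English output is indexed for retrieval) can be sketched with toy stand-in components. Everything below is a hypothetical illustration, not GALE software: the function names, the pretend "decoder", and the two-word dictionary are all assumptions made for the sketch.

```python
def recognize(audio_frames):
    # Stand-in ASR: pretend an acoustic model already decoded these frames
    # into source-language tokens.
    return " ".join(audio_frames)

def translate(source_text, dictionary):
    # Toy word-for-word "MT": look each token up, keep unknowns unchanged.
    return " ".join(dictionary.get(tok, tok) for tok in source_text.split())

def build_index(documents):
    # Minimal inverted index for retrieval over the translated output.
    inverted = {}
    for doc_id, text in enumerate(documents):
        for tok in set(text.split()):
            inverted.setdefault(tok, []).append(doc_id)
    return inverted

frames = ["nihao", "shijie"]              # pretend decoded Mandarin tokens
zh_en = {"nihao": "hello", "shijie": "world"}
english = translate(recognize(frames), zh_en)
print(english)                            # hello world
print(build_index([english])["world"])    # [0]
```

The point of the sketch is the composition: each stage consumes the previous stage's text output, which is why errors made early in such a cascade propagate downstream.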

Teams led by IBM, BBN (under John Makhoul), and SRI participated in the program.[2] Both ICSI and the University of Washington participated under the SRI team.

Related Research Articles

Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.

Machine translation, sometimes referred to by the abbreviation MT, is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another.

Natural language processing (NLP) is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the computer science, linguistics and computer engineering fields. The reverse process is speech synthesis.

Raytheon BBN: American research and development company

Raytheon BBN is an American research and development company, based next to Fresh Pond in Cambridge, Massachusetts, United States.

SRI International: American scientific research institute (founded 1946)

SRI International (SRI) is an American nonprofit scientific research institute and organization headquartered in Menlo Park, California. The trustees of Stanford University established SRI in 1946 as a center of innovation to support economic development in the region.

Google Translate: Multilingual neural machine translation service

Google Translate is a multilingual neural machine translation service developed by Google to translate text, documents, and websites from one language into another. It offers a website interface, a mobile app for Android and iOS, and an API that helps developers build browser extensions and software applications. As of 2022, Google Translate supports 133 languages at various levels. As of April 2016, it claimed over 500 million total users and more than 100 billion words translated daily; in May 2013 the company stated that it served over 200 million people daily.

AI winter: Period of reduced funding and interest in AI research

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.

Frederick Jelinek was a Czech-American researcher in information theory, automatic speech recognition, and natural language processing. He is well known for his oft-quoted statement, "Every time I fire a linguist, the performance of the speech recognizer goes up".


IraqComm

IraqComm is a speech translation system that performs two-way, speech-to-speech machine translation between English and colloquial Iraqi Arabic. SRI International in Menlo Park, California led development of the IraqComm system under the DARPA program Spoken Language Communication and Translation System for Tactical Use.

The Laboratoire d'Informatique de Grenoble is the largest informatics research laboratory in Grenoble, France. It was created on 1 January 2007 as the result of a merger of the 24 research teams of the former IMAG Institute and INRIA Rhône-Alpes.

Mobile translation is any electronic device or software application that provides audio translation. The concept includes any handheld electronic device specifically designed for audio translation, as well as any machine translation service or software application for handheld devices, including mobile telephones, Pocket PCs, and PDAs. Mobile translation gives handheld-device users instantaneous, non-mediated translation from one human language to another, usually for a service fee that is nevertheless significantly smaller than what a human translator charges.

Siri: Software-based personal assistant from Apple Inc.

Siri is a virtual assistant that is part of Apple Inc.'s iOS, iPadOS, watchOS, macOS, tvOS, audioOS, and visionOS operating systems. It uses voice queries, gesture-based control, focus-tracking, and a natural-language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Internet services. With continued use, it adapts to users' individual language usage, searches, and preferences, returning individualized results.

Speech translation is the process by which conversational spoken phrases are instantly translated and spoken aloud in a second language. This differs from phrase translation, in which the system only translates a fixed, finite set of phrases that have been manually entered into it. Speech translation technology enables speakers of different languages to communicate, and is therefore valuable for science, cross-cultural exchange, and global business.

Alex Waibel: American computer scientist

Alexander Waibel is a professor of computer science at Carnegie Mellon University and the Karlsruhe Institute of Technology. His research focuses on speech recognition and translation, and on human communication signals and systems. Waibel is known for the time delay neural network (TDNN), the first convolutional neural network (CNN) trained by gradient descent using the backpropagation algorithm, which he introduced in 1987 at ATR in Japan.
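As a minimal sketch of the idea (not Waibel's original architecture: the window size, feature dimensions, and tanh nonlinearity below are illustrative assumptions), a single time-delay layer is a 1-D convolution over the frame sequence, reusing one set of weights at every time step:

```python
import numpy as np

def tdnn_layer(frames, weights, bias, context=2):
    """One time-delay layer: each output frame is computed from a sliding
    window of 2*context + 1 input frames, with the SAME weights reused at
    every time step (the weight sharing that makes this convolutional)."""
    T, d = frames.shape
    win = 2 * context + 1
    out = np.empty((T - win + 1, bias.shape[0]))
    for t in range(T - win + 1):
        window = frames[t:t + win].reshape(-1)      # stack the context frames
        out[t] = np.tanh(weights @ window + bias)   # shared weights across time
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 13))           # 100 frames of 13-dim features (e.g. MFCCs)
W = rng.normal(size=(8, 5 * 13)) * 0.1   # 8 units, window of 5 frames
b = np.zeros(8)
y = tdnn_layer(x, W, b)
print(y.shape)  # (96, 8): the window shortens the sequence by win - 1 frames
```

Because the weights are shared across time, the layer learns features that are invariant to when in the utterance a speech event occurs, which was the TDNN's key contribution for phoneme recognition.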

Deep learning: Branch of machine learning

Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. The adjective "deep" refers to the use of multiple layers in the network. The methods used can be supervised, semi-supervised, or unsupervised.

The following outline is provided as an overview of and topical guide to natural-language processing:

The IARPA Babel program developed speech recognition technology for noisy telephone conversations. The main goal of the program was to improve the performance of keyword search on languages with very little transcribed data, i.e. low-resource languages. Data from 26 languages was collected, with certain languages held out as "surprise" languages to test the teams' ability to rapidly build a system for a new language.

Speechmatics

Speechmatics is a technology company based in Cambridge, England, which develops automatic speech recognition (ASR) software based on recurrent neural networks and statistical language modelling. Speechmatics was originally named Cantab Research Ltd when it was founded in 2006 by speech recognition specialist Tony Robinson.

References

  1. "GALE - Global Autonomous Language Exploitation". Retrieved 26 July 2017.
  2. Anderson, Nate (November 9, 2006). "Defense Department funds massive speech recognition and translation program". Retrieved 26 July 2017.