Lawrence Rabiner

Lawrence R. Rabiner
Born September 28, 1943
Nationality American
Alma mater Massachusetts Institute of Technology
Scientific career
Fields Electrical engineering
Institutions Rutgers University
University of California, Santa Barbara
Doctoral advisor Kenneth N. Stevens

Lawrence R. Rabiner (born September 28, 1943) is an American electrical engineer working in the fields of digital signal processing and speech processing, in particular digital signal processing for automatic speech recognition. He developed speech recognition systems for AT&T Corporation.

He holds a joint academic appointment between Rutgers University and the University of California, Santa Barbara.

Life

Rabiner was born in Brooklyn, New York, in 1943. During his studies at MIT, he participated in the cooperative program at AT&T Bell Laboratories, where he worked on digital circuit design and binaural hearing. After obtaining his PhD in 1967, he joined the research division of AT&T Bell Laboratories in Murray Hill, New Jersey, as a member of technical staff. He was promoted to supervisor in 1972, department head in 1985, director in 1990, and functional vice-president in 1995. He joined the newly created AT&T Labs - Research in 1996 as director of the Speech and Image Processing Services Research Laboratory, and was promoted to vice-president of research in 1998, succeeding Sandy Fraser; in that role he managed broad programs in communication, computing, and information sciences. He retired from AT&T in 2002 and joined the department of electrical engineering at Rutgers University, with a joint appointment at the University of California, Santa Barbara.

Rabiner pioneered a range of novel algorithms for digital filtering and digital spectrum analysis. The best known of these are the chirp z-transform (CZT) method of spectral analysis,[1] a family of optimal FIR (finite impulse response) digital filter design methods[2] based on linear programming[3] and Chebyshev approximation, and a class of decimation/interpolation methods for digital sampling-rate conversion.

In the area of speech processing, Rabiner has made contributions to pitch detection,[4] speech synthesis, and speech recognition. He built one of the first digital speech synthesizers able to convert arbitrary text to intelligible speech. In speech recognition, he was a major contributor to the statistical method of representing speech known as hidden Markov modeling (HMM), and he was the first to publish the scaling algorithm for the forward-backward method of training HMM recognizers. His research showed how to implement HMM systems based on either discrete or continuous density parameter distributions, and his tutorial paper on HMMs is highly cited.[5] Rabiner's research resulted in a series of speech recognition systems deployed by AT&T to automate a range of 'operator services' that had previously been carried out by live operators. One such system, the Voice Recognition Call Processing (VRCP) system, used a small-vocabulary recognizer (five active words) with word-spotting and barge-in capability, and saved AT&T several hundred million dollars annually.
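
To make the scaling idea concrete, the following is a minimal sketch (in Python with NumPy; an illustration, not Rabiner's published code) of the scaled forward recursion for a discrete-observation HMM. The forward vector is renormalized at every time step, and the per-step scale factors accumulate into the log-likelihood, so long observation sequences do not underflow.

```python
import numpy as np

def scaled_forward(A, B, pi, obs):
    """Scaled forward pass for a discrete-observation HMM.

    A   : (N, N) transition matrix, A[i, j] = P(next state j | state i)
    B   : (N, M) emission matrix, B[j, k] = P(symbol k | state j)
    pi  : (N,)   initial state distribution
    obs : length-T sequence of observed symbol indices

    Returns the scaled forward variables and log P(obs | model).
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    log_likelihood = 0.0

    # Initialization: alpha_1(i) = pi_i * b_i(o_1), then rescale to sum to 1.
    alpha[0] = pi * B[:, obs[0]]
    c = alpha[0].sum()
    alpha[0] /= c
    log_likelihood += np.log(c)

    # Induction: alpha_t(j) = [sum_i alpha_{t-1}(i) a_ij] * b_j(o_t).
    # Rescaling at each step keeps the recursion in range; the scale
    # factors c_t carry the likelihood: log P = sum_t log c_t.
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c = alpha[t].sum()
        alpha[t] /= c
        log_likelihood += np.log(c)

    return alpha, log_likelihood
```

A matching backward pass reuses the same scale factors, which is what keeps Baum-Welch reestimation of the model parameters numerically stable.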

Awards and recognitions

Rabiner's honors include the IEEE Emanuel R. Piore Award and the IEEE Jack S. Kilby Signal Processing Medal.[6][7]

Related Research Articles

Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves: longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals, or sound power level, is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.

Speech processing is the study of speech signals and of methods for processing them. The signals are usually handled in a digital representation, so speech processing can be regarded as a special case of digital signal processing applied to speech signals. Aspects of speech processing include the acquisition, manipulation, storage, transfer and output of speech signals. Speech processing tasks include speech recognition, speech synthesis, speaker diarization, speech enhancement, and speaker recognition.

Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry data, and scientific measurements. Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.

Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the computer science, linguistics and computer engineering fields. The reverse process is speech synthesis.

Linear predictive coding (LPC) is a method used mostly in audio signal processing and speech processing for representing the spectral envelope of a digital speech signal in compressed form, using the information of a linear predictive model.
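
As an illustration of the linear predictive model (a minimal Python/NumPy sketch of the classic autocorrelation method, not the implementation of any particular codec), the function below estimates the LPC coefficients of a single pre-windowed speech frame with the Levinson-Durbin recursion; the frame array and model order are assumed inputs.

```python
import numpy as np

def lpc(frame, order):
    """LPC coefficients of one windowed speech frame via the
    autocorrelation method and the Levinson-Durbin recursion.

    Returns a[0..order-1] such that s[n] is approximated by
    sum_{k=1..order} a[k-1] * s[n-k]; the all-pole filter built
    from these coefficients models the frame's spectral envelope.
    """
    n = len(frame)  # frame is assumed nonzero and already windowed
    # Autocorrelation lags r[0..order].
    r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])

    a = np.zeros(order)   # predictor coefficients, built up order by order
    err = r[0]            # prediction-error energy
    for i in range(order):
        # Reflection (PARCOR) coefficient for model order i + 1.
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err
        a[:i], a[i] = a[:i] - k * a[:i][::-1], k
        err *= 1.0 - k * k
    return a
```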

In signal processing, a digital filter is a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal. This is in contrast to the other major type of electronic filter, the analog filter, which is typically an electronic circuit operating on continuous-time analog signals.

Filter design is the process of designing a signal processing filter that satisfies a set of requirements, some of which may be conflicting. The purpose is to find a realization of the filter that meets each of the requirements to an acceptable degree.
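
As a simplified, concrete example of FIR filter design (a sketch using the textbook window method, not the optimal Chebyshev designs discussed above, which are usually computed with the Parks-McClellan exchange algorithm; the tap count and normalized cutoff are assumed parameters):

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    """Linear-phase lowpass FIR design by the window method.

    cutoff is the cutoff frequency as a fraction of the sampling
    rate (0 < cutoff < 0.5). The ideal sinc impulse response is
    truncated to num_taps samples and tapered by a Hamming window
    to reduce Gibbs ripple in the stopband.
    """
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = 2 * cutoff * np.sinc(2 * cutoff * n)  # ideal lowpass impulse response
    h *= np.hamming(num_taps)                 # taper the truncation
    return h / h.sum()                        # normalize to unity gain at DC

# Applying the filter is the FIR difference equation
# y[n] = sum_k h[k] * x[n-k], i.e. a convolution:
# y = np.convolve(x, lowpass_fir(31, 0.1), mode="same")
```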

The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events. This is done especially in the context of Markov information sources and hidden Markov models (HMM).
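
A minimal Python/NumPy sketch of the Viterbi recursion for a discrete-observation HMM follows (an illustration under assumed model parameters A, B, and pi; it works in log space to avoid underflow, so zero probabilities simply become -inf):

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Most likely hidden-state sequence for a discrete-observation HMM.

    A   : (N, N) transition matrix
    B   : (N, M) emission matrix
    pi  : (N,)   initial state distribution
    obs : length-T sequence of observed symbol indices

    Returns the Viterbi path and its log probability.
    """
    T, N = len(obs), len(pi)
    with np.errstate(divide="ignore"):       # allow log(0) -> -inf
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)

    delta = logpi + logB[:, obs[0]]          # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)        # backpointers to best predecessors

    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: best path into i, then i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]

    # Backtrack from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta.max()
```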

Ronald W. Schafer is an American electrical engineer notable for his contributions to digital signal processing.

Line spectral pairs (LSP) or line spectral frequencies (LSF) are used to represent linear prediction coefficients (LPC) for transmission over a channel. LSPs have several properties that make them superior to direct quantization of LPCs. For this reason, LSPs are very useful in speech coding.

Thomas Shi-Tao Huang was a Chinese-born American computer scientist, electrical engineer, and writer. He was a researcher and professor emeritus at the University of Illinois at Urbana-Champaign (UIUC). Huang was one of the leading figures in computer vision, pattern recognition and human computer interaction.

James Loton Flanagan was an American electrical engineer. He was Rutgers University's vice president for research until 2004. He was also director of Rutgers' Center for Advanced Information Processing and the Board of Governors Professor of Electrical and Computer Engineering. He is known for co-developing adaptive differential pulse-code modulation (ADPCM) with P. Cummiskey and Nikil Jayant at Bell Labs.

Ali Naci Akansu is a Turkish-American professor of electrical & computer engineering and scientist in applied mathematics.

Fumitada Itakura is a Japanese scientist. He did pioneering work in statistical signal processing, and its application to speech analysis, synthesis and coding, including the development of the linear predictive coding (LPC) and line spectral pairs (LSP) methods.

Rui José Pacheco de Figueiredo was an electrical engineer, mathematician, computer scientist, and a professor of electrical engineering, computer engineering, and applied mathematics at the University of California, Irvine.

Yasuo Matsuyama is a Japanese researcher in machine learning and human-aware information processing.

Palghat P. Vaidyanathan is the Kiyo and Eiko Tomiyasu Professor of Electrical Engineering at the California Institute of Technology, Pasadena, California, USA, where he teaches and leads research in the area of signal processing, especially digital signal processing (DSP), and its applications. He has authored four books, and authored or coauthored close to six hundred papers in various IEEE journals and conferences. He received his B.Tech. and M.Tech. degrees from the Institute of Radiophysics and Electronics, University of Calcutta, and a Ph.D. in electrical engineering from the University of California, Santa Barbara, in 1982.

V John Mathews is an Indian-American engineer and educator who is currently a Professor of Electrical Engineering and Computer Science (EECS) at the Oregon State University, United States.

Biing Hwang "Fred" Juang is a communication and information scientist, best known for his work in speech coding, speech recognition and acoustic signal processing. He joined Georgia Institute of Technology in 2002 as Motorola Foundation Chair Professor in the School of Electrical & Computer Engineering.

References

  1. The Chirp z‑Transform Algorithm, L. R. Rabiner, R. W. Schafer and C. M. Rader, IEEE Trans. Audio and Electroacoustics, Vol. AU‑17, No. 2, pp. 86–92, June 1969
  2. Techniques for Designing Finite-Duration Impulse-Response Digital Filters, L. R. Rabiner, IEEE Trans. on Communication Technology, Vol. COM-19, No. 2, pp. 188–195, April 1971
  3. Linear Program Design of Finite Impulse Response (FIR) Digital Filters, L. R. Rabiner, IEEE Trans. on Audio and Electroacoustics, Vol. AU‑20, No. 4, pp. 280–288, October 1972
  4. A Comparative Performance Study of Several Pitch Detection Algorithms, L. R. Rabiner, M. J. Cheng, A. E. Rosenberg and C. A. McGonegal, IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. ASSP-24, No. 5, pp. 399–418, October 1976
  5. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, L. R. Rabiner, Proceedings of the IEEE, Vol. 77, No. 2, pp. 257–286, February 1989
  6. "IEEE Emanuel R. Piore Award Recipients" (PDF). IEEE. Archived from the original (PDF) on November 24, 2010. Retrieved March 20, 2021.
  7. "IEEE Jack S. Kilby Signal Processing Medal Recipients" (PDF). IEEE . Retrieved February 27, 2011.