Noise-Predictive Maximum-Likelihood (NPML) is a class of digital signal-processing methods suitable for magnetic data storage systems that operate at high linear recording densities. It is used for retrieval of data recorded on magnetic media.
Data are read back by the read head, producing a weak and noisy analog signal. NPML aims at minimizing the influence of noise in the detection process. Successfully applied, it allows recording data at higher areal densities. Alternatives include peak detection, partial-response maximum-likelihood (PRML), and extended partial-response maximum likelihood (EPRML) detection. [1]
Although advances in head and media technologies historically have been the driving forces behind increases in areal recording density,[citation needed] digital signal processing and coding established themselves as cost-efficient techniques for enabling additional increases in areal density while preserving reliability. [1] Accordingly, the deployment of sophisticated detection schemes based on the concept of noise prediction is of paramount importance in the disk-drive industry.
The NPML family of sequence-estimation data detectors arises by embedding a noise prediction/whitening process [2] [3] [4] into the branch-metric computation of the Viterbi algorithm. The latter is a data detection technique for communication channels that exhibit intersymbol interference (ISI) with finite memory.
Reliable operation of the process is achieved by using hypothesized decisions associated with the branches of the trellis on which the Viterbi algorithm operates as well as tentative decisions corresponding to the path memory associated with each trellis state. NPML detectors can thus be viewed as reduced-state sequence-estimation detectors offering a range of implementation complexities. The complexity is governed by the number of detector states, which is equal to $2^K$, $0 \le K \le M$, with $M$ denoting the maximum number of controlled ISI terms introduced by the combination of a partial-response shaping equalizer and the noise predictor. By judiciously choosing $K$, practical NPML detectors can be devised that improve performance over PRML and EPRML detectors in terms of error rate and/or linear recording density. [2] [3] [4]
In the absence of noise enhancement or noise correlation, the PRML sequence detector performs maximum-likelihood sequence estimation. As the operating point moves to higher linear recording densities, optimality declines with linear partial-response (PR) equalization, which enhances noise and renders it correlated. A close match between the desired target polynomial and the physical channel can minimize losses. An effective way to achieve near-optimal performance independently of the operating point (in terms of linear recording density) and the noise conditions is via noise prediction. In particular, the power of a stationary noise sequence $n(D)$, where the operator $D$ corresponds to a delay of one bit interval, at the output of a PR equalizer can be minimized by using an infinitely long predictor. A linear predictor with coefficients $p_1, p_2, \ldots$ operating on the noise sequence $n(D)$ produces the estimated noise sequence $\hat{n}(D) = n(D)\,P(D)$. Then, the prediction-error sequence $e(D)$ given by

$e(D) = n(D) - \hat{n}(D) = n(D)\,[1 - P(D)]$

is white with minimum power. The optimum predictor

$P(D) = p_1 D + p_2 D^2 + \cdots,$

or the optimum noise-whitening filter

$W(D) = 1 - P(D),$

is the one that minimizes the prediction-error sequence $e(D)$ in a mean-square sense. [2] [3] [4] [5] [6]
An infinitely long predictor filter would lead to a sequence detector structure that requires an unbounded number of states. Therefore, finite-length predictors that render the noise at the input of the sequence detector approximately white are of interest.
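As a concrete illustration of this step, the sketch below (a minimal example, not taken from the cited references) designs a finite-length predictor by solving the normal equations built from the autocorrelation of the equalized noise; the coloring filter, tap count, and sample size are placeholder assumptions.

```python
# Minimal sketch: fit an N-tap linear noise predictor from noise samples and
# verify that the whitening filter W(D) = 1 - p_1 D - ... - p_N D^N reduces the
# noise power. The coloring filter below is a hypothetical stand-in for the
# correlated noise at the output of a PR equalizer.
import numpy as np

def design_predictor(noise, N):
    """Solve the normal equations R p = r for an N-tap linear predictor."""
    L = len(noise)
    # Biased autocorrelation estimates r[k] = E{n_i n_{i-k}}
    r = np.array([np.dot(noise[k:], noise[:L - k]) / L for k in range(N + 1)])
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])  # Toeplitz
    p = np.linalg.solve(R, r[1:])               # predictor coefficients p_1..p_N
    w = np.concatenate(([1.0], -p))             # whitening filter taps
    return p, w

rng = np.random.default_rng(0)
white = rng.standard_normal(100_000)
colored = np.convolve(white, [1.0, 0.6, 0.3], mode="same")  # hypothetical colored noise
p, w = design_predictor(colored, N=4)
residual = np.convolve(colored, w, mode="valid")            # prediction-error sequence
print("predictor taps:", np.round(p, 3))
print("noise power before/after whitening:", colored.var(), residual.var())
```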
Generalized PR shaping polynomials of the form

$G(D) = F(D)\,W(D),$

where $F(D)$ is a polynomial of order $S$ and the noise-whitening filter $W(D)$ has a finite order of $N$, give rise to NPML systems when combined with sequence detection. [2] [3] [4] [5] [6] In this case, the effective memory of the system is limited to

$K = S + N,$

requiring a $2^K$-state NPML detector if no reduced-state detection is employed.
As an example, if

$F(D) = 1 - D^2,$

then this corresponds to the classical PR4 signal shaping. Using a whitening filter $W(D) = 1 - p_1 D - p_2 D^2$, the generalized PR target becomes

$G(D) = (1 - D^2)(1 - p_1 D - p_2 D^2),$

and the effective ISI memory of the system is limited to

$K = S + N = 4$

symbols. In this case, the full-state NPML detector performs maximum-likelihood sequence estimation (MLSE) using the $2^4 = 16$-state trellis corresponding to $G(D)$.
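The following short sketch (with placeholder predictor values, not a quoted design) simply multiplies the two polynomials above to obtain the generalized PR target and the resulting number of full trellis states.

```python
# Minimal sketch: form the generalized PR target G(D) = F(D) W(D) for the PR4
# example; p1 and p2 are hypothetical predictor coefficients.
import numpy as np

F = np.array([1.0, 0.0, -1.0])      # F(D) = 1 - D^2 (PR4), order S = 2
p1, p2 = 0.5, 0.25                  # placeholder predictor coefficients
W = np.array([1.0, -p1, -p2])       # W(D) = 1 - p1 D - p2 D^2, order N = 2

G = np.convolve(F, W)               # G(D) = F(D) W(D), order K = S + N = 4
K = len(G) - 1
print("generalized PR target taps:", G)
print("effective ISI memory K =", K, "-> full-state trellis has", 2 ** K, "states")
```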
The NPML detector is efficiently implemented via the Viterbi algorithm, which recursively computes the estimated data sequence [2] [3] [4] [5] [6]

$\hat{a}(D) = \arg\min_{a(D)} \left\| z(D) - a(D)\,F(D)\,W(D) \right\|^2,$

where $a(D)$ denotes the binary sequence of recorded data bits and $z(D)$ the signal sequence at the output of the noise-whitening filter $W(D)$.
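A toy implementation of this search is sketched below; it is a generic squared-error Viterbi detector over the target $G(D) = F(D)\,W(D)$ rather than an actual read-channel design, and the target taps, noise level, and preamble handling are illustrative assumptions.

```python
# Minimal sketch: Viterbi MLSE over a generalized PR target, choosing the bit
# sequence whose noiseless output through G(D) is closest (in squared error) to
# the whitened observation z. Assumes bipolar (+/-1) recorded symbols and a known
# all-zeros preamble; all numbers below are placeholders.
import numpy as np
from itertools import product

def viterbi_mlse(z, g):
    """Return the bit sequence minimizing sum_i (z_i - sum_k g_k x_{i-k})^2."""
    K = len(g) - 1                             # ISI memory of the target
    states = list(product((0, 1), repeat=K))   # state = (a_{i-1}, ..., a_{i-K})
    cost = {s: (0.0 if s == (0,) * K else np.inf) for s in states}
    path = {s: [] for s in states}
    for zi in z:
        new_cost, new_path = {}, {}
        for s in states:
            for a in (0, 1):
                bits = (a,) + s                        # a_i, a_{i-1}, ..., a_{i-K}
                y = sum(gk * (2 * b - 1) for gk, b in zip(g, bits))
                m = cost[s] + (zi - y) ** 2            # path metric + branch metric
                ns = bits[:K]                          # next state drops oldest bit
                if ns not in new_cost or m < new_cost[ns]:
                    new_cost[ns], new_path[ns] = m, path[s] + [a]
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]

# Usage: detect a short random sequence through the PR4-based target from above.
rng = np.random.default_rng(1)
g = np.convolve([1, 0, -1], [1, -0.5, -0.25])            # G(D) = F(D) W(D)
K = len(g) - 1
bits = rng.integers(0, 2, 30)
padded = np.concatenate((np.zeros(K, dtype=int), bits))  # all-zeros preamble
clean = np.convolve(2 * padded - 1, g)[K:K + len(bits)]  # noiseless channel output
z = clean + 0.1 * rng.standard_normal(len(bits))         # whitened, noisy samples
print("detected == recorded:", list(viterbi_mlse(z, g)) == list(bits))
```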
Reduced-state sequence-detection schemes [7] [8] [9] have been studied for application in the magnetic-recording channel; see [2] [4] and the references therein. For example, the NPML detectors with generalized PR target polynomials $G(D) = F(D)\,W(D)$
can be viewed as a family of reduced-state detectors with embedded feedback. These detectors exist in a form in which the decision-feedback path can be realized by simple table look-up operations, whereby the contents of these tables can be updated as a function of the operating conditions. [2] Analytical and experimental studies have shown that a judicious tradeoff between performance and state complexity leads to practical schemes with considerable performance gains. Thus, reduced-state approaches are promising for increasing linear density.
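As a rough illustration of the table-lookup idea (an assumption-laden sketch, not the implementation described in the references), the decision-feedback contribution of the ISI terms not represented in the truncated state can be precomputed for every pattern of past tentative decisions:

```python
# Minimal sketch: precompute the decision-feedback part of a reduced-state branch
# metric as a lookup table. The target, the number of state bits, and the bipolar
# mapping are illustrative assumptions.
import numpy as np
from itertools import product

g = np.convolve([1, 0, -1], [1, -0.5, -0.25])  # hypothetical target, memory K = 4
K_state = 2                                    # ISI terms kept in the trellis state
tail = g[K_state + 1:]                         # taps handled by decision feedback

# Table indexed by the (K - K_state) older decisions taken from the path memory.
feedback = {bits: float(np.dot(tail, [2 * b - 1 for b in bits]))
            for bits in product((0, 1), repeat=len(tail))}

# A branch metric for input bit a and truncated state s would then subtract
# feedback[older_decisions(s)] from the observation before squaring; the table
# can be refilled whenever the predictor is re-trained for new operating conditions.
print(feedback)
```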
Depending on the surface roughness and particle size, particulate media might exhibit nonstationary data-dependent transition or medium noise rather than colored stationary medium noise. Improvements in the quality of the readback head as well as the incorporation of low-noise preamplifiers may render the data-dependent medium noise a significant component of the total noise affecting performance. Because medium noise is correlated and data-dependent, information about noise and data patterns in past samples can provide information about noise in other samples. Thus, the concept of noise prediction for stationary Gaussian noise sources developed in [2] [6] can be naturally extended to the case where noise characteristics depend highly on local data patterns. [1] [10] [11] [12]
By modeling the data-dependent noise as a finite-order Markov process, the optimum MLSE for channels with ISI has been derived. [11] In particular, when the data-dependent noise is conditionally Gauss–Markov, the branch metrics can be computed from the conditional second-order statistics of the noise process. In other words, the optimum MLSE can be implemented efficiently by using the Viterbi algorithm, in which the branch-metric computation involves data-dependent noise prediction. [11] Because the predictor coefficients and prediction error both depend on the local data pattern, the resulting structure has been called a data-dependent NPML detector. [1] [10] [12] Reduced-state sequence-detection schemes can be applied to data-dependent NPML, reducing implementation complexity.
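The sketch below is a hedged illustration of such a pattern-dependent branch metric under the conditionally Gauss–Markov assumption: each local data pattern gets its own predictor taps and prediction-error variance (the values shown are placeholders), and the metric is proportional, up to an additive constant, to the negative log-likelihood of the prediction error.

```python
# Minimal sketch: data-dependent noise-predictive branch metric. The predictor
# taps and error variances per data pattern are hypothetical placeholders; in a
# detector they would be estimated from the channel for each local pattern.
import numpy as np

def branch_metric(z_i, y_i, past_noise, pattern, predictors, variances):
    """Metric for a trellis branch hypothesizing the local data `pattern`."""
    p = predictors[pattern]                 # pattern-dependent predictor taps
    var = variances[pattern]                # pattern-dependent error variance
    noise_estimate = np.dot(p, past_noise)  # predicted noise from past samples
    e = (z_i - y_i) - noise_estimate        # prediction error for this branch
    return np.log(var) + e ** 2 / var       # ~ -2 log-likelihood, constants dropped

# Hypothetical statistics for two local data patterns; past_noise would come from
# the path memory associated with the branch's starting state.
predictors = {(0, 1): np.array([0.6, 0.2]), (1, 1): np.array([0.3, 0.1])}
variances = {(0, 1): 0.04, (1, 1): 0.02}
print(branch_metric(z_i=0.9, y_i=1.0, past_noise=np.array([0.05, -0.02]),
                    pattern=(0, 1), predictors=predictors, variances=variances))
```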
NPML and its various forms represent the core read-channel and detection technology used in recording systems employing advanced error-correcting codes that lend themselves to soft decoding, such as low-density parity-check (LDPC) codes. For example, if noise-predictive detection is performed in conjunction with a maximum a posteriori (MAP) detection algorithm such as the BCJR algorithm, [13] then NPML and NPML-like detection allow the computation of soft reliability information on individual code symbols, while retaining all the performance advantages associated with noise-predictive techniques. The soft information generated in this manner is used for soft decoding of the error-correcting code. Moreover, the soft information computed by the decoder can be fed back again to the soft detector to improve detection performance. In this way it is possible to iteratively improve the error-rate performance at the decoder output in successive soft detection/decoding rounds.
Beginning in the 1980s, several digital signal-processing and coding techniques were introduced into disk drives to improve the drive error-rate performance for operation at higher areal densities and to reduce manufacturing and servicing costs. In the early 1990s, partial-response class-4 (PR4) signal shaping [14] [15] [16] in conjunction with maximum-likelihood sequence detection, eventually known as the PRML technique, [14] [15] [16] replaced the peak-detection systems that used run-length-limited (RLL) (d,k)-constrained coding. This development paved the way for future applications of advanced coding and signal-processing techniques [1] in magnetic data storage.
NPML detection was first described in 1996 [4] [17] and eventually found wide application in HDD read-channel design. The “noise-predictive” concept was later extended to handle autoregressive (AR) noise processes and autoregressive moving-average (ARMA) stationary noise processes. [2] The concept was also extended to include a variety of nonstationary noise sources, such as head noise, transition jitter, and media noise, [10] [11] [12] and it was applied to various post-processing schemes. [18] [19] [20] Noise prediction became an integral part of the metric computation in a wide variety of iterative detection/decoding schemes.
The pioneering research work on partial-response maximum-likelihood (PRML) and noise-predictive maximum-likelihood (NPML) detection and its impact on the industry were recognized in 2005 [21] by the European Eduard Rhein Foundation Technology Award. [22]
NPML technology was first introduced into IBM's line of HDD products in the late 1990s. [23] Eventually, noise-predictive detection became a de facto standard and, in its various instantiations, became the core technology of the read-channel module in HDD systems. [24] [25]
In 2010, NPML was introduced into IBM's Linear Tape-Open (LTO) tape drive products, and in 2011 into IBM's enterprise-class tape drives.[citation needed]