Adaptive noise cancelling is a signal processing technique that is highly effective in suppressing additive interference, or noise, corrupting a target signal received at the main or primary sensor. It applies in common situations where the interference is known and accessible but unavoidable, and where the target signal and the interference are unrelated, that is, uncorrelated. [1] [2] [3]
Conventional signal processing techniques pass the received signal, consisting of the target signal and the corrupting interference, through a filter that is designed to minimise the effect of the interference. The objective of optimal filtering is to maximise the signal-to-noise ratio [4] at the receiver output or to produce the optimal estimate of the target signal in the presence of interference (Wiener filter).
In contrast, adaptive noise cancelling relies on a second sensor, usually located near the source of the known interference, to obtain a relatively pure version of the interference, free from the target signal and other interference. This second version of the interference and the sensor receiving it are called the reference. [1] [2] [5]
The adaptive noise canceller consists of a self-adjusting adaptive filter [6] [7] that automatically transforms the reference signal into an optimal estimate of the interference corrupting the target signal and subtracts it from the received signal, thereby cancelling (or minimising) the effect of the interference at the noise canceller output. The adaptive filter adjusts itself continuously and automatically to minimise the residual interference affecting the target signal at its output. The power of the adaptive noise cancelling concept is that it requires no detailed a priori knowledge of the target signal or the interference. The adaptive algorithm that optimises the filter relies only on ongoing sampling of the reference input and the noise canceller output. [1] [2]
Adaptive noise cancelling can be effective even when the target signal and the interference are similar in nature and the interference is considerably stronger than the target signal. The key requirement is that the target signal and the interference are unrelated, that is uncorrelated. Meeting this requirement is normally not an issue in situations where adaptive noise cancelling is used. [1] [5]
The adaptive noise cancelling approach and the proof of the concept, the first striking demonstrations that general broadband interference can be eliminated from a target signal in practical situations, were set out and demonstrated during 1971–72 at the Adaptive Systems Laboratory at the Stanford School of Electrical Engineering by Professor Bernard Widrow and John Kaunitz, an Australian doctoral student, and documented in the latter's PhD dissertation, Adaptive Filtering of Broadband Signals as Applied to Noise Cancelling (1972). [1] The work was also published as a Stanford Electronics Labs report by Kaunitz and Widrow, Noise Cancelling Filter Study (1973). [5] The initial proof-of-concept demonstrations (see below) of eliminating broadband interference were carried out by means of a prototype hybrid adaptive signal processor designed and built by Kaunitz and described in a Stanford Electronics Labs report, General Purpose Hybrid Adaptive Signal Processor (1971). [7]
The adaptive noise canceller configuration diagram above shows the target signal s(t) present at the primary sensor and the interference or noise source n(t) and its manifestations np(t) and nr(t) at the primary and reference sensors respectively. [1] [2] [3] [5]
As np(t) and nr(t) are manifestations of the same interference source at different locations, they will usually differ significantly, and in an unpredictable fashion, owing to the different transmission paths through the environment to the two sensors. The reference nr(t) therefore cannot be used directly to cancel or reduce the interference np(t) corrupting the target signal. It must first be appropriately processed before it can be used to minimise, by subtraction, the overall effect of the interference at the noise canceller output.
An adaptive noise canceller is based on a self-optimising adaptive filter that has a variable transfer function shaped by adjustable parameters called weights. [3] [8] [9] Using an iterative adaptive algorithm, the adaptive filter transforms the reference nr(t) into an optimal estimate ñp(t) of the interference np(t) corrupting the target signal, which is then cancelled by subtraction whilst the target signal is left unchanged. So the output of the adaptive noise canceller shown above is:
z(t) = s(t) + np(t) − ñp(t). [1] [2] [5]
The power of the adaptive noise cancelling approach stems from the fact that the algorithm driving the iterative adjustment of weights in an adaptive filter is a simple, fully automatic iterative process that relies only on an ongoing sequence of sampling measurements of the noise canceller output and the reference r(t) = nr(t). For example, the LMS (least mean squares) algorithm in the context of the usual tapped-delay-line digital adaptive filter (see below) leads to:
Wk+1 = Wk - μzkRk = Wk - μzkNr,k
where the vector Wk represents the set of filter weights at the kth iteration and the vector Rk represents the last set of samples of the reference which are the weight inputs. The adaptation constant μ determines the rate of adaptation and the stability of the optimal configuration.
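The scheme described above can be sketched numerically. The following is a minimal illustrative sketch in Python with NumPy, with invented signal parameters rather than any original apparatus; the weight update is written in the common form Wk+1 = Wk + μzkRk, whose sign corresponds to treating the canceller output z as the error d − y:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                   # sample rate (Hz)
t = np.arange(2 * fs) / fs                  # two seconds of samples

s = np.sin(2 * np.pi * 5 * t)               # weak target signal
n = rng.standard_normal(t.size)             # broadband interference source
n_p = 3.0 * np.roll(n, 7)                   # interference at the primary sensor (delayed, amplified)
n_r = n                                     # reference: near-pure interference
primary = s + n_p                           # received (corrupted) signal

# Tapped-delay-line adaptive filter driven by the LMS rule
L_taps, mu = 32, 0.005
w = np.zeros(L_taps)
z = np.zeros(t.size)
for k in range(L_taps, t.size):
    R = n_r[k - L_taps:k][::-1]             # last L_taps reference samples
    n_hat = w @ R                           # estimate of n_p at step k
    z[k] = primary[k] - n_hat               # canceller output
    w += mu * z[k] * R                      # LMS weight update

# After convergence, z approximates s: the interference is largely removed
residual = z[-fs:] - s[-fs:]
print(np.mean(primary[-fs:] ** 2), np.mean(residual ** 2))
```

After adaptation the residual interference power at the output is a small fraction of the interference power at the primary sensor, even though the interference here is much stronger than the target signal.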
Apart from the availability of a suitable reference signal, the only other essential requirement is that the target signal and the corrupting noise source are unrelated, that is uncorrelated, so that the time average of s(t)·np(t + τ) is zero for all values of the delay τ. [1] [5]
Adaptive noise cancelling does not require detailed a priori knowledge of the interference or the target signal. However, the physical characteristics of the adaptive filter must be generally suitable for producing an adjustable frequency response or transfer function that will transform the reference signal nr(t) into a close estimate of the corrupting interference, ñp(t), through the iterative adjustment of the filter weights. [1] [5]
A 1975 paper published in the Proceedings of the IEEE by Widrow et al., Adaptive Noise Cancelling: Principles and Applications [2] , is now the generally referenced introductory publication in the field. This paper sets out the basic concepts of adaptive noise cancelling and summarises subsequent early work and applications. Earlier unpublished efforts to eliminate interference using a second input are also mentioned. [2] This paper remains the main reference for the adaptive noise cancelling concept and to date has been cited by over 2800 scientific papers and 380 patents. The topic is also covered by a number of more recent books. [3] [4]
Adaptive noise cancelling evolved from the pioneering work on adaptive systems, adaptive filtering and signal processing carried out at the Adaptive Systems Laboratory in the School of Electrical Engineering at Stanford University during the 1960s and 1970s under the leadership of Professor Bernard Widrow. [1] [2] Adaptive filters incorporate adjustable parameters called weights, controlled by iterative adaptive algorithms, to produce a desired transfer function.
Adaptive filters were originally conceived to produce the optimal filters prescribed by optimal filter theory during a training phase [6] by adjusting the filter weights according to an iterative adaptive algorithm such as the Least-Means-Square (LMS) algorithm. During the training phase, the filter is presented with a known input and a training signal called a desired response.
The filter weights are adjusted by the adaptive algorithm, which is designed to minimise the mean-squared error ξ, where the error e(t) is the difference between the adaptive filter output y(t) and the desired response d(t): [6] [7]
ξ = avg[e(t)2] = avg[(d(t) − X(t)TW)2]
where avg denotes time averaging, W represents the set of weights in vector notation and X(t) the set of weight inputs, so y(t) = X(t)TW.
The above expression shows ξ to be a quadratic function of the weight vector W, a multi-dimensional paraboloid with a single minimum that can be reached from any starting point by descending along the gradient. Gradient descent algorithms, such as the original least mean squares (LMS) algorithm, iteratively adjust the filter weights in small steps in the direction opposite to the gradient. In the case of the usual digital tapped-delay-line filter, the vector Xk is simply the last set of samples of the filter input x(t) and the LMS algorithm results in:
Wk+1 = Wk - μekXk
where k represents the kth step in the iteration process, μ is the adaptation constant that controls the rate and stability of the adaptation process, and ek and Xk are samples of the error and the input vector respectively.
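The training phase can be illustrated with a short numerical sketch. In this hypothetical Python example (the 2-tap "unknown" system and all parameters are invented for illustration), an LMS filter is trained against a known desired response and its weights converge to the system that generated it:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.7, -0.3])              # "unknown" 2-tap system generating the desired response
x = rng.standard_normal(5000)          # known training input
d = np.convolve(x, h)[: x.size]        # desired response d(t)

mu = 0.01                              # adaptation constant
w = np.zeros(2)                        # filter weights W
for k in range(1, x.size):
    X = np.array([x[k], x[k - 1]])     # current input vector Xk (tapped delay line)
    e = d[k] - w @ X                   # error against the desired response
    w += mu * e * X                    # LMS step down the quadratic error surface
print(w)                               # w converges towards h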
At the completion of the training phase, the adaptive filter has been optimised to produce the desired optimal transfer function. In its normal operating phase, such an optimised adaptive filter is then used passively to process received signals to improve the signal-to-noise ratio at the filter output under the assumed conditions. The theory and analysis of adaptive filters is largely based on this concept, model and terminology, and was developed before the introduction of the adaptive noise cancelling concept around 1970.
Adaptive noise cancelling [1] [2] [8] is an innovation that represents a fundamentally different configuration and application of adaptive filtering in those common situations where a reference signal is available.
Whilst the discussion of adaptive noise cancelling reflects the above terminology, it is clear from the above diagrams that the two configurations are equivalent, and the extensive adaptive filter theory previously developed therefore continues to apply in both situations.
In the adaptive noise cancelling configuration the received signal does not pass through the adaptive filter but instead becomes the 'desired response' for adaptation purposes. Since the adaptation process aims to minimise the error, and here the noise canceller output is the error, the adaptive filtering of the reference in effect strives to minimise the overall signal power at the noise canceller output.
This counterintuitive concept can be understood by keeping in mind that the target signal s(t) and the interference n(t) are uncorrelated. In aiming to minimise the error using a reference input related only to the interference, the best the adaptive filter can do in estimating the primary input (the desired response) is to generate the optimal estimate of the interference at the primary sensor, ñp(t). This minimises the overall effect of the interference at the noise canceller output whilst leaving the target signal s(t) unchanged.
The iterative adaptive algorithms used in adaptive filtering require only an ongoing sequence of sampling measurements at the weight inputs and the error. As digital adaptive filters are in effect tapped-delay-line filters, the operation of an adaptive noise canceller requires only an ongoing sequence of sampling measurements of the reference and the noise canceller output.
Adaptive filtering theory was developed in the domain of stochastic signals and statistical signal processing. However, the repetitive interference typical of noise cancelling applications, such as machinery noise or ECGs, is more appropriately treated as a bounded time-varying signal. A comprehensive analysis of adaptive filters applied to stochastic signals is presented by Widrow and Stearns in their book Adaptive Signal Processing [3] ; in this context averaging is interpreted as statistical expectation. An analysis of noise cancelling where s(t) and n(t) are assumed to be bounded deterministic signals, using time averaging, was presented by Kaunitz in his PhD dissertation. [1]
The first practical demonstration of the adaptive noise cancelling concept, typical of general practical situations involving broadband signals, was carried out in 1971 at the Stanford School of Electrical Engineering Adaptive Systems Laboratory by Kaunitz [1] using a prototype hybrid adaptive signal processor. [7] The ambient noise from the output of a microphone used by a speaker (the primary sensor) in a very noisy room was largely eliminated using adaptive noise cancellation.
A triangular signal, representing a typical broadband signal, emitted by a loudspeaker situated in the room, was used as the interfering noise source. A second microphone situated near this loudspeaker served to provide the reference input. The output of the noise canceller was channelled to the earphones of a listener outside the room. [1] [5]
The adaptive filter used in these experiments was a hybrid adaptive filter consisting of a preprocessor of 16 RC-filter circuits that provided the inputs to 16 digitally controlled analogue amplifiers acting as weights, whose outputs were summed by a linear combiner to produce the adaptive filter output. This linear combiner [3] was interfaced to a small HP 2116B digital computer that ran a version of the LMS algorithm. [7]
The experimental arrangement used by Kaunitz in the photo below shows the loudspeaker emitting the interference, the two microphones used to provide the primary and reference signals, the equipment rack, containing the hybrid adaptive filter and the digital interface, and the HP 2116B minicomputer on the right of the picture. (Only some of the equipment in the photo is part of the adaptive noise cancelling demonstration). [1] [5]
The noise canceller effectively reduced the ambient noise overlaying the speech signal from an initially almost overwhelming level to barely audible, and successfully re-adapted to changes in the frequency of the triangular noise source and to changes in the environment as people moved around the room. Recordings of these demonstrations are still available.
The second application of this original noise canceller was to process ECGs from heart transplant animals studied by the pioneering heart transplant team at the Stanford Medical Centre, led at the time by Dr Norman Shumway. Data was provided by Drs Eugene Dong and Walter B Cannon in the form of a multi-track magnetic tape recording of electrocardiograms. [1] [5]
In heart transplant recipients, the part of the heart stem that contains the recipient's own pacemaker (the sinoatrial or SA node) remains in place and continues to fire under the control of the brain and the nervous system. Normally this pacemaker sets the rate at which the heart beats by triggering the atrioventricular (AV) node, adjusting the heart rate to the demands of the body (see diagram below). In normal patients this represents a feedback loop, but in transplant patients the connection between the remnant SA node and the implanted AV node is not re-established, so the remnant pacemaker and the implanted heart beat independently, at differing rates.
The behaviour of the remnant pacemaker in the open-loop situation of a heart transplant patient was of considerable interest to researchers, but studying the ECG of the pacemaker (the p-wave) was difficult because the weaker signal from the pacemaker was swamped by the signal from the implanted heart, even when a bipolar catheter sensor (the primary sensor) was inserted through the jugular vein close to the SA node (see the third trace from top in the diagram below). The noise cancelling arrangement used to eliminate the effect of the donor heart from the ECG of the p-wave is shown below. [1] [5]
A reference signal was obtained through a limb-to-limb ECG of the patient (see top trace in the diagram below), which provided the main ECG of the donor heart largely free from the pacemaker p-wave. Adaptive noise cancelling was used to transform the reference into an estimate of the donor heart signal present at the primary input (see second trace from top), which was then subtracted to substantially reduce the effect of the donor heart on the primary ECG (third trace), providing a substantially cleaned-up version of the p-wave at the noise canceller output (see bottom trace), suitable for further study and analysis. [1] [3]
Adaptive noise cancelling techniques have found use in a wide range of situations, including the following:
In these situations, a suitable reference signal can be readily obtained by placing a sensor near the source of the interference or by other means (e.g. a version of the interfering ECG free from the target signal).
Adaptive noise cancelling can be effective even when the target signal and the interference are similar in nature and the interference is considerably stronger than the target signal. Apart from the availability of a suitable reference signal, the only other critical requirement is that the target signal and the corrupting noise source are unrelated, that is uncorrelated, so that the time average of s(t)·np(t + τ) is zero for all values of the delay τ. [1]
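The uncorrelatedness requirement is easy to check numerically. In this hypothetical Python sketch (both signals are invented for illustration), the time-averaged product of an unrelated signal and noise pair stays near zero for every delay:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100_000)
s = np.sin(2 * np.pi * 0.01 * t)        # target signal
n = rng.standard_normal(t.size)         # unrelated interference

# Time average of s(t) * n(t + tau) is near zero for every delay tau
for tau in (0, 5, 50):
    print(tau, np.mean(s * np.roll(n, -tau)))
```

Each printed average shrinks towards zero as the averaging window grows, which is what allows the adaptive filter to suppress the interference without touching the target signal.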
Adaptive noise cancelling does not require detailed a priori knowledge of the interference or the target signal. However, the characteristics of the adaptive filter must be generally suitable for producing an adjustable frequency response or transfer function able to transform the reference signal nr(t) into an estimate of the corrupting interference, ñp(t), through the iterative adjustment of the filter weights. The interference in the above examples is usually an irregular repetitive signal. Although the theory of adaptive filtering does not rely on this assumption, in practice this characteristic is very helpful, as it limits the time and phase shifts between the versions of the interference at the primary and reference sensors for which the adaptive filter must compensate. [1] [2]
Adaptive noise cancelling is not to be confused with active noise control. These terms refer to different areas of scientific investigation in two different disciplines, and the term noise has a different meaning in the two contexts.
Active noise control is a method in acoustics to reduce unwanted sound in physical spaces and an area of research that preceded the development of adaptive noise cancelling. The term noise is used here with its common meaning of unwanted audible sound.
As explained above, adaptive noise cancelling is a technique used in communication and control to reduce the effect of additive interference corrupting an electric or electromagnetic target signal. In this context noise refers to such interference and the two terms are used interchangeably. In the book by Widrow and Stearns [3] the relevant chapter is in fact entitled "Adaptive Interference Cancelling". However, adaptive noise cancelling is the term that prevailed and is now in common usage.
After its development in signal processing, the adaptive noise cancelling approach was also adopted in active noise control, for example in some (but not all) noise-cancelling headphones. So the two areas in fact significantly intersect. Nevertheless, active noise control is just one of the many applications of adaptive noise cancelling and, conversely, adaptive noise cancelling is just one technique used in the field of active noise control.
In signal processing, a digital filter is a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal. This is in contrast to the other major type of electronic filter, the analog filter, which is typically an electronic circuit operating on continuous-time analog signals.
An adaptive filter is a system with a linear filter that has a transfer function controlled by variable parameters and a means to adjust those parameters according to an optimization algorithm. Because of the complexity of the optimization algorithms, almost all adaptive filters are digital filters. Adaptive filters are required for some applications because some parameters of the desired processing operation are not known in advance or are changing. The closed loop adaptive filter uses feedback in the form of an error signal to refine its transfer function.
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
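Quantization by rounding can be shown in a couple of lines. The following is a minimal illustrative sketch (the function name and values are invented), mapping many possible inputs onto a small set of output levels:

```python
def quantize(x, step):
    """Map x to the nearest multiple of `step` (a uniform quantizer)."""
    return step * round(x / step)

samples = [0.12, -0.49, 0.87, 0.5]
print([quantize(v, 0.25) for v in samples])   # prints [0.0, -0.5, 0.75, 0.5]
```

Every real-valued input lands on one of the countable multiples of the step size, which is exactly the many-to-few mapping described above.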
In computer data storage, partial-response maximum-likelihood (PRML) is a method for recovering the digital data from the weak analog read-back signal picked up by the head of a magnetic disk drive or tape drive. PRML was introduced to recover data more reliably or at a greater areal-density than earlier simpler schemes such as peak-detection. These advances are important because most of the digital data in the world is stored using magnetic storage on hard disk or tape drives.
The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny also produced a computational theory of edge detection explaining why the technique works.
A sensor array is a group of sensors, usually deployed in a certain geometric pattern, used for collecting and processing electromagnetic or acoustic signals. The advantage of using a sensor array over a single sensor is that an array adds new dimensions to the observation, helping to estimate more parameters and improve estimation performance. For example, an array of radio antenna elements used for beamforming can increase antenna gain in the direction of the signal while decreasing the gain in other directions, i.e., increasing the signal-to-noise ratio (SNR) by amplifying the signal coherently. Another application of a sensor array is to estimate the direction of arrival of impinging electromagnetic waves; the related processing method is called array signal processing. A third example is chemical sensor arrays, which utilize multiple chemical sensors for fingerprint detection in complex mixtures or sensing environments. Applications of array signal processing include radar/sonar, wireless communications, seismology, machine condition monitoring, astronomical observations, fault diagnosis, etc.
Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.
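A narrowband delay-and-sum beamformer can be sketched numerically. This illustrative Python example (function names and array geometry are invented for the sketch) phase-aligns the elements of a uniform linear array so that a wave from the look direction adds constructively while off-axis waves largely cancel:

```python
import numpy as np

def steering_vector(n_elems, spacing_wl, theta):
    """Per-element phase factors for a plane wave arriving from angle theta
    (radians), for a uniform linear array with spacing given in wavelengths."""
    phase = 2 * np.pi * spacing_wl * np.arange(n_elems) * np.sin(theta)
    return np.exp(1j * phase)

n_elems, spacing = 8, 0.5                     # 8 elements, half-wavelength spacing
look = np.deg2rad(20)                         # steer the beam towards 20 degrees
weights = steering_vector(n_elems, spacing, look) / n_elems

def response(theta):
    """Array gain magnitude towards direction theta."""
    return abs(weights.conj() @ steering_vector(n_elems, spacing, theta))

print(response(look), response(np.deg2rad(-50)))  # high on-target, low off-target
```

The on-target gain is unity while arrivals from other angles experience destructive interference, which is the directivity improvement described above.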
Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal. It is a stochastic gradient descent method in that the filter is only adapted based on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff, based on their research in single-layer neural networks (ADALINE). Specifically, they used gradient descent to train ADALINE to recognize patterns, and called the algorithm "delta rule". They then applied the rule to filters, resulting in the LMS algorithm.
The stationary wavelet transform (SWT) is a wavelet transform algorithm designed to overcome the lack of translation-invariance of the discrete wavelet transform (DWT). Translation-invariance is achieved by removing the downsamplers and upsamplers in the DWT and upsampling the filter coefficients by a factor of 2^(j−1) in the jth level of the algorithm. The SWT is an inherently redundant scheme, as the output of each level of the SWT contains the same number of samples as the input, so for a decomposition of N levels there is a redundancy of N in the wavelet coefficients. The algorithm is more famously known as the "algorithme à trous" in French, which refers to inserting zeros in the filters. It was introduced by Holschneider et al.
An adaptive beamformer is a system that performs adaptive spatial signal processing with an array of transmitters or receivers. The signals are combined in a manner which increases the signal strength to/from a chosen direction. Signals to/from other directions are combined in a benign or destructive manner, resulting in degradation of the signal to/from the undesired direction. This technique is used in both radio frequency and acoustic arrays, and provides for directional sensitivity without physically moving an array of receivers or transmitters.
In electrical engineering and applied mathematics, blind deconvolution is deconvolution without explicit knowledge of the impulse response function used in the convolution. This is usually achieved by making appropriate assumptions of the input to estimate the impulse response by analyzing the output. Blind deconvolution is not solvable without making assumptions on input and impulse response. Most of the algorithms to solve this problem are based on assumption that both input and impulse response live in respective known subspaces. However, blind deconvolution remains a very challenging non-convex optimization problem even with this assumption.
Bernard Widrow is a U.S. professor of electrical engineering at Stanford University. He is the co-inventor of the Widrow–Hoff least mean squares (LMS) adaptive algorithm with his then doctoral student Ted Hoff. The LMS algorithm led to the ADALINE and MADALINE artificial neural networks and to the backpropagation technique. He made other fundamental contributions to the development of signal processing in the fields of geophysics, adaptive antennas, and adaptive filtering.
Adaptive feedback cancellation is a common method of cancelling audio feedback in a variety of electro-acoustic systems such as digital hearing aids. The time-varying acoustic feedback leakage paths can only be eliminated with adaptive feedback cancellation. When an electro-acoustic system with an adaptive feedback canceller is presented with a correlated input signal, a recurrent distortion artifact known as entrainment is generated. There is a difference between system identification and feedback cancellation.
Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems. It involves adaptive array processing algorithms to aid in target detection. Radar signal processing benefits from STAP in areas where interference is a problem. Through careful application of STAP, it is possible to achieve order-of-magnitude sensitivity improvements in target detection.
ADALINE is an early single-layer artificial neural network and the name of the physical device that implemented this network. It was developed by professor Bernard Widrow and his doctoral student Ted Hoff at Stanford University in 1960. It is based on the perceptron. It consists of a weight, a bias and a summation function. The weights and biases were implemented by rheostats, and later, memistors.
Precoding is a generalization of beamforming to support multi-stream transmission in multi-antenna wireless communications. In conventional single-stream beamforming, the same signal is emitted from each of the transmit antennas with appropriate weighting such that the signal power is maximized at the receiver output. When the receiver has multiple antennas, single-stream beamforming cannot simultaneously maximize the signal level at all of the receive antennas. In order to maximize the throughput in multiple receive antenna systems, multi-stream transmission is generally required.
Noise-Predictive Maximum-Likelihood (NPML) is a class of digital signal-processing methods suitable for magnetic data storage systems that operate at high linear recording densities. It is used for retrieval of data recorded on magnetic media.
In signal processing, a kernel adaptive filter is a type of nonlinear adaptive filter. An adaptive filter is a filter that adapts its transfer function to changes in signal properties over time by minimizing an error or loss function that characterizes how far the filter deviates from ideal behavior. The adaptation process is based on learning from a sequence of signal samples and is thus an online algorithm. A nonlinear adaptive filter is one in which the transfer function is nonlinear.
A two-dimensional (2D) adaptive filter is very much like a one-dimensional adaptive filter in that it is a linear system whose parameters are adaptively updated throughout the process, according to some optimization approach. The main difference between 1D and 2D adaptive filters is that the former usually take as inputs signals that vary with time, which implies causality constraints, while the latter handle signals with two dimensions, such as x-y coordinates in the space domain, which are usually non-causal. Moreover, just like 1D filters, most 2D adaptive filters are digital filters, because of the complex and iterative nature of the algorithms.
A velocity filter removes interfering signals by exploiting the difference between the travelling velocities of the desired seismic waveform and undesired interfering signals.