Reassignment method

Reassigned spectral surface for the onset of an acoustic bass tone having a sharp pluck and a fundamental frequency of approximately 73.4 Hz. Sharp spectral ridges representing the harmonics are evident, as is the abrupt onset of the tone. The spectrogram was computed using a 65.7 ms Kaiser window with a shaping parameter of 12.

The method of reassignment is a technique for sharpening a time-frequency representation (e.g., the spectrogram or the short-time Fourier transform) by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. The method was independently introduced by several parties under various names, including method of reassignment, remapping, time-frequency reassignment, and modified moving-window method. [1] The method of reassignment sharpens blurry time-frequency data by relocating the data according to local estimates of instantaneous frequency and group delay. This mapping to reassigned time-frequency coordinates is very precise for signals that are separable in time and frequency with respect to the analysis window.

Introduction

Many signals of interest have a distribution of energy that varies in time and frequency. For example, any sound signal having a beginning or an end has an energy distribution that varies in time, and most sounds exhibit considerable variation in both time and frequency over their duration. Time-frequency representations are commonly used to analyze or characterize such signals. They map the one-dimensional time-domain signal into a two-dimensional function of time and frequency. A time-frequency representation describes the variation of spectral energy distribution over time, much as a musical score describes the variation of musical pitch over time.

In audio signal analysis, the spectrogram is the most commonly used time-frequency representation, probably because it is well understood, and immune to so-called "cross-terms" that sometimes make other time-frequency representations difficult to interpret. But the windowing operation required in spectrogram computation introduces an unsavory tradeoff between time resolution and frequency resolution, so spectrograms provide a time-frequency representation that is blurred in time, in frequency, or in both dimensions. The method of time-frequency reassignment is a technique for refocussing time-frequency data in a blurred representation like the spectrogram by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. [2]

The spectrogram as a time-frequency representation

One of the best-known time-frequency representations is the spectrogram, defined as the squared magnitude of the short-time Fourier transform. Though the short-time phase spectrum is known to contain important temporal information about the signal, this information is difficult to interpret, so typically, only the short-time magnitude spectrum is considered in short-time spectral analysis. [2]
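
As a concrete illustration of this definition, the following minimal sketch (an illustration added here, not part of the cited sources; it assumes NumPy and SciPy are available, and the test signal is invented for the example) computes a spectrogram as the squared magnitude of the short-time Fourier transform; the short-time phase it discards is exactly the quantity that the reassignment method exploits.

```python
# Minimal sketch (assumes NumPy/SciPy): the spectrogram as the squared magnitude
# of the short-time Fourier transform. The short-time phase, discarded here, is
# the quantity the reassignment method uses to sharpen the representation.
import numpy as np
from scipy.signal import stft

fs = 8000.0                                # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440.0 * t)          # a simple test tone

f, frames, X = stft(x, fs=fs, window='hann', nperseg=256)
spectrogram = np.abs(X) ** 2               # squared magnitude: the spectrogram
phase = np.angle(X)                        # short-time phase (normally ignored)

print(spectrogram.shape, phase.shape)      # (frequency bins, frame times)
```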

As a time-frequency representation, the spectrogram has relatively poor resolution. Time and frequency resolution are governed by the choice of analysis window and greater concentration in one domain is accompanied by greater smearing in the other. [2]

A time-frequency representation having improved resolution, relative to the spectrogram, is the Wigner–Ville distribution, which may be interpreted as a short-time Fourier transform with a window function that is perfectly matched to the signal. The Wigner–Ville distribution is highly concentrated in time and frequency, but it is also highly nonlinear and non-local. Consequently, this distribution is very sensitive to noise, and generates cross-components that often mask the components of interest, making it difficult to extract useful information concerning the distribution of energy in multi-component signals. [2]

Cohen's class of bilinear time-frequency representations is a class of "smoothed" Wigner–Ville distributions, employing a smoothing kernel that can reduce the distribution's sensitivity to noise and suppress cross-components, at the expense of smearing the distribution in time and frequency. This smearing causes the distribution to be non-zero in regions where the true Wigner–Ville distribution shows no energy. [2]

The spectrogram is a member of Cohen's class. It is a smoothed Wigner–Ville distribution with the smoothing kernel equal to the Wigner–Ville distribution of the analysis window. The method of reassignment smooths the Wigner–Ville distribution, but then refocuses the distribution back to the true regions of support of the signal components. The method has been shown to reduce time and frequency smearing of any member of Cohen's class. [2] [3] In the case of the reassigned spectrogram, the short-time phase spectrum is used to correct the nominal time and frequency coordinates of the spectral data, and map it back nearer to the true regions of support of the analyzed signal.

The method of reassignment

Pioneering work on the method of reassignment was published by Kodera, Gendrin, and de Villedary under the name of Modified Moving Window Method. [4] Their technique enhances the resolution in time and frequency of the classical Moving Window Method (equivalent to the spectrogram) by assigning to each data point a new time-frequency coordinate that better reflects the distribution of energy in the analyzed signal. [4] :67

In the classical moving window method, a time-domain signal, $x(t)$, is decomposed into a set of coefficients, $\epsilon(t, \omega)$, based on a set of elementary signals, $h_\omega(t)$, defined [4] :73

$$h_\omega(t) = h(t)\, e^{j \omega t}$$

where $h(t)$ is a (real-valued) lowpass kernel function, like the window function in the short-time Fourier transform. The coefficients in this decomposition are defined

$$\epsilon(t, \omega) = \int x(\tau)\, h(\tau - t)\, e^{-j \omega (\tau - t)}\, d\tau = M(t, \omega)\, e^{j \phi(t, \omega)}$$

where $M(t, \omega)$ is the magnitude, and $\phi(t, \omega)$ the phase, of $\epsilon(t, \omega)$, the Fourier transform of the signal shifted in time by $t$ and windowed by $h(t)$. [5] :4

$x(t)$ can be reconstructed from the moving window coefficients (assuming a unit-energy window $h$) by [5] :8

$$x(t) = \frac{1}{2\pi} \iint \epsilon(\tau, \omega)\, h_\omega(t - \tau)\, d\omega\, d\tau = \frac{1}{2\pi} \iint M(\tau, \omega)\, e^{j \left[ \phi(\tau, \omega) + \omega (t - \tau) \right]}\, h(t - \tau)\, d\omega\, d\tau$$

For signals having magnitude spectra, $M(\tau, \omega)$, whose time variation is slow relative to the phase variation, the maximum contribution to the reconstruction integral comes from the vicinity of the point $(\tau, \omega)$ satisfying the phase stationarity condition [4] :74

$$\frac{\partial}{\partial \omega} \left[ \phi(\tau, \omega) + \omega (t - \tau) \right] = 0$$

$$\frac{\partial}{\partial \tau} \left[ \phi(\tau, \omega) + \omega (t - \tau) \right] = 0$$

or equivalently, around the point $(\hat{t}, \hat{\omega})$ defined by [4] :74

$$\hat{t}(\tau, \omega) = \tau - \frac{\partial \phi(\tau, \omega)}{\partial \omega}$$

$$\hat{\omega}(\tau, \omega) = \frac{\partial \phi(\tau, \omega)}{\partial \tau}$$

This phenomenon is known, in fields such as optics, as the principle of stationary phase, which states that for periodic or quasi-periodic signals, the variation of the Fourier phase spectrum not attributable to periodic oscillation is slow with respect to time in the vicinity of the frequency of oscillation, and in surrounding regions the variation is relatively rapid. Analogously, for impulsive signals that are concentrated in time, the variation of the phase spectrum is slow with respect to frequency near the time of the impulse, and in surrounding regions the variation is relatively rapid. [4] :73

In reconstruction, positive and negative contributions to the synthesized waveform cancel, due to destructive interference, in frequency regions of rapid phase variation. Only regions of slow phase variation (stationary phase) will contribute significantly to the reconstruction, and the maximum contribution (center of gravity) occurs at the point where the phase is changing most slowly with respect to time and frequency. [4] :71

The time-frequency coordinates thus computed are equal to the local group delay, $\hat{t}(\tau, \omega)$, and local instantaneous frequency, $\hat{\omega}(\tau, \omega)$, and are computed from the phase of the short-time Fourier transform, which is normally ignored when constructing the spectrogram. These quantities are local in the sense that they represent a windowed and filtered signal that is localized in time and frequency, and are not global properties of the signal under analysis. [4] :70

The modified moving window method, or method of reassignment, changes (reassigns) the point of attribution of $\epsilon(\tau, \omega)$ to this point of maximum contribution $(\hat{t}, \hat{\omega})$, rather than to the point $(\tau, \omega)$ at which it is computed. This point is sometimes called the center of gravity of the distribution, by way of analogy to a mass distribution. This analogy is a useful reminder that the attribution of spectral energy to the center of gravity of its distribution only makes sense when there is energy to attribute, so the method of reassignment has no meaning at points where the spectrogram is zero-valued. [2]

Efficient computation of reassigned times and frequencies

In digital signal processing, it is most common to sample the time and frequency domains. The discrete Fourier transform is used to compute samples of the Fourier transform from samples of a time domain signal. The reassignment operations proposed by Kodera et al. cannot be applied directly to the discrete short-time Fourier transform data, because partial derivatives cannot be computed directly on data that is discrete in time and frequency, and it has been suggested that this difficulty has been the primary barrier to wider use of the method of reassignment.

It is possible to approximate the partial derivatives using finite differences. For example, the phase spectrum can be evaluated at two nearby times, and the partial derivative with respect to time approximated as the difference between the two values divided by the time difference, as in

$$\frac{\partial \phi(t, \omega)}{\partial t} \approx \frac{\phi(t + \Delta t, \omega) - \phi(t, \omega)}{\Delta t}$$

and likewise for the partial derivative with respect to frequency,

$$\frac{\partial \phi(t, \omega)}{\partial \omega} \approx \frac{\phi(t, \omega + \Delta \omega) - \phi(t, \omega)}{\Delta \omega}$$

For sufficiently small values of $\Delta t$ and $\Delta \omega$, and provided that the phase difference is appropriately "unwrapped", this finite-difference method yields good approximations to the partial derivatives of phase, because in regions of the spectrum in which the evolution of the phase is dominated by rotation due to sinusoidal oscillation of a single, nearby component, the phase is a linear function.
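
The sketch below is a minimal, hedged illustration of this finite-difference approach (not code from the cited papers; it assumes NumPy, a Hann window, and an invented test signal, frame position, and variable names). It estimates the reassigned frequency from the phase difference between two frames one sample apart, and the reassigned time from the phase difference between adjacent frequency bins, with the phase re-referenced to the window centre.

```python
# Minimal sketch (not from the cited papers): finite-difference estimates of the
# reassigned frequency and time for one STFT frame, using phase differences
# across one hop (for instantaneous frequency) and one bin (for group delay).
import numpy as np

def princarg(p):
    """Wrap a phase difference into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

fs = 8000.0
N = 512                                    # analysis window length (samples)
h = np.hanning(N)
n0 = 4000                                  # frame start sample
x = np.sin(2 * np.pi * 440.0 * np.arange(8000) / fs)   # a 440 Hz test tone

# STFTs of two frames one sample apart (phase referenced to each frame's start)
X0 = np.fft.rfft(x[n0:n0 + N] * h)
X1 = np.fft.rfft(x[n0 + 1:n0 + 1 + N] * h)
k = np.arange(X0.size)
omega = 2 * np.pi * k / N                  # bin centre frequencies, rad/sample

# Instantaneous frequency: omega + d(phase)/dt via a one-sample phase difference
dphi_dt = princarg(np.angle(X1) - np.angle(X0) - omega)
inst_freq_hz = (omega + dphi_dt) * fs / (2 * np.pi)

# Group delay: centre-referenced phase difference across adjacent bins
phi_c = np.angle(X0) + omega * (N / 2)     # re-reference phase to window centre
dphi_dw = princarg(np.diff(phi_c)) / (2 * np.pi / N)
reassigned_time_s = (n0 + N / 2 - dphi_dw) / fs   # one value per bin boundary

peak = np.argmax(np.abs(X0))
print(f"bin {peak}: nominal {omega[peak] * fs / (2 * np.pi):.1f} Hz, "
      f"reassigned {inst_freq_hz[peak]:.1f} Hz, "
      f"reassigned time {reassigned_time_s[peak]:.4f} s")
```

For a steady sinusoid such as this test tone, the reassigned frequency near the spectral peak converges on the tone frequency and the reassigned time on the frame centre, as expected.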

Independently of Kodera et al., Nelson arrived at a similar method for improving the time-frequency precision of short-time spectral data from partial derivatives of the short-time phase spectrum. [6] It is easily shown that Nelson's cross-spectral surfaces compute an approximation of the derivatives that is equivalent to the finite-difference method.

Auger and Flandrin showed that the method of reassignment, proposed in the context of the spectrogram by Kodera et al., could be extended to any member of Cohen's class of time-frequency representations by generalizing the reassignment operations to

$$\hat{t}(t, \omega) = \frac{\iint \tau\, W_x(\tau, \nu)\, \Phi(t - \tau, \omega - \nu)\, d\tau\, d\nu}{\iint W_x(\tau, \nu)\, \Phi(t - \tau, \omega - \nu)\, d\tau\, d\nu}$$

$$\hat{\omega}(t, \omega) = \frac{\iint \nu\, W_x(\tau, \nu)\, \Phi(t - \tau, \omega - \nu)\, d\tau\, d\nu}{\iint W_x(\tau, \nu)\, \Phi(t - \tau, \omega - \nu)\, d\tau\, d\nu}$$

where $W_x(t, \omega)$ is the Wigner–Ville distribution of $x(t)$, and $\Phi(t, \omega)$ is the kernel function that defines the distribution. They further described an efficient method for computing the times and frequencies for the reassigned spectrogram accurately without explicitly computing the partial derivatives of phase. [2]

In the case of the spectrogram, the reassignment operations can be computed by

$$\hat{t}(t, \omega) = t + \Re\left\{ \frac{X_{\mathcal{T}h}(t, \omega)\, X_h^{*}(t, \omega)}{|X_h(t, \omega)|^{2}} \right\}$$

$$\hat{\omega}(t, \omega) = \omega - \Im\left\{ \frac{X_{\mathcal{D}h}(t, \omega)\, X_h^{*}(t, \omega)}{|X_h(t, \omega)|^{2}} \right\}$$

where $X_h(t, \omega)$ is the short-time Fourier transform computed using the analysis window $h(t)$, $X_{\mathcal{T}h}(t, \omega)$ is the short-time Fourier transform computed using a time-weighted analysis window $(\mathcal{T}h)(t) = t \cdot h(t)$, and $X_{\mathcal{D}h}(t, \omega)$ is the short-time Fourier transform computed using a time-derivative analysis window $(\mathcal{D}h)(t) = \tfrac{d}{dt} h(t)$.

Using the auxiliary window functions $(\mathcal{T}h)(t)$ and $(\mathcal{D}h)(t)$, the reassignment operations can be computed at any time-frequency coordinate $(t, \omega)$ from an algebraic combination of three Fourier transforms evaluated at $(t, \omega)$. Since these algorithms operate only on short-time spectral data evaluated at a single time and frequency, and do not explicitly compute any derivatives, this gives an efficient method of computing the reassigned discrete short-time Fourier transform.

One constraint in this method of computation is that $|X_h(t, \omega)|$ must be non-zero. This is not much of a restriction, since the reassignment operation itself implies that there is some energy to reassign, and has no meaning when the distribution is zero-valued.
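
As a sketch of this algebraic computation (an illustration assuming NumPy; the test signal, frame position, and variable names are invented for the example, and the signs of the correction terms follow the window and phase conventions used in the formulas above), the reassigned coordinates for a single frame can be obtained from three FFTs of the same frame, taken with the window, the time-weighted window, and a numerically approximated derivative window:

```python
# Minimal sketch (an assumption-laden illustration, not Auger & Flandrin's code):
# reassigned times and frequencies for one frame, computed algebraically from
# three STFTs using the analysis window h, the time-weighted window t*h, and the
# derivative window dh/dt. Signs follow the conventions used in the text above.
import numpy as np

fs = 8000.0
N = 512
n0 = 3000                                          # frame start (samples)
x = np.sin(2 * np.pi * 440.0 * np.arange(8000) / fs)

h = np.hanning(N)
m = np.arange(N)
Th = (m - N / 2) * h                               # time-weighted window
Dh = np.gradient(h)                                # approximate dh/dt (per sample)

frame = x[n0:n0 + N]
Xh = np.fft.rfft(frame * h)
XTh = np.fft.rfft(frame * Th)
XDh = np.fft.rfft(frame * Dh)

k = np.arange(Xh.size)
omega = 2 * np.pi * k / N                          # bin frequencies, rad/sample
eps = 1e-12                                        # guard: reassignment is undefined
mag2 = np.abs(Xh) ** 2 + eps                       # where the spectrogram is zero

t_hat = (n0 + N / 2) + np.real(XTh * np.conj(Xh)) / mag2   # samples
w_hat = omega - np.imag(XDh * np.conj(Xh)) / mag2          # rad/sample

peak = np.argmax(np.abs(Xh))
print(f"bin {peak}: reassigned frequency {w_hat[peak] * fs / (2 * np.pi):.1f} Hz, "
      f"reassigned time {t_hat[peak] / fs:.4f} s")
```

The small guard term reflects the constraint just mentioned: where the spectrogram magnitude is (near) zero, the reassigned coordinates carry no meaning.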

Separability

The short-time Fourier transform can often be used to estimate the amplitudes and phases of the individual components in a multi-component signal, such as a quasi-harmonic musical instrument tone. Moreover, the time and frequency reassignment operations can be used to sharpen the representation by attributing the spectral energy reported by the short-time Fourier transform to the point that is the local center of gravity of the complex energy distribution. [7]

For a signal consisting of a single component, the instantaneous frequency can be estimated from the partial derivatives of phase of any short-time Fourier transform channel that passes the component. If the signal is to be decomposed into many components,

$$x(t) = \sum_{n} A_n(t)\, e^{j \theta_n(t)}$$

and the instantaneous frequency of each component is defined as the derivative of its phase with respect to time, that is,

$$\omega_n(t) = \frac{d \theta_n(t)}{dt}$$

then the instantaneous frequency of each individual component can be computed from the phase of the response of a filter that passes that component, provided that no more than one component lies in the passband of the filter.

This is the property, in the frequency domain, that Nelson called separability [6] and is required of all signals so analyzed. If this property is not met, then the desired multi-component decomposition cannot be achieved, because the parameters of individual components cannot be estimated from the short-time Fourier transform. In such cases, a different analysis window must be chosen so that the separability criterion is satisfied.

If the components of a signal are separable in frequency with respect to a particular short-time spectral analysis window, then the output of each short-time Fourier transform filter is a filtered version of, at most, a single dominant (having significant energy) component, and so the derivative, with respect to time, of the phase of $\epsilon(t, \omega_0)$ is equal to the derivative, with respect to time, of the phase of the dominant component at $\omega_0$. Therefore, if a component $x_n(t)$, having instantaneous frequency $\omega_n(t)$, is the dominant component in the vicinity of $\omega_0$, then the instantaneous frequency of that component can be computed from the phase of the short-time Fourier transform evaluated at $\omega_0$. That is,

$$\omega_n(t) = \frac{d \theta_n(t)}{dt} = \frac{\partial \phi(t, \omega_0)}{\partial t}$$

Just as each bandpass filter in the short-time Fourier transform filterbank may pass at most a single complex exponential component, two temporal events must be sufficiently separated in time that they do not lie in the same windowed segment of the input signal. This is the property of separability in the time domain, and is equivalent to requiring that the time between two events be greater than the length of the impulse response of the short-time Fourier transform filters, the span of non-zero samples in $h(t)$.

In general, there is an infinite number of equally valid decompositions for a multi-component signal. The separability property must be considered in the context of the desired decomposition. For example, in the analysis of a speech signal, an analysis window that is long relative to the time between glottal pulses is sufficient to separate harmonics, but the individual glottal pulses will be smeared, because many pulses are covered by each window (that is, the individual pulses are not separable, in time, by the chosen analysis window). An analysis window that is much shorter than the time between glottal pulses may resolve the glottal pulses, because no window spans more than one pulse, but the harmonic frequencies are smeared together, because the main lobe of the analysis window spectrum is wider than the spacing between the harmonics (that is, the harmonics are not separable, in frequency, by the chosen analysis window). [6] :2585
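
As a rough numerical illustration of this tradeoff (the figures are illustrative only, not from the cited work; the sample rate, fundamental frequency, and the approximate Hann main-lobe width of 4·fs/N Hz are assumptions of the example), one can check both separability criteria for a few candidate window lengths:

```python
# Minimal sketch (illustrative numbers, not from the article): checking whether a
# given Hann analysis window can separate harmonics in frequency and glottal
# pulses in time, using the window's approximate main-lobe width of 4*fs/N Hz.
fs = 16000.0                # sample rate, Hz
f0 = 120.0                  # fundamental (harmonic spacing), Hz
pulse_period_s = 1.0 / f0   # time between glottal pulses

for N in (64, 256, 1024, 4096):                 # candidate window lengths (samples)
    mainlobe_hz = 4.0 * fs / N                  # Hann main-lobe width (approx.)
    window_s = N / fs
    freq_ok = mainlobe_hz < f0                  # harmonics resolvable?
    time_ok = window_s < pulse_period_s         # pulses resolvable?
    print(f"N={N:5d}: main lobe {mainlobe_hz:6.1f} Hz "
          f"(harmonics {'separable' if freq_ok else 'smeared'}), "
          f"window {window_s * 1e3:5.1f} ms "
          f"(pulses {'separable' if time_ok else 'smeared'})")
```

No single window length in this example satisfies both criteria at once, which is precisely the point of the paragraph above: the separability property must be judged relative to the desired decomposition.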

Extensions

Consensus complex reassignment

Gardner and Magnasco (2006) argue that auditory nerves may use a form of the reassignment method to process sounds. These nerves are known for preserving timing (phase) information better than they preserve magnitude information. The authors propose a variation of reassignment that operates on complex values (i.e., both phase and magnitude) and show that it produces sparse outputs, much as auditory nerves do. By running this reassignment with windows of different bandwidths (see the discussion of separability in the section above), a "consensus" representation that captures multiple kinds of signals is found, again like the auditory system. They argue that the algorithm is simple enough for neurons to implement. [8]

Synchrosqueezing transform

The synchrosqueezing transform is a variant of the reassignment method in which spectral energy is reassigned only in the frequency direction. This preserves the time resolution of the original representation and, unlike conventional reassignment, allows the signal to be reconstructed from the sharpened representation. [9]


References

  1. Hainsworth, Stephen (2003). "Chapter 3: Reassignment methods". Techniques for the Automated Analysis of Musical Audio (PhD). University of Cambridge. CiteSeerX 10.1.1.5.9579.
  2. F. Auger & P. Flandrin (May 1995). "Improving the readability of time-frequency and time-scale representations by the reassignment method". IEEE Transactions on Signal Processing. 43 (5): 1068–1089. Bibcode:1995ITSP...43.1068A. CiteSeerX 10.1.1.646.794. doi:10.1109/78.382394. S2CID 6336685.
  3. P. Flandrin, F. Auger & E. Chassande-Mottin (2003). "Time-frequency reassignment: From principles to algorithms". In A. Papandreou-Suppappola (ed.), Applications in Time-Frequency Signal Processing, ch. 5, pp. 179–203. CRC Press.
  4. K. Kodera; R. Gendrin & C. de Villedary (Feb 1978). "Analysis of time-varying signals with small BT values". IEEE Transactions on Acoustics, Speech, and Signal Processing. 26 (1): 64–76. doi:10.1109/TASSP.1978.1163047.
  5. Fitz, Kelly R.; Fulop, Sean A. (2009). "A Unified Theory of Time-Frequency Reassignment". arXiv preprint. doi:10.48550/arXiv.0903.3080.
  6. D. J. Nelson (Nov 2001). "Cross-spectral methods for processing speech". Journal of the Acoustical Society of America. 110 (5): 2575–2592. Bibcode:2001ASAJ..110.2575N. doi:10.1121/1.1402616. PMID 11757947.
  7. K. Fitz & L. Haken (2002). "On the use of time-frequency reassignment in additive sound modeling". Journal of the Audio Engineering Society. 50 (11): 879–893.
  8. Gardner, Timothy J.; Magnasco, Marcelo O. (18 April 2006). "Sparse time-frequency representations". Proceedings of the National Academy of Sciences. 103 (16): 6094–6099. doi:10.1073/pnas.0601707103.
  9. Meignen, Sylvain; Oberlin, Thomas; Pham, Duong-Hung (July 2019). "Synchrosqueezing transforms: From low- to high-frequency modulations and perspectives". Comptes Rendus Physique. 20 (5): 449–460. doi:10.1016/j.crhy.2019.07.001.

Further reading