Spectral density

The spectral density of a fluorescent light as a function of optical wavelength shows peaks at atomic transitions, indicated by the numbered arrows.

The voice waveform over time (left) has a broad audio power spectrum (right).

In signal processing, the power spectrum of a continuous-time signal describes the distribution of power into frequency components composing that signal. [1] According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise), as analyzed in terms of its frequency content, is called its spectrum.

When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The power spectral density (PSD) then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating over the time domain, as dictated by Parseval's theorem. [1]
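This equivalence between the time-domain and frequency-domain totals can be checked numerically. The following sketch assumes NumPy is available (a library choice made for the illustration, not prescribed by the article); with NumPy's unnormalized FFT convention, Parseval's theorem reads sum|x|² = (1/N) sum|X|²:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)               # an arbitrary real signal

X = np.fft.fft(x)
power_time = np.sum(x**2)                   # total summed "power" in the time domain
power_freq = np.sum(np.abs(X)**2) / len(x)  # the same quantity from the spectrum

assert np.allclose(power_time, power_freq)
```

The two totals agree to floating-point precision, which is Parseval's theorem in its discrete form.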

The spectrum of a physical process often contains essential information about the nature of that process. For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.

However, this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency. [1]

Units

In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in watts per hertz (W/Hz). [2]

When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use units of V² Hz⁻¹ for the PSD. Energy spectral density (ESD) would have units of V² s Hz⁻¹, since energy has units of power multiplied by time (e.g., watt-hour). [3]

In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m²/Hz. In the analysis of random vibrations, units of g² Hz⁻¹ are frequently used for the PSD of acceleration, where g denotes the g-force. [4]
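As a numerical illustration of these conventions (a sketch assuming SciPy is available; the voltage record is invented for the example), an estimator that reports a density in V²/Hz must integrate back over frequency to the signal variance in V²:

```python
import numpy as np
from scipy import signal

fs = 1000.0                          # sample rate, Hz
rng = np.random.default_rng(1)
v = rng.standard_normal(100_000)     # hypothetical voltage record, units of V

# Welch's 'density' scaling returns a one-sided PSD in V^2/Hz
f, psd = signal.welch(v, fs=fs, nperseg=1024, scaling="density")

# integrating V^2/Hz over Hz recovers the variance in V^2
variance_est = np.sum(psd) * (f[1] - f[0])
assert abs(variance_est - np.var(v)) / np.var(v) < 0.05
```

The same dimensional bookkeeping applies to any input units: meters in give m²/Hz out, g's in give g²/Hz out.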

Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t) will remain unspecified, but the independent variable will be assumed to be that of time.

Definition

Energy spectral density

Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing; [5] that is, the energy of a signal x(t) is:

$$E \triangleq \int_{-\infty}^{\infty} |x(t)|^2 \, dt$$

The energy spectral density is most suitable for transients—that is, pulse-like signals—having a finite total energy. Finite or not, Parseval's theorem [6] (or Plancherel's theorem) gives us an alternate expression for the energy of the signal:

$$\int_{-\infty}^{\infty} |x(t)|^2 \, dt = \int_{-\infty}^{\infty} |\hat{x}(f)|^2 \, df$$

where:

$\hat{x}(f) = \int_{-\infty}^{\infty} e^{-i 2\pi f t} x(t) \, dt$ is the value of the Fourier transform of x(t) at frequency f (in Hz). The theorem also holds true in the discrete-time cases. Since the integral on the left-hand side is the energy of the signal, the value of $|\hat{x}(f)|^2 \, df$ can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal at frequency f in the frequency interval f + df.

Therefore, the energy spectral density of x(t) is defined as: [6]

$$\bar{S}_{xx}(f) \triangleq |\hat{x}(f)|^2$$

(Eq.1)

The function $\bar{S}_{xx}(f)$ and the autocorrelation of x(t) form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram).

As a physical example of how one might measure the energy spectral density of a signal, suppose V(t) represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance Z, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time t is equal to V(t)²/Z, so the total energy is found by integrating V(t)²/Z with respect to time over the duration of the pulse. To find the value of the energy spectral density at frequency f, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies (Δf, say) near the frequency of interest and then measure the total energy E(f) dissipated across the resistor. The value of the energy spectral density at f is then estimated to be E(f)/Δf. In this example, since the power V(t)²/Z has units of V² Ω⁻¹ = W, the energy E(f) has units of V² s Ω⁻¹ = J, and hence the estimate E(f)/Δf of the energy spectral density has units of J Hz⁻¹, as required. In many situations, it is common to forget the step of dividing by Z, so that the energy spectral density instead has units of V² Hz⁻¹.

This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values x_n, such as a signal sampled at discrete times x_n = x(n Δt):

$$\bar{S}_{xx}(f) = \lim_{N\to\infty} (\Delta t)^2 \left| \sum_{n=-N}^{N} x_n e^{-i 2\pi f n \Delta t} \right|^2 = \left| \hat{x}_d(f) \right|^2$$

where $\hat{x}_d(f)$ is the discrete-time Fourier transform of x_n. The sampling interval Δt is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit Δt → 0. But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality (see also normalized frequency).
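As a numerical illustration (a sketch assuming NumPy; the pulse shape and sampling interval are invented for the example), the discrete ESD of a finite-energy pulse can be computed from an FFT scaled by Δt, and Parseval's theorem checked directly:

```python
import numpy as np

dt = 1e-3                                  # sampling interval, seconds
t = (np.arange(1000) - 500) * dt           # one second of samples centered on t = 0
pulse = np.exp(-(t / 0.05) ** 2)           # a transient, finite-energy pulse

# discrete approximation of the Fourier transform: x_hat(f) ~= dt * FFT(x)
X = dt * np.fft.fft(pulse)
f = np.fft.fftfreq(len(pulse), d=dt)
esd = np.abs(X) ** 2                       # energy spectral density

# Parseval: integrating the ESD over frequency gives the time-domain energy
energy_time = np.sum(pulse**2) * dt
energy_freq = np.sum(esd) * (f[1] - f[0])
assert np.allclose(energy_time, energy_freq)
```

The Δt factor is exactly the one discussed above: dropping it would change the units and break the correspondence with the continuous-time energy.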

Power spectral density

The power spectrum of the measured cosmic microwave background radiation temperature anisotropy in terms of the angular scale. The solid line is a theoretical model, for comparison.

The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD), which exists for stationary processes; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function over time x(t) (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed x(t) and applied it to the terminals of a one ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by x(t)² watts.

The average power P of a signal x(t) over all time is therefore given by the following time average, where the period T is centered about some arbitrary time t = t₀:

$$P = \lim_{T\to\infty} \frac{1}{T} \int_{t_0 - T/2}^{t_0 + T/2} |x(t)|^2 \, dt$$

However, for the sake of dealing with the math that follows, it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral. As such, we have an alternative representation of the average power, where $x_T(t) = x(t) w_T(t)$ and $w_T(t)$ is unity within the arbitrary period and zero elsewhere:

$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |x_T(t)|^2 \, dt$$

Clearly, in cases where the above expression for P is non-zero, the integral must grow without bound as T grows without bound. That is the reason why we cannot use the energy of the signal, which is that diverging integral, in such cases.

In analyzing the frequency content of the signal x(t), one might like to compute the ordinary Fourier transform $\hat{x}(f)$; however, for many signals of interest the Fourier transform does not formally exist. [N 1] Regardless, Parseval's theorem tells us that we can re-write the average power as follows:

$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-\infty}^{\infty} |\hat{x}_T(f)|^2 \, df$$

Then the power spectral density is simply defined as the integrand above: [8] [9]

$$S_{xx}(f) = \lim_{T\to\infty} \frac{1}{T} \left| \hat{x}_T(f) \right|^2$$

(Eq.2)
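A finite-T version of this definition is exactly what a periodogram computes. The following sketch (assuming NumPy; the 50 Hz test signal is invented for the example) forms (1/T)|x̂_T(f)|² from an FFT and locates the sine's spectral peak:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, T = 1000.0, 4.0                 # sample rate (Hz) and record length (s)
n = int(fs * T)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t) + rng.standard_normal(n)

# finite-T estimate of (1/T)|x_T_hat(f)|^2 on the nonnegative-frequency grid
X = np.fft.rfft(x) / fs             # dt * FFT approximates the Fourier transform
psd = np.abs(X) ** 2 / T            # density estimate (without one-sided doubling)
f = np.fft.rfftfreq(n, d=1 / fs)

# the 50 Hz sine concentrates its power near a single bin
assert abs(f[np.argmax(psd)] - 50.0) < 0.5
```

Because T is finite, each noise bin of this estimate fluctuates strongly from realization to realization; the averaging discussed later in this section addresses that.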

From here, due to the convolution theorem, we can also view $|\hat{x}_T(f)|^2$ as the Fourier transform of the time convolution of $x_T^*(-t)$ and $x_T(t)$, where * represents the complex conjugate. Taking into account that

$$\mathcal{F}\left\{ x_T^*(-t) \right\} = \hat{x}_T^*(f)$$

and making $u(t) = x_T^*(-t)$, we have:

$$\begin{aligned} \left| \hat{x}_T(f) \right|^2 &= \hat{x}_T^*(f) \, \hat{x}_T(f) \\ &= \mathcal{F}\left\{ u(t) \right\} \cdot \mathcal{F}\left\{ x_T(t) \right\} \\ &= \mathcal{F}\left\{ u(t) * x_T(t) \right\} \\ &= \mathcal{F}\left\{ \int_{-\infty}^{\infty} x_T^*(t - \tau) \, x_T(t) \, dt \right\} \end{aligned}$$

where the convolution theorem has been used when passing from the third to the fourth line.

Now, if we divide the time convolution above by the period T and take the limit as T → ∞, it becomes the autocorrelation function of the non-windowed signal x(t), which is denoted as $R_{xx}(\tau)$, provided that x(t) is ergodic, which is true in most, but not all, practical cases. [10]

From here we see, again assuming the ergodicity of x(t), that the power spectral density can be found as the Fourier transform of the autocorrelation function (Wiener–Khinchin theorem):

$$S_{xx}(f) = \int_{-\infty}^{\infty} R_{xx}(\tau) \, e^{-i 2\pi f \tau} \, d\tau$$

(Eq.3)

Many authors use this equality to actually define the power spectral density. [11]
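The Wiener–Khinchin relation has an exact finite, circular analogue that is easy to verify numerically: the DFT of the circular autocorrelation of a record equals its periodogram, bin for bin. A sketch assuming NumPy (the white-noise record is invented for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(512)
n = len(x)

# circular autocorrelation of the finite record (one realization)
r = np.array([np.sum(x * np.roll(x, -k)) for k in range(n)]) / n

# its DFT matches the periodogram |X|^2 / n, bin for bin
psd_from_r = np.fft.fft(r).real
periodogram = np.abs(np.fft.fft(x)) ** 2 / n

assert np.allclose(psd_from_r, periodogram)
```

The continuous-time theorem in Eq.3 is the T → ∞ limit of this identity, with the circular autocorrelation replaced by the true autocorrelation function.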

The power of the signal in a given frequency band [f₁, f₂], where 0 < f₁ < f₂, can be calculated by integrating over frequency. Since $S_{xx}(-f) = S_{xx}(f)$, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):

$$P_{\text{bandlimited}} = 2 \int_{f_1}^{f_2} S_{xx}(f) \, df$$
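This band integration can be checked with a one-sided PSD estimate, which already folds the negative frequencies into the positive ones. A sketch assuming SciPy's welch estimator (the 300 Hz test tone is invented for the example); a unit-amplitude sine should contribute power 1/2:

```python
import numpy as np
from scipy import signal

fs = 2000.0
rng = np.random.default_rng(4)
t = np.arange(int(10 * fs)) / fs
x = np.sin(2 * np.pi * 300 * t) + 0.1 * rng.standard_normal(t.size)

# one-sided PSD: the factor of 2 for negative frequencies is already included
f, psd = signal.welch(x, fs=fs, nperseg=2048)
df = f[1] - f[0]

# power in the band 250-350 Hz, by integrating the PSD over that band
band = (f >= 250) & (f <= 350)
p_band = np.sum(psd[band]) * df

assert abs(p_band - 0.5) < 0.05     # a unit-amplitude sine carries power 1/2
```

Had we used a two-sided spectrum instead, the same 0.5 would come from integrating over both the positive and the mirrored negative band, which is the factor of 2 in the formula above.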

More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval T is finite rather than approaching infinity. This results in decreased spectral coverage and resolution, since frequencies of less than 1/T are not sampled, and results at frequencies which are not an integer multiple of 1/T are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however, this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of x(t) evaluated over the specified time window.

Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables x_n. As before, we can consider a window of −N ≤ n ≤ N with the signal sampled at discrete times x_n = x(n Δt), for a total measurement period T = (2N + 1) Δt:

$$S_{xx}(f) = \lim_{N\to\infty} \frac{(\Delta t)^2}{T} \left| \sum_{n=-N}^{N} x_n e^{-i 2\pi f n \Delta t} \right|^2$$

Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when N (and thus T) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval approach infinity (Brown & Hwang [12]).
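The benefit of averaging can be demonstrated directly: a single periodogram of white noise has bin-to-bin fluctuations comparable to its mean, while an average over many independent records converges toward the flat true PSD. A sketch assuming NumPy (unit-variance noise with Δt = 1, so the true two-sided PSD is 1):

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 256, 400
true_psd = 1.0   # unit-variance white noise sampled at unit rate

# a single periodogram: very noisy (per-bin standard deviation ~ the mean)
single = np.abs(np.fft.fft(rng.standard_normal(n))) ** 2 / n

# averaging many independent periodograms converges toward the true PSD
avg = np.mean(
    [np.abs(np.fft.fft(rng.standard_normal(n))) ** 2 / n for _ in range(trials)],
    axis=0,
)

assert np.std(avg) < np.std(single)          # averaging reduces the scatter
assert abs(np.mean(avg) - true_psd) < 0.05   # and centers on the true density
```

This is precisely the averaging that estimators such as Welch's method perform, trading record length for variance reduction.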

If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation.

Properties of the power spectral density

Some properties of the PSD include: [13]

  • The power spectrum is always real and non-negative, and the spectrum of a real-valued process is also an even function of frequency: $S_{xx}(-f) = S_{xx}(f)$.
  • For a continuous stochastic process x(t), the autocorrelation function $R_{xx}(\tau)$ can be reconstructed from its power spectrum $S_{xx}(f)$ by using the inverse Fourier transform.
  • Using Parseval's theorem, one can compute the variance (average power) of a process by integrating the power spectrum over all frequency:

    $$P = \operatorname{Var}(x) = \int_{-\infty}^{\infty} S_{xx}(f) \, df$$

  • For a real process x(t) with power spectral density $S_{xx}(f)$, one can compute the integrated spectrum or power spectral distribution F(f), which specifies the average bandlimited power contained in frequencies from DC to f, using: [14]

    $$F(f) = 2 \int_0^f S_{xx}(f') \, df'$$

    Note that the previous expression for total power (signal variance) is a special case where f → ∞.
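The last two properties can be illustrated together: the cumulative integral of a one-sided PSD estimate is nondecreasing, and its final value is the total power (the variance). A sketch assuming SciPy (the white-noise input is invented for the example):

```python
import numpy as np
from scipy import signal

fs = 500.0
rng = np.random.default_rng(6)
x = rng.standard_normal(int(20 * fs))

f, psd = signal.welch(x, fs=fs, nperseg=1024)
df = f[1] - f[0]

# integrated spectrum: cumulative band power from DC up to each frequency f
integrated = np.cumsum(psd) * df

# nonnegativity of the PSD makes the integrated spectrum nondecreasing,
# and its final value is the total power (the variance of x)
assert np.all(np.diff(integrated) >= 0)
assert abs(integrated[-1] - np.var(x)) / np.var(x) < 0.05
```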

Cross power spectral density

Given two signals x(t) and y(t), each of which possess power spectral densities $S_{xx}(f)$ and $S_{yy}(f)$, it is possible to define a cross power spectral density (CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal.

Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain

where, again, the contributions of $S_{xx}(f)$ and $S_{yy}(f)$ are already understood. Note that $S_{yx}(f) = S_{xy}^*(f)$, so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit becomes the Fourier transform of a cross-correlation function. [15]

where $R_{xy}(\tau)$ is the cross-correlation of x(t) with y(t) and $R_{yx}(\tau)$ is the cross-correlation of y(t) with x(t). In light of this, the PSD is seen to be a special case of the CSD for x(t) = y(t). If x(t) and y(t) are real signals (e.g. voltage or current), their Fourier transforms $\hat{x}(f)$ and $\hat{y}(f)$ are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two.

For discrete signals x_n and y_n, the relationship between the cross-spectral density and the cross-covariance is

$$S_{xy}(f) = \sum_{n=-\infty}^{\infty} R_{xy}(\tau_n) \, e^{-i 2\pi f \tau_n} \, \Delta\tau$$
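In practice the CSD is estimated by averaging products of short-record Fourier transforms, for example with scipy.signal.csd. The sketch below (signals invented for the example) also checks the conjugate symmetry obtained by swapping the two inputs:

```python
import numpy as np
from scipy import signal

fs = 1000.0
rng = np.random.default_rng(7)
n = int(30 * fs)
s = rng.standard_normal(n)                          # common component
x = s + 0.5 * rng.standard_normal(n)
y = np.roll(s, 10) + 0.5 * rng.standard_normal(n)   # delayed copy plus noise

# cross power spectral density estimate (complex-valued in general)
f, pxy = signal.csd(x, y, fs=fs, nperseg=1024)

# swapping the inputs conjugates the cross-spectrum: S_yx(f) = S_xy*(f)
_, pyx = signal.csd(y, x, fs=fs, nperseg=1024)
assert np.allclose(pyx, np.conj(pxy))
```

The phase of the CSD encodes the relative delay between the two signals, which is why the estimate is complex even though both inputs are real.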

Estimation

The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.

The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
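A minimal version of the parametric route mentioned above can be sketched with the Yule-Walker equations: fit an autoregressive model to the sample autocovariances, then read the PSD off the fitted model. The AR(2) coefficients below are invented for the demonstration, and only NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20000
# simulate an AR(2) process: x[t] = 0.75 x[t-1] - 0.5 x[t-2] + e[t]
a_true = np.array([0.75, -0.5])
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + e[t]

# Yule-Walker: solve R a = r using the sample autocovariances
def acov(x, k):
    return np.mean(x[: len(x) - k] * x[k:])

r = np.array([acov(x, k) for k in range(3)])
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a_hat = np.linalg.solve(R, r[1:])

# the parametric PSD estimate would then be
#   S(f) = sigma^2 * dt / |1 - a1 e^{-i 2 pi f dt} - a2 e^{-i 4 pi f dt}|^2
assert np.allclose(a_hat, a_true, atol=0.05)
```

For AR models this fit coincides with the maximum entropy method, which is one reason the parametric approach remains popular for short records.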

Applications

Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. Additionally, there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced, corresponding to resonances, or frequency intervals containing almost zero power, as would be produced by a notch filter.

Electrical engineering

Spectrogram of an FM radio signal with frequency on the horizontal axis and time increasing upwards on the vertical axis. Spectrogram-fm-radio.png
Spectrogram of an FM radio signal with frequency on the horizontal axis and time increasing upwards on the vertical axis.

The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals.

The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
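A spectrum-analyzer-style display can be sketched with scipy.signal.stft (the chirp test signal is an invented example): the magnitude of each short-time slice tracks the frequency content of the signal as it evolves:

```python
import numpy as np
from scipy import signal

fs = 8000.0
t = np.arange(int(2 * fs)) / fs
# a chirp sweeping 500 -> 2000 Hz: its dominant frequency moves over time
x = signal.chirp(t, f0=500, t1=2.0, f1=2000)

f, tt, Zxx = signal.stft(x, fs=fs, nperseg=256)
dominant = f[np.argmax(np.abs(Zxx), axis=0)]   # peak frequency of each slice

# the tracked peak rises over the sweep, as a spectrogram would display
assert dominant[1] < dominant[-2]
```

Plotting |Zxx| against tt and f yields exactly the kind of spectrogram shown in the figure above this section.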

Cosmology

Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.

Climate Science

Power spectral analysis has been used to examine the spatial structure of climate data. [21] These results suggest that atmospheric turbulence links climate change to greater local and regional volatility in weather conditions. [22]

See also

Notes

  1. Some authors (e.g. Risken [7]) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density
    $$\langle \hat{x}(f) \hat{x}^*(f') \rangle = S_{xx}(f) \, \delta(f - f'),$$
    where δ(f − f′) is the Dirac delta function. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.


References

  1. P. Stoica & R. Moses (2005). "Spectral Analysis of Signals" (PDF).
  2. Gérard Maral (2003). VSAT Networks. John Wiley and Sons. ISBN   978-0-470-86684-9.
  3. Michael Peter Norton & Denis G. Karczub (2003). Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge University Press. ISBN   978-0-521-49913-2.
  4. Alessandro Birolini (2007). Reliability Engineering. Springer. p. 83. ISBN   978-3-540-49388-4.
  5. Oppenheim; Verghese. Signals, Systems, and Inference. pp. 32–4.
  6. Stein, Jonathan Y. (2000). Digital Signal Processing: A Computer Science Perspective. Wiley. p. 115.
  7. Hannes Risken (1996). The Fokker–Planck Equation: Methods of Solution and Applications (2nd ed.). Springer. p. 30. ISBN   9783540615309.
  8. Fred Rieke; William Bialek & David Warland (1999). Spikes: Exploring the Neural Code (Computational Neuroscience). MIT Press. ISBN   978-0262681087.
  9. Scott Millers & Donald Childers (2012). Probability and random processes. Academic Press. pp. 370–5.
  10. The Wiener–Khinchin theorem makes sense of this formula for any wide-sense stationary process under weaker hypotheses: does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical Cumulative distribution function) instead of functions. If is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).
  11. Dennis Ward Ricker (2003). Echo Signal Processing. Springer. ISBN   978-1-4020-7395-3.
  12. Robert Grover Brown & Patrick Y.C. Hwang (1997). Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons. ISBN   978-0-471-12839-7.
  13. Von Storch, H.; Zwiers, F. W. (2001). Statistical analysis in climate research. Cambridge University Press. ISBN   978-0-521-01230-0.
  14. An Introduction to the Theory of Random Signals and Noise, Wilbur B. Davenport and William L. Root, IEEE Press, New York, 1987, ISBN   0-87942-235-1
  15. William D Penny (2009). "Signal Processing Course, chapter 7".
  16. Iranmanesh, Saam; Rodriguez-Villegas, Esther (2017). "An Ultralow-Power Sleep Spindle Detection System on Chip". IEEE Transactions on Biomedical Circuits and Systems. 11 (4): 858–866. doi:10.1109/TBCAS.2017.2690908. hdl: 10044/1/46059 . PMID   28541914. S2CID   206608057.
  17. Imtiaz, Syed Anas; Rodriguez-Villegas, Esther (2014). "A Low Computational Cost Algorithm for REM Sleep Detection Using Single Channel EEG". Annals of Biomedical Engineering. 42 (11): 2344–59. doi:10.1007/s10439-014-1085-6. PMC   4204008 . PMID   25113231.
  18. Drummond JC, Brann CA, Perkins DE, Wolfe DE: "A comparison of median frequency, spectral edge frequency, a frequency band power ratio, total power, and dominance shift in the determination of depth of anesthesia," Acta Anaesthesiol. Scand. 1991 Nov;35(8):693-9.
  19. Swartz, Diemo (1998). "Spectral Envelopes".
  20. Michael Cerna & Audrey F. Harvey (2000). "The Fundamentals of FFT-Based Signal Analysis and Measurement" (PDF).
  21. Communication, N. B. I. (2022-05-23). "Danish astrophysics student discovers link between global warming and locally unstable weather". nbi.ku.dk. Retrieved 2022-07-23.
  22. Sneppen, Albert (2022-05-05). "The power spectrum of climate change". The European Physical Journal Plus. 137 (5): 555. arXiv: 2205.07908 . Bibcode:2022EPJP..137..555S. doi:10.1140/epjp/s13360-022-02773-w. ISSN   2190-5444. S2CID   248652864.