Least-squares spectral analysis

[Figure: The result of fitting a set of data points with a quadratic function.]

Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. [1] [2] Fourier analysis, the most widely used spectral method in science, generally boosts long-periodic noise in long, gapped records; LSSA mitigates such problems. [3] Unlike Fourier analysis, LSSA does not require the data to be equally spaced.


LSSA is also known as the Vaníček method [4] or the Gauss–Vaníček method [5] after Petr Vaníček, and as the Lomb method [3] or the Lomb–Scargle periodogram, [2] [6] based on the contributions of Nicholas R. Lomb [7] and, independently, Jeffrey D. Scargle. [8]

Historical background

The close connections between Fourier analysis, the periodogram, and least-squares fitting of sinusoids have long been known. [9] Most developments, however, were restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. Barning of Mathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques, [10] including both a periodogram analysis equivalent to what is now referred to as the Lomb method, and least-squares fitting of selected frequencies of sinusoids determined from such periodograms, connected by a procedure now known as matching pursuit with post-backfitting [11] or orthogonal matching pursuit. [12]

Petr Vaníček, a Canadian geodesist of the University of New Brunswick, also proposed the matching-pursuit approach, which he called "successive spectral analysis" and the result a "least-squares periodogram", with equally and unequally spaced data, in 1969. [13] He generalized this method to account for systematic components beyond a simple mean, such as a "predicted linear (quadratic, exponential, ...) secular trend of unknown magnitude", and applied it to a variety of samples, in 1971. [14]

The Vaníček method was then simplified in 1976 by Nicholas R. Lomb of the University of Sydney, who pointed out its close connection to periodogram analysis. [7] The definition of a periodogram of unequally spaced data was subsequently further modified and analyzed by Jeffrey D. Scargle of NASA Ames Research Center, [8] who showed that with minor changes it could be made identical to Lomb's least-squares formula for fitting individual sinusoid frequencies.

Scargle states that his paper "does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced," and further points out in reference to least-squares fitting of sinusoids compared to periodogram analysis, that his paper "establishes, apparently for the first time, that (with the proposed modifications) these two methods are exactly equivalent." [8]

Press [3] summarizes the development this way:

A completely different method of spectral analysis for unevenly sampled data, one that mitigates these difficulties and has some other very desirable properties, was developed by Lomb, based in part on earlier work by Barning and Vanicek, and additionally elaborated by Scargle.

In 1989, Michael J. Korenberg of Queen's University in Kingston, Ontario, developed the "fast orthogonal search" method for more quickly finding a near-optimal decomposition of spectra or other problems, [15] similar to the technique that later became known as orthogonal matching pursuit.

Method variants

The Vaníček method

In the Vaníček method, a discrete data set is approximated by a weighted sum of sinusoids of progressively determined frequencies, using a standard linear regression, or least-squares fit. [16] The frequencies are chosen using a method similar to Barning's, but going further in optimizing the choice of each successive new frequency by picking the frequency that minimizes the residual after least-squares fitting (equivalent to the fitting technique now known as matching pursuit with pre-backfitting [11] ). The number of sinusoids must be less than or equal to the number of data samples (counting sines and cosines of the same frequency as separate sinusoids).
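
As a concrete illustration, here is a minimal Python sketch (using NumPy; the function name, candidate grid, and stopping rule are our own assumptions, not from the sources) of this greedy frequency selection with a full refit at each step:

```python
import numpy as np

def successive_spectral_analysis(t, y, candidates, n_terms):
    """Sketch: greedily add the frequency whose inclusion, with a full refit
    (matching pursuit with pre-backfitting), most reduces the LS residual.
    Assumes n_terms <= len(candidates) and 2*n_terms <= len(t)."""

    def rss(freq_list):
        # Residual sum of squares after simultaneously least-squares fitting
        # a sine/cosine pair at every frequency in freq_list.
        A = np.column_stack([f(2 * np.pi * fr * t)
                             for fr in freq_list for f in (np.cos, np.sin)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ coef) ** 2)

    chosen = []
    for _ in range(n_terms):
        # Score each remaining candidate in context with the chosen set.
        best = min((fr for fr in candidates if fr not in chosen),
                   key=lambda fr: rss(chosen + [fr]))
        chosen.append(best)
    return chosen
```

Each candidate is scored in context with the frequencies already chosen, which is what distinguishes pre-backfitting matching pursuit from simply ranking periodogram peaks.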

A data vector Φ is represented as a weighted sum of sinusoidal basis functions, tabulated in a matrix A by evaluating each function at the sample times, with weight vector x:

    Φ ≈ Ax,

where the weight vector x is chosen to minimize the sum of squared errors in approximating Φ. The solution for x is closed-form, using standard linear regression: [17]

    x = (AᵀA)⁻¹AᵀΦ.

Here the matrix A can be based on any set of functions that are mutually independent (not necessarily orthogonal) when evaluated at the sample times; for spectral analysis, the functions used are typically sines and cosines evenly distributed over the frequency range of interest. If too many frequencies are chosen in a too-narrow frequency range, the functions will not be sufficiently independent, the matrix will be badly conditioned, and the resulting spectrum will not be meaningful. [17]

When the basis functions in A are orthogonal (that is, not correlated, meaning the columns have zero pair-wise dot products), the matrix AᵀA is diagonal; when the columns all have the same power (sum of squares of elements), then that matrix is the identity matrix times a constant, so the inversion is trivial. The latter is the case when the sample times are equally spaced and the sinusoids are chosen to be sines and cosines equally spaced in pairs on the frequency interval 0 to a half cycle per sample (spaced by 1/N cycle per sample, omitting the sine phases at 0 and the maximum frequency, where they are identically zero). This particular case is known as the discrete Fourier transform, slightly rewritten in terms of real data and coefficients. [17]

    x = AᵀΦ    (DFT case for N equally spaced samples and frequencies, within a scalar factor)
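
To make the special case concrete, the following sketch (Python/NumPy; the toy record length and names are our own) builds the real sine/cosine basis for N equally spaced samples and checks that AᵀA is diagonal, so the regression collapses to normalized dot products:

```python
import numpy as np

N = 16
t = np.arange(N)                          # equally spaced sample times

# Real DFT basis: constant, cosine/sine pairs at k/N cycles per sample,
# and the half-cycle-per-sample cosine; the sine phases at 0 and at the
# maximum frequency are omitted because they are identically zero there.
cols = [np.ones(N)]
for k in range(1, N // 2):
    cols.append(np.cos(2 * np.pi * k * t / N))
    cols.append(np.sin(2 * np.pi * k * t / N))
cols.append(np.cos(np.pi * t))            # Nyquist cosine: alternating +1/-1
A = np.column_stack(cols)                 # N x N, mutually orthogonal columns

G = A.T @ A                               # diagonal: (N, N/2, ..., N/2, N)
assert np.allclose(G, np.diag(np.diag(G)))

phi = np.random.default_rng(0).standard_normal(N)
x = (A.T @ phi) / np.diag(G)              # trivial inversion of diagonal G
assert np.allclose(A @ x, phi)            # exact: as many basis functions as samples
```

With unequally spaced times the same closed-form recipe applies, but AᵀA acquires off-diagonal terms and a general solver such as numpy.linalg.lstsq is needed.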

The Lomb method

Trying to lower the computational burden of the Vaníček method in 1976 [7] (no longer an issue today), Lomb proposed using the above simplification in general, except for pair-wise correlations between the sine and cosine bases of the same frequency, since the correlations between pairs of sinusoids are often small, at least when they are not too closely spaced. This is essentially the traditional periodogram formulation, but now adapted for use with unevenly spaced samples. The vector x is a good estimate of an underlying spectrum, but since correlations are ignored, Ax is no longer a good approximation to the signal, and the method is no longer a least-squares method – yet it has continued to be referred to as such.

Rather than just taking dot products of the data with sine and cosine waveforms directly, Scargle modified the standard periodogram formula to first find a time delay τ such that this pair of sinusoids would be mutually orthogonal at the sample times tⱼ, and also adjusted for the potentially unequal powers of these two basis functions, to obtain a better estimate of the power at a frequency, [3] [8] which made his modified periodogram method exactly equivalent to Lomb's method. The time delay τ is defined by the formula

    tan(2ωτ) = (Σⱼ sin 2ωtⱼ) / (Σⱼ cos 2ωtⱼ).

The periodogram at frequency ω is then estimated as:

    Pₓ(ω) = ½ [ (Σⱼ Xⱼ cos ω(tⱼ − τ))² / Σⱼ cos² ω(tⱼ − τ) + (Σⱼ Xⱼ sin ω(tⱼ − τ))² / Σⱼ sin² ω(tⱼ − τ) ],

which, Scargle reports, then has the same statistical distribution as the periodogram in the evenly sampled case. [8]

At any individual frequency ω, this method gives the same power as does a least-squares fit to sinusoids of that frequency, of the form

    y(t) = a cos ωt + b sin ωt. [18]
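
The two formulas above translate directly into code; here is a minimal sketch (Python/NumPy; the function and variable names are our own, and strictly positive frequencies are assumed):

```python
import numpy as np

def lomb_scargle(t, y, omegas):
    """Sketch of the Lomb–Scargle periodogram of samples y at times t,
    evaluated at strictly positive angular frequencies omegas."""
    y = y - y.mean()                      # the classic form assumes zero mean
    power = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        # Scargle's time delay tau, from tan(2*w*tau) above: it makes the
        # shifted cosine and sine mutually orthogonal at the sample times.
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        # Projections onto the two basis functions, normalized by their powers.
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power
```

SciPy's scipy.signal.lombscargle computes the same quantity (it also takes angular frequencies) and can serve as a cross-check for a hand-rolled version like this one.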

The generalized Lomb–Scargle periodogram

The standard Lomb–Scargle periodogram is valid for a model with zero mean. Commonly, this is approximated by subtracting the mean of the data before calculating the periodogram. However, this is an inaccurate assumption when the mean of the model (the fitted sinusoids) is non-zero. The generalized Lomb–Scargle periodogram removes this assumption, and explicitly solves for the mean. In this case, the function fitted is

    y(t) = a cos ωt + b sin ωt + c. [19]

The generalized Lomb–Scargle periodogram has also been referred to as a floating mean periodogram. [20]
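
Because the generalized periodogram is just a per-frequency linear least-squares fit with an added constant column, it can be sketched in a few lines; in the sketch below (Python/NumPy), the names and the explained-variance normalization, one common convention, are our own choices:

```python
import numpy as np

def generalized_lomb_scargle(t, y, omegas):
    """Sketch of a floating-mean periodogram: fit a*cos + b*sin + c at each
    angular frequency and report the fraction of data variance explained."""
    ss_about_mean = np.sum((y - y.mean()) ** 2)
    power = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        # Design matrix with an explicit constant column for the mean.
        A = np.column_stack((np.cos(w * t), np.sin(w * t), np.ones_like(t)))
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        power[i] = 1.0 - np.sum(resid ** 2) / ss_about_mean   # in [0, 1]
    return power
```

Astropy's astropy.timeseries.LombScargle implements a floating-mean form of this kind via its fit_mean option, along with statistically calibrated normalizations.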

Korenberg's "fast orthogonal search" method

Michael Korenberg of Queen's University in Kingston, Ontario, developed a method for choosing a sparse set of components from an over-complete set, such as sinusoidal components for spectral analysis, called fast orthogonal search (FOS). Mathematically, FOS uses a slightly modified Cholesky decomposition in a mean-square error reduction (MSER) process, implemented as a sparse matrix inversion. [15] [21] As with the other LSSA methods, FOS avoids the major shortcoming of discrete Fourier analysis: it can achieve highly accurate identification of embedded periodicities and excels with unequally spaced data. The fast orthogonal search method has also been applied to other problems, such as nonlinear system identification.

Palmer's Chi-squared method

Palmer has developed a method for finding the best-fit function to any chosen number of harmonics, allowing more freedom to find non-sinusoidal harmonic functions. [22] This method is a fast technique (FFT-based) for doing weighted least-squares analysis on arbitrarily spaced data with non-uniform standard errors. Source code that implements this technique is available. [23] Because data are often not sampled at uniformly spaced discrete times, this method "grids" the data by sparsely filling a time series array at the sample times. All intervening grid points receive zero statistical weight, equivalent to having infinite error bars at times between samples.

Applications

The most useful feature of the LSSA method is enabling incomplete records to be spectrally analyzed, without the need to manipulate the record or to invent otherwise non-existent data.

Magnitudes in the LSSA spectrum depict the contribution of a frequency or period to the variance of the time series. [13] Generally, spectral magnitudes defined in the above manner enable a straightforward significance-level regime for the output. [24] Alternatively, magnitudes in the Vaníček spectrum can also be expressed in dB. [25] Note that magnitudes in the Vaníček spectrum follow a β-distribution. [26]

Inverse transformation of Vaníček's LSSA is possible, as is most easily seen by writing the forward transform as a matrix; the matrix inverse (when the matrix is not singular) or pseudo-inverse will then be an inverse transformation; the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points. [17] No such inverse procedure is known for the periodogram method.

Implementation

The LSSA can be implemented in less than a page of MATLAB code. [27] In essence: [16]

"to compute the least-squares spectrum we must compute m spectral values ... which involves performing the least-squares approximation m times, each time to get [the spectral power] for a different frequency"

That is, for each frequency in a desired set of frequencies, sine and cosine functions are evaluated at the times corresponding to the data samples, and dot products of the data vector with the sinusoid vectors are taken and appropriately normalized. Following the Lomb/Scargle periodogram method, a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product, as described by Craymer. [17] Finally, a power is computed from those two amplitude components. This same process implements a discrete Fourier transform when the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record.

This method treats each sinusoidal component independently, or out of context, even though they may not be orthogonal on the data points; it is Vaníček's original method. In contrast, as Craymer explains, it is also possible to perform a full simultaneous or in-context least-squares fit by solving a matrix equation, partitioning the total data variance between the specified sinusoid frequencies. [17] Such a matrix least-squares solution is natively available in MATLAB as the backslash operator. [28]
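
A minimal sketch of the contrast (Python/NumPy; the toy data and names are our own): the out-of-context version projects the data onto each sinusoid pair independently, while the in-context version solves one joint system, which is what MATLAB's backslash, or numpy.linalg.lstsq here, does:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 60))          # unevenly spaced times
y = np.cos(2 * np.pi * 0.12 * t) + 0.2 * rng.standard_normal(t.size)

freqs = np.arange(0.02, 0.50, 0.02)               # cycles per unit time
A = np.column_stack([f(2 * np.pi * fr * t)
                     for fr in freqs for f in (np.cos, np.sin)])

# Out-of-context: each column treated independently (periodogram-style),
# ignoring correlations between non-orthogonal columns.
x_out = (A.T @ y) / np.sum(A * A, axis=0)

# In-context: simultaneous fit, partitioning the variance among all the
# chosen frequencies (MATLAB: x_in = A \ y). Requires no more columns
# than data samples and a reasonably well-conditioned A.
x_in, *_ = np.linalg.lstsq(A, y, rcond=None)

power_out = x_out[0::2] ** 2 + x_out[1::2] ** 2   # one value per frequency
power_in = x_in[0::2] ** 2 + x_in[1::2] ** 2
```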

Craymer explains that the simultaneous or in-context method, as opposed to the independent or out-of-context version (as well as the periodogram version due to Lomb), cannot fit more components (sines and cosines) than there are data samples, and further that: [17]

"...serious repercussions can also arise if the selected frequencies result in some of the Fourier components (trig functions) becoming nearly linearly dependent with each other, thereby producing an ill-conditioned or near singular N. To avoid such ill conditioning it becomes necessary to either select a different set of frequencies to be estimated (e.g., equally spaced frequencies) or simply neglect the correlations in N (i.e., the off-diagonal blocks) and estimate the inverse least squares transform separately for the individual frequencies..."

Lomb's periodogram method, on the other hand, can use an arbitrarily high number of, or density of, frequency components, as in a standard periodogram; that is, the frequency domain can be over-sampled by an arbitrary factor. [3]

In Fourier analysis, such as the Fourier transform or the discrete Fourier transform, the sinusoids being fitted to the data are all mutually orthogonal, so there is no distinction between the simple out-of-context dot-product-based projection onto basis functions versus an in-context simultaneous least-squares fit; that is, no matrix inversion is required to least-squares partition the variance between orthogonal sinusoids of different frequencies. [29] This method is usually preferred for its efficient fast Fourier transform implementation, when complete data records with equally spaced samples are available.

See also

Discrete Fourier transform
Operator (mathematics)
Fourier transform
Discrete sine transform
Window function
Sine wave
Periodogram
Hartley transform
Signed frequency
Spectrum continuation analysis
Prony's method
MUSIC (algorithm)
Geophysical survey
Maximum entropy spectral estimation
Spectral density estimation
Sinusoidal model
SigSpec
Chirp spectrum
Frequency-selective surface

References

  1. Cafer Ibanoglu (2000). Variable Stars As Essential Astrophysical Tools. Springer. ISBN 0-7923-6084-2.
  2. D. Scott Birney; David Oesper; Guillermo Gonzalez (2006). Observational Astronomy. Cambridge University Press. ISBN 0-521-85370-2.
  3. Press (2007). Numerical Recipes (3rd ed.). Cambridge University Press. ISBN 978-0-521-88068-8.
  4. J. Taylor; S. Hamilton (20 March 1972). "Some tests of the Vaníček Method of spectral analysis". Astrophysics and Space Science. 17 (2): 357–367. Bibcode:1972Ap&SS..17..357T. doi:10.1007/BF00642907. S2CID 123569059.
  5. M. Omerbashich (26 June 2006). "Gauss-Vanicek spectral analysis of the Sepkoski compendium: no new life cycles". Computing in Science & Engineering. 8 (4): 26–30. arXiv:math-ph/0608014. Bibcode:2006CSE.....8d..26O. doi:10.1109/MCSE.2006.68.
  6. Hans P. A. Van Dongen (1999). "Searching for Biological Rhythms: Peak Detection in the Periodogram of Unequally Spaced Data". Journal of Biological Rhythms. 14 (6): 617–620. doi:10.1177/074873099129000984. PMID 10643760. S2CID 14886901.
  7. Lomb, N. R. (1976). "Least-squares frequency analysis of unequally spaced data". Astrophysics and Space Science. 39 (2): 447–462. Bibcode:1976Ap&SS..39..447L. doi:10.1007/BF00648343. S2CID 2671466.
  8. Scargle, J. D. (1982). "Studies in astronomical time series analysis. II - Statistical aspects of spectral analysis of unevenly spaced data". Astrophysical Journal. 263: 835. Bibcode:1982ApJ...263..835S. doi:10.1086/160554.
  9. David Brunt (1931). The Combination of Observations (2nd ed.). Cambridge University Press.
  10. Barning, F. J. M. (1963). "The numerical analysis of the light-curve of 12 Lacertae". Bulletin of the Astronomical Institutes of the Netherlands. 17: 22. Bibcode:1963BAN....17...22B.
  11. Pascal Vincent; Yoshua Bengio (2002). "Kernel Matching Pursuit" (PDF). Machine Learning. 48: 165–187. doi:10.1023/A:1013955821559.
  12. Y. C. Pati; R. Rezaiifar; P. S. Krishnaprasad (1993). "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition". In A. Singh (ed.), Proc. 27th Asilomar Conference on Signals, Systems and Computers. Los Alamitos, CA: IEEE Computer Society Press.
  13. Vaníček, P. (1969). "Approximate spectral analysis by least-squares fit". Astrophysics and Space Science. 4 (4): 387–391. Bibcode:1969Ap&SS...4..387V. doi:10.1007/BF00651344. S2CID 124921449.
  14. Vaníček, P. (1971). "Further development and properties of the spectral analysis by least-squares". Astrophysics and Space Science. 12 (1): 10–33. Bibcode:1971Ap&SS..12...10V. doi:10.1007/BF00656134. S2CID 109404359.
  15. Korenberg, M. J. (1989). "A robust orthogonal algorithm for system identification and time-series analysis". Biological Cybernetics. 60 (4): 267–276. doi:10.1007/BF00204124. PMID 2706281. S2CID 11712196.
  16. Wells, D. E.; Vaníček, P.; Pagiatakis, S. (1985). Least-squares spectral analysis revisited. Department of Surveying Engineering Technical Report 84, University of New Brunswick, Fredericton, 68 pages.
  17. Craymer, M. R. (1998). The Least Squares Spectrum, Its Inverse Transform and Autocorrelation Function: Theory and Some Applications in Geodesy. Ph.D. dissertation, University of Toronto, Canada.
  18. William J. Emery; Richard E. Thomson (2001). Data Analysis Methods in Physical Oceanography. Elsevier. ISBN 0-444-50756-6.
  19. M. Zechmeister; M. Kürster (March 2009). "The generalised Lomb–Scargle periodogram. A new formalism for the floating-mean and Keplerian periodograms". Astronomy & Astrophysics. 496 (2): 577–584. arXiv:0901.2573. Bibcode:2009A&A...496..577Z. doi:10.1051/0004-6361:200811296. S2CID 10408194.
  20. Andrew Cumming; Geoffrey W. Marcy; R. Paul Butler (December 1999). "The Lick Planet Search: Detectability and Mass Thresholds". The Astrophysical Journal. 526 (2): 890–915. arXiv:astro-ph/9906466. Bibcode:1999ApJ...526..890C. doi:10.1086/308020. S2CID 12560512.
  21. Korenberg, Michael J.; Brenan, Colin J. H.; Hunter, Ian W. (1997). "Raman Spectral Estimation via Fast Orthogonal Search". The Analyst. 122 (9): 879–882. Bibcode:1997Ana...122..879K. doi:10.1039/a700902j.
  22. Palmer, David M. (2009). "A Fast Chi-squared Technique For Period Search of Irregularly Sampled Data". The Astrophysical Journal. 695 (1): 496–502. arXiv:0901.1913. Bibcode:2009ApJ...695..496P. doi:10.1088/0004-637X/695/1/496. S2CID 5991300.
  23. "David Palmer: The Fast Chi-squared Period Search".
  24. Beard, A. G.; Williams, P. J. S.; Mitchell, N. J.; Muller, H. G. (2001). "A special climatology of planetary waves and tidal variability". Journal of Atmospheric and Solar-Terrestrial Physics. 63 (9): 801–811.
  25. Pagiatakis, S. (1999). "Stochastic significance of peaks in the least-squares spectrum". Journal of Geodesy. 73: 67–78.
  26. Steeves, R. R. (1981). "A statistical test for significance of peaks in the least squares spectrum". Collected Papers of the Geodetic Survey, Department of Energy, Mines and Resources, Surveys and Mapping, Ottawa, Canada: 149–166.
  27. Richard A. Muller; Gordon J. MacDonald (2000). Ice Ages and Astronomical Causes: Data, spectral analysis and mechanisms (1st ed.). Springer Berlin Heidelberg. Bibcode:2000iaac.book.....M. ISBN 978-3-540-43779-6.
  28. Timothy A. Davis; Kermit Sigmon (2005). MATLAB Primer. CRC Press. ISBN 1-58488-523-8.
  29. Darrell Williamson (1999). Discrete-Time Signal Processing: An Algebraic Approach. Springer. ISBN 1-85233-161-5.