Spectral concentration problem

[Figure: The three leading Slepian sequences for T = 1000 and 2WT = 6. Note that each higher-order sequence has an extra zero crossing.]

The spectral concentration problem in Fourier analysis refers to finding a time sequence of a given length whose discrete Fourier transform is maximally localized on a given frequency interval, as measured by the spectral concentration.


Spectral concentration

The discrete-time Fourier transform (DTFT) U(f) of a finite series w_t, t = 1, 2, ..., T, is defined as

$$U(f) = \sum_{t=1}^{T} w_t \, e^{-i 2\pi f t}.$$
In the following, the sampling interval is taken as Δt = 1, and hence the frequency interval is f ∈ [-1/2, 1/2]. U(f) is a periodic function with period 1.

For a given frequency W such that 0 < W < 1/2, the spectral concentration λ(T, W) of U(f) on the interval [-W, W] is defined as the ratio of the power of U(f) contained in the frequency band [-W, W] to the power of U(f) contained in the entire frequency band [-1/2, 1/2]. That is,

$$\lambda(T, W) = \frac{\int_{-W}^{W} |U(f)|^2 \, df}{\int_{-1/2}^{1/2} |U(f)|^2 \, df}.$$
It can be shown that U(f) has only isolated zeros, and hence λ(T, W) < 1 (see [1]). Thus the spectral concentration is strictly less than one: there is no finite sequence whose DTFT can be confined to the band [-W, W] and made to vanish outside this band.
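To make the definition concrete, the following sketch (an illustration, not part of the original article) numerically estimates λ(T, W) for a plain rectangular sequence w_t = 1 by evaluating U(f) on a dense frequency grid with NumPy; the choices T = 100, W = 0.04 and the grid resolution are arbitrary example parameters.

```python
import numpy as np

T, W = 100, 0.04            # sequence length and half-bandwidth (example values)
w = np.ones(T)              # rectangular (boxcar) taper

# Dense frequency grid on [-1/2, 1/2] and the DTFT U(f) = sum_t w_t e^{-i 2 pi f t}
f = np.linspace(-0.5, 0.5, 20001)
t = np.arange(1, T + 1)
U = np.exp(-2j * np.pi * np.outer(f, t)) @ w

power = np.abs(U) ** 2
inside = np.abs(f) <= W

# Spectral concentration: power in [-W, W] divided by total power in [-1/2, 1/2]
# (the uniform grid spacing cancels in the ratio)
lam = power[inside].sum() / power.sum()
print(f"concentration of the rectangular taper: {lam:.4f}")
```

The same ratio can be computed for any candidate taper w; the optimal taper discussed below is the one that maximizes it.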

Statement of the problem

Among all sequences {w_t} for a given T and W, is there a sequence whose spectral concentration is maximum? In other words, is there a sequence for which the sidelobe energy outside the frequency band [-W, W] is minimum?

The answer is yes; such a sequence indeed exists and can be found by optimizing λ(T, W). Thus maximising the power in the band,

$$\int_{-W}^{W} |U(f)|^2 \, df,$$

subject to the constraint that the total power is fixed, say

$$\int_{-1/2}^{1/2} |U(f)|^2 \, df = 1,$$

leads to the following equation satisfied by the optimal sequence w_t:

$$\sum_{t'=1}^{T} \frac{\sin\bigl(2\pi W (t - t')\bigr)}{\pi (t - t')} \, w_{t'} = \lambda(T, W)\, w_t .$$

This is an eigenvalue equation for a symmetric matrix given by

$$M_{t,t'} = \frac{\sin\bigl(2\pi W (t - t')\bigr)}{\pi (t - t')},$$

with the diagonal entries given by the limiting value M_{t,t} = 2W.
It can be shown that this matrix is positive definite and, because each eigenvalue equals the spectral concentration of its eigenvector, all of its eigenvalues lie between 0 and 1. The largest eigenvalue corresponds to the largest possible spectral concentration, and the corresponding eigenvector is the required optimal sequence w_t. This sequence is called the 0th-order Slepian sequence (also known as a discrete prolate spheroidal sequence, or DPSS); it is the unique taper with maximally suppressed sidelobes.
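This eigenvalue problem is straightforward to solve numerically. The sketch below (an illustration under assumed example values T = 100 and W = 0.04, not code from the article) builds the T×T matrix with entries sin(2πW(t − t'))/(π(t − t')), uses the limiting value 2W on the diagonal, and extracts the eigenvector of the largest eigenvalue with NumPy.

```python
import numpy as np

T, W = 100, 0.04                         # example sequence length and half-bandwidth
t = np.arange(T)
dt = t[:, None] - t[None, :]             # only the differences t - t' enter the matrix

# M[t, t'] = sin(2*pi*W*(t - t')) / (pi*(t - t')), with the limit 2W on the diagonal
with np.errstate(invalid="ignore", divide="ignore"):
    M = np.sin(2 * np.pi * W * dt) / (np.pi * dt)
np.fill_diagonal(M, 2 * W)

# Symmetric eigenproblem; eigh returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(M)
lam0 = eigvals[-1]                       # largest eigenvalue = best achievable concentration
w0 = eigvecs[:, -1]                      # 0th-order Slepian sequence (DPSS), unit Euclidean norm

print(f"largest eigenvalue (maximum spectral concentration): {lam0:.6f}")
```

The overall sign of an eigenvector is arbitrary, so in practice the 0th-order sequence is conventionally flipped, if necessary, so that it is positive at its centre.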

It turns out that the number of dominant eigenvalues of the matrix M that are close to 1 corresponds to N = 2WT, called the Shannon number. If the eigenvalues are arranged in decreasing order (i.e., λ_0 > λ_1 > ... > λ_{T-1}), then the eigenvector corresponding to λ_n is called the nth-order Slepian sequence (DPSS), for 0 ≤ n ≤ N-1. This nth-order taper offers the best possible sidelobe suppression among sequences orthogonal to the Slepian sequences of all lower orders. These lower-order Slepian sequences form the basis for spectral estimation by the multitaper method.
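Continuing the same illustrative setup (assumed example values T = 100 and W = 0.04, so 2WT = 8), the sketch below sorts the eigenvalues of M in decreasing order, counts how many exceed an arbitrary threshold of 0.9 for comparison with the Shannon number, and checks that the leading Slepian sequences are pairwise orthogonal.

```python
import numpy as np

T, W = 100, 0.04                                     # Shannon number N = 2*W*T = 8
t = np.arange(T)
dt = t[:, None] - t[None, :]

with np.errstate(invalid="ignore", divide="ignore"):
    M = np.sin(2 * np.pi * W * dt) / (np.pi * dt)
np.fill_diagonal(M, 2 * W)

eigvals, eigvecs = np.linalg.eigh(M)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # reorder into decreasing eigenvalues

print("leading eigenvalues:", np.round(eigvals[:12], 4))
print("eigenvalues > 0.9:", int(np.sum(eigvals > 0.9)), "(compare with 2WT =", int(2 * W * T), ")")

# Pairwise orthogonality of the leading Slepian sequences
# (eigenvectors of a symmetric matrix are orthonormal)
K = 8
G = eigvecs[:, :K].T @ eigvecs[:, :K]
print("max deviation from orthonormality:", np.max(np.abs(G - np.eye(K))))
```

The printed spectrum should show roughly 2WT eigenvalues near 1 followed by a sharp drop, which is why about 2WT tapers are useful in multitaper estimation.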

The spectral concentration problem is not limited to time series: it can be reformulated to apply in multiple Cartesian dimensions [1] and on the surface of a sphere by using spherical harmonics, [2] with applications in geophysics and cosmology, [3] among others.

See also

  - Discrete Fourier transform
  - Discrete-time Fourier transform
  - Fourier analysis
  - Fourier transform
  - Spectral theorem
  - Spectral density
  - Spectral theory
  - Eigenfunction
  - Kaiser window
  - Separation of variables
  - Sturm–Liouville problem
  - Circulant matrix
  - Linear time-invariant system
  - Random matrix
  - Wiener–Khinchin theorem
  - Fourier–Bessel series
  - Multitaper
  - Prolate spheroidal wave functions
  - Spectral theory of ordinary differential equations
  - Heat kernel signature

References

  1. Simons, F. J.; Wang, D. V. (2011). "Spatiospectral concentration in the Cartesian plane". Int. J. Geomath. 2: 1–36. doi:10.1007/s13137-011-0016-z.
  2. Simons, F. J.; Dahlen, F. A.; Wieczorek, M. A. (2006). "Spatiospectral Concentration on a Sphere". SIAM Review. 48 (3): 504–536. doi:10.1137/S0036144504445765.
  3. Dahlen, F. A.; Simons, F. J. (2008). "Spectral estimation on a sphere in geophysics and cosmology". Geophysical Journal International. 174 (3): 774–807. doi:10.1111/j.1365-246X.2008.03854.x.