In the field of digital signal processing, the **sampling theorem** is a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of *samples* to capture all the information from a continuous-time signal of finite bandwidth.


Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are bandlimited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples.

Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known. (See § Sampling of non-baseband signals below and compressed sensing.) In some cases (when the sample-rate criterion is not satisfied), utilizing additional constraints allows for approximate reconstructions. The fidelity of these reconstructions can be verified and quantified utilizing Bochner's theorem.^{ [1] }

The name *Nyquist–Shannon sampling theorem* honours Harry Nyquist and Claude Shannon, although it had already been discovered in 1933 by Vladimir Kotelnikov. The theorem was also discovered independently by E. T. Whittaker and by others. It is thus also known by the names *Nyquist–Shannon–Kotelnikov*, *Whittaker–Shannon–Kotelnikov*, *Whittaker–Nyquist–Kotelnikov–Shannon*, and *cardinal theorem of interpolation*.

Sampling is a process of converting a signal (for example, a function of continuous time and/or space) into a sequence of values (a function of discrete time and/or space). Shannon's version of the theorem states:^{ [2] }

If a function *x*(*t*) contains no frequencies higher than *B* hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2*B*) seconds apart.

A sufficient sample rate is therefore anything larger than 2*B* samples per second. Equivalently, for a given sample rate *f*_{s}, perfect reconstruction is guaranteed possible for a bandlimit *B* < *f*_{s}/2.

When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known as aliasing. Modern statements of the theorem are sometimes careful to state explicitly that *x*(*t*) must contain no sinusoidal component at exactly frequency *B*, or that *B* must be strictly less than ½ the sample rate. The threshold 2*B* is called the **Nyquist rate** and is an attribute of the continuous-time input *x*(*t*) to be sampled. The sample rate must exceed the Nyquist rate for the samples to suffice to represent *x*(*t*). The threshold *f*_{s}/2 is called the **Nyquist frequency** and is an attribute of the sampling equipment. All meaningful frequency components of the properly sampled *x*(*t*) exist below the Nyquist frequency. The condition described by these inequalities is called the **Nyquist criterion**, or sometimes the *Raabe condition*. The theorem is also applicable to functions of other domains, such as *space*, in the case of a digitized image. The only change, in the case of other domains, is the units of measure applied to *t*, *f*_{s}, and *B*.
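These relationships can be sketched in a few lines of Python; the helper names here are purely illustrative, not from any standard library:

```python
def nyquist_rate(bandlimit_hz: float) -> float:
    """Nyquist rate: an attribute of the *signal* (twice its bandlimit B)."""
    return 2.0 * bandlimit_hz

def nyquist_frequency(sample_rate_hz: float) -> float:
    """Nyquist frequency: an attribute of the *sampler* (half the sample rate)."""
    return sample_rate_hz / 2.0

def satisfies_nyquist_criterion(bandlimit_hz: float, sample_rate_hz: float) -> bool:
    """The strict inequality f_s > 2B required for guaranteed reconstruction."""
    return sample_rate_hz > nyquist_rate(bandlimit_hz)

# CD audio at 44.1 kHz properly captures signals bandlimited to 20 kHz...
print(satisfies_nyquist_criterion(20_000, 44_100))   # True
# ...but not signals with content up to 24 kHz.
print(satisfies_nyquist_criterion(24_000, 44_100))   # False
```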

The symbol *T* = 1/*f*_{s} is customarily used to represent the interval between samples and is called the **sample period** or **sampling interval**. The samples of function *x*(*t*) are commonly denoted by *x*[*n*] = *x*(*nT*) (alternatively "*x*_{n}" in older signal-processing literature), for all integer values of *n*.

Practical digital-to-analog converters produce neither scaled and delayed sinc functions, nor ideal Dirac pulses. Instead they produce a piecewise-constant sequence of scaled and delayed rectangular pulses (the zero-order hold), usually followed by a lowpass filter (called an *"anti-imaging filter"*) to remove spurious high-frequency replicas (images) of the original baseband signal.
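A zero-order hold can be modeled numerically as holding each sample value constant for one sample interval; the anti-imaging filter that follows it in a real DAC is omitted in this minimal NumPy sketch:

```python
import numpy as np

def zero_order_hold(samples, upsample_factor: int):
    """Model a DAC's zero-order hold: each sample value is held constant
    for one sample interval, represented on a finer time grid."""
    return np.repeat(np.asarray(samples, dtype=float), upsample_factor)

samples = [1.0, -0.5, 2.0]
staircase = zero_order_hold(samples, 4)
print(staircase)  # each value repeated 4 times: a piecewise-constant staircase
```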

When *x*(*t*) is a function with a Fourier transform *X*(*f*):

$$X(f)\ \triangleq\ \int_{-\infty}^{\infty} x(t)\ e^{-i 2\pi f t}\, dt,$$

the Poisson summation formula indicates that the samples, *x*(*nT*), of *x*(*t*) are sufficient to create a periodic summation of *X*(*f*). The result is:

$$X_s(f)\ \triangleq\ \sum_{k=-\infty}^{\infty} X\!\left(f - k f_s\right) = \sum_{n=-\infty}^{\infty} T\cdot x(nT)\ e^{-i 2\pi n T f}, \tag{Eq.1}$$

which is a periodic function and its equivalent representation as a Fourier series, whose coefficients are *T*·*x*(*nT*). This function is also known as the discrete-time Fourier transform (DTFT) of the sample sequence.
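Eq.1 can be checked numerically. The Gaussian e^{−πt²} is its own Fourier transform, so both sides of Eq.1 can be evaluated with truncated sums (a sketch; the Gaussian is not strictly bandlimited, but its spectrum decays fast enough for this check, and the truncation limits are illustrative):

```python
import numpy as np

T = 0.5           # sample period
fs = 1.0 / T      # sample rate
f = 0.3           # frequency at which to evaluate both sides of Eq.1

x = lambda t: np.exp(-np.pi * t**2)   # x(t); its transform is X(f) = exp(-pi f^2)
X = lambda f: np.exp(-np.pi * f**2)

# Left side of Eq.1: periodic summation of X, shifted by multiples of fs.
k = np.arange(-20, 21)
lhs = np.sum(X(f - k * fs))

# Right side of Eq.1: Fourier series with coefficients T*x(nT)
# (the DTFT of the sample sequence).
n = np.arange(-40, 41)
rhs = np.sum(T * x(n * T) * np.exp(-2j * np.pi * n * T * f))

print(abs(lhs - rhs))  # agreement up to floating-point/truncation error
```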

As depicted, copies of *X*(*f*) are shifted by multiples of *f*_{s} and combined by addition. For a band-limited function and sufficiently large *f*_{s}, it is possible for the copies to remain distinct from each other. But if the Nyquist criterion is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous *X*(*f*). Any frequency component above *f*_{s}/2 is indistinguishable from a lower-frequency component, called an *alias*, associated with one of the copies. In such cases, the customary interpolation techniques produce the alias, rather than the original component. When the sample rate is pre-determined by other considerations (such as an industry standard), *x*(*t*) is usually filtered to reduce its high frequencies to acceptable levels before it is sampled. The type of filter required is a lowpass filter, and in this application it is called an anti-aliasing filter.
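Aliasing can be demonstrated directly: at *f*_{s} = 10 Hz, a 7 Hz cosine produces exactly the same samples as its 3 Hz alias (a minimal NumPy sketch):

```python
import numpy as np

fs = 10.0                 # sample rate (Hz); the Nyquist frequency is 5 Hz
n = np.arange(50)
t = n / fs                # sampling instants

above = np.cos(2 * np.pi * 7.0 * t)   # 7 Hz: above the Nyquist frequency
alias = np.cos(2 * np.pi * 3.0 * t)   # 3 Hz alias, since 7 = 10 - 3

# The two sinusoids are indistinguishable from their samples alone.
print(np.allclose(above, alias))  # True
```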

When there is no overlap of the copies (also known as "images") of *X*(*f*), the *k* = 0 term of **Eq.1** can be recovered by the product:

$$X(f) = H(f)\cdot X_s(f),$$

where:

$$H(f)\ \triangleq\ \begin{cases} 1 & |f| < B \\ 0 & |f| > f_s - B. \end{cases}$$

The sampling theorem is proved since *X*(*f*) uniquely determines *x*(*t*).

All that remains is to derive the formula for reconstruction. *H*(*f*) need not be precisely defined in the region [*B*, *f*_{s} − *B*] because *X*_{s}(*f*) is zero in that region. However, the worst case is when *B* = *f*_{s}/2, the Nyquist frequency. A function that is sufficient for that and all less severe cases is:

$$H(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right) = \begin{cases} 1 & |f| < \frac{f_s}{2} \\ 0 & |f| > \frac{f_s}{2}, \end{cases}$$

where rect(•) is the rectangular function. Therefore:

$$X(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right)\cdot X_s(f)$$

$$= \mathrm{rect}(Tf)\cdot \sum_{n=-\infty}^{\infty} T\cdot x(nT)\ e^{-i 2\pi n T f} \qquad \text{(from Eq.1, above)}$$

$$= \sum_{n=-\infty}^{\infty} x(nT)\cdot \underbrace{T\cdot \mathrm{rect}(Tf)\cdot e^{-i 2\pi n T f}}_{\mathcal{F}\left\{\mathrm{sinc}\left(\frac{t-nT}{T}\right)\right\}}.\ ^{ [upper-alpha 1] }$$

The inverse transform of both sides produces the Whittaker–Shannon interpolation formula:

$$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\cdot \mathrm{sinc}\!\left(\frac{t - nT}{T}\right),$$

which shows how the samples, *x*(*nT*), can be combined to reconstruct *x*(*t*).

- Larger-than-necessary values of *f*_{s} (smaller values of *T*), called *oversampling*, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a *transition band* in which *H*(*f*) is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation.
- Theoretically, the interpolation formula can be implemented as a lowpass filter, whose impulse response is sinc(*t*/*T*) and whose input is $\textstyle\sum_{n=-\infty}^{\infty} x(nT)\cdot \delta(t - nT)$, which is a Dirac comb function modulated by the signal samples. Practical digital-to-analog converters (DAC) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error.
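The interpolation formula can be tried numerically with a truncated sum. Here the test signal sinc²(*t*) is bandlimited (its Fourier transform is a triangle supported on |*f*| ≤ 1 Hz), and its samples decay fast enough that a finite sum reconstructs it accurately (a sketch; the truncation limits are illustrative):

```python
import numpy as np

fs = 4.0           # sample rate, comfortably above the Nyquist rate 2B = 2 Hz
T = 1.0 / fs

x = lambda t: np.sinc(t) ** 2          # np.sinc(t) = sin(pi t)/(pi t); bandlimit B = 1 Hz

n = np.arange(-200, 201)               # truncated range of sample indices
samples = x(n * T)

def reconstruct(t):
    """Whittaker-Shannon interpolation: sum of samples times shifted sincs."""
    return np.sum(samples * np.sinc((t - n * T) / T))

for t in (0.37, 1.25, -2.6):
    print(abs(reconstruct(t) - x(t)))  # tiny truncation error at each point
```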

Poisson shows that the Fourier series in **Eq.1** produces the periodic summation of *X*(*f*), regardless of *f*_{s} and *B*. Shannon, however, only derives the series coefficients for the case *f*_{s} = 2*B*. Virtually quoting Shannon's original paper:

Let *X*(*ω*) be the spectrum of *x*(*t*). Then

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\ e^{i\omega t}\, d\omega = \frac{1}{2\pi}\int_{-2\pi B}^{2\pi B} X(\omega)\ e^{i\omega t}\, d\omega,$$

because *X*(*ω*) is assumed to be zero outside the band $\left|\tfrac{\omega}{2\pi}\right| < B.$ If we let $t = \tfrac{n}{2B},$ where *n* is any positive or negative integer, we obtain:

$$x\!\left(\tfrac{n}{2B}\right) = \frac{1}{2\pi}\int_{-2\pi B}^{2\pi B} X(\omega)\ e^{i\omega \frac{n}{2B}}\, d\omega. \tag{Eq.2}$$

On the left are values of *x*(*t*) at the sampling points. The integral on the right will be recognized as essentially^{ [lower-alpha 1] } the *n*^{th} coefficient in a Fourier-series expansion of the function *X*(*ω*), taking the interval $-2\pi B$ to $2\pi B$ as a fundamental period. This means that the values of the samples $x\!\left(\tfrac{n}{2B}\right)$ determine the Fourier coefficients in the series expansion of *X*(*ω*). Thus they determine *X*(*ω*), since *X*(*ω*) is zero for frequencies greater than *B*, and for lower frequencies *X*(*ω*) is determined if its Fourier coefficients are determined. But *X*(*ω*) determines the original function *x*(*t*) completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function *x*(*t*) completely.

Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula, as discussed above. He does not derive or prove the properties of the sinc function, but these would have been familiar to engineers reading his works at the time, since the Fourier pair relationship between rect (the rectangular function) and sinc was well known.

Let $x_n$ be the *n*^{th} sample. Then the function *x*(*t*) is represented by:

$$x(t) = \sum_{n=-\infty}^{\infty} x_n \frac{\sin\!\big(\pi(2Bt - n)\big)}{\pi(2Bt - n)}.$$

As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes.

The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely—one for the row, and one for the column.

Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors—red, green, and blue, or *RGB* for short. Other colorspaces using 3-vectors for colors include HSV, CIELAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain.

Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high spatial frequencies (in other words, small distances between the stripes) can exhibit aliasing of the stripes when it is sampled by the camera's image sensor. The aliasing appears as a moiré pattern. The "solution" to sampling at a higher rate in the spatial domain for this case would be to move closer to the shirt, use a higher-resolution sensor, or to optically blur the image before acquiring it with the sensor.

Another example is shown to the right in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through a low-pass filter first and then downsamples the image to result in a smaller image that does not exhibit the moiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results.
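The brick and striped-shirt examples reduce to a toy computation: downsampling a fine stripe pattern by simply discarding pixels turns the stripes into a bogus uniform field (an alias at zero spatial frequency), while low-pass filtering first preserves the correct mean intensity (a minimal NumPy sketch):

```python
import numpy as np

# A stripe pattern at the finest representable frequency: columns alternate 0, 1.
img = np.zeros((8, 8))
img[:, 1::2] = 1.0

# Naive 2x downsampling: keep every other column. Every kept column is 0,
# so the stripes alias into a uniform field -- the pattern has vanished.
naive = img[:, ::2]

# Low-pass filter first (average each adjacent column pair), then downsample.
filtered = (img[:, ::2] + img[:, 1::2]) / 2.0

print(naive[0])     # all zeros: aliased, the stripes vanished entirely
print(filtered[0])  # all 0.5: the correct average intensity is preserved
```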

The sampling theorem applies to camera systems, where the scene and lens constitute an analog spatial signal source, and the image sensor is a spatial sampling device. Each of these components is characterized by a modulation transfer function (MTF), representing the precise resolution (spatial bandwidth) available in that component. Effects of aliasing or blurring can occur when the lens MTF and sensor MTF are mismatched. When the optical image which is sampled by the sensor device contains higher spatial frequencies than the sensor, the undersampling acts as a low-pass filter to reduce or eliminate aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient spatial anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) may be included in a camera system to reduce the MTF of the optical image. Instead of requiring an optical filter, the graphics processing unit of smartphone cameras performs digital signal processing to remove aliasing with a digital filter. Digital filters also apply sharpening to amplify the contrast from the lens at high spatial frequencies, which otherwise falls off rapidly at diffraction limits.

The sampling theorem also applies to post-processing digital images, such as to up or down sampling. Effects of aliasing, blurring, and sharpening may be adjusted with digital filtering implemented in software, which necessarily follows the theoretical principles.

To illustrate the necessity of *f*_{s} > 2*B*, consider the family of sinusoids generated by different values of *θ* in this formula:

$$x(t) = \frac{\cos(2\pi B t + \theta)}{\cos\theta} = \cos(2\pi B t) - \sin(2\pi B t)\tan\theta, \qquad -\pi/2 < \theta < \pi/2.$$

With *f*_{s} = 2*B*, or equivalently *T* = 1/(2*B*), the samples are given by:

$$x(nT) = \cos(\pi n) - \underbrace{\sin(\pi n)}_{0}\tan\theta = (-1)^n,$$

*regardless of the value of θ*. That sort of ambiguity is the reason for the *strict* inequality of the sampling theorem's condition.
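This ambiguity is easy to confirm numerically: sampling at exactly *f*_{s} = 2*B*, every member of the family yields the samples (−1)^*n*, whatever the phase *θ* (a short NumPy check):

```python
import numpy as np

B = 100.0           # bandlimit (Hz)
T = 1.0 / (2 * B)   # sampling at exactly the critical rate f_s = 2B
n = np.arange(16)

expected = (-1.0) ** n
for theta in (0.0, 0.4, -1.2):
    samples = np.cos(2 * np.pi * B * n * T + theta) / np.cos(theta)
    print(np.allclose(samples, expected))  # True for every theta
```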

As discussed by Shannon:^{ [2] }

A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically to single-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained from sin(*x*)/*x* by single-side-band modulation.

That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the *width* of the non-zero frequency interval as opposed to its highest frequency component. See *Sampling (signal processing)* for more details and examples.

For example, in order to sample the FM radio signals in the frequency range of 100–102 MHz, it is not necessary to sample at 204 MHz (twice the upper frequency), but rather it is sufficient to sample at 4 MHz (twice the width of the frequency interval).
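The arithmetic behind the FM example can be written as a small validity check: an undersampling rate *f*_{s} works for the band [*f*_{L}, *f*_{H}] exactly when the band fits between adjacent half-multiples of *f*_{s}, i.e. 2*f*_{H}/(*N*+1) ≤ *f*_{s} ≤ 2*f*_{L}/*N* for some integer *N* (a sketch; the helper name is illustrative, and edge cases at the exact boundaries are treated permissively):

```python
def valid_bandpass_rate(f_low: float, f_high: float, fs: float) -> bool:
    """True if sampling at fs can unambiguously capture the band [f_low, f_high].

    Requires an integer N >= 0 with 2*f_high/(N+1) <= fs <= 2*f_low/N
    (the N = 0 case is ordinary sampling above twice the highest frequency).
    """
    if fs >= 2 * f_high:                     # N = 0: classic baseband condition
        return True
    n_max = int(f_low // (f_high - f_low))   # largest usable number of band shifts
    for n in range(1, n_max + 1):
        if 2 * f_high / (n + 1) <= fs <= 2 * f_low / n:
            return True
    return False

# FM band 100-102 MHz: 4 MHz (twice the 2 MHz bandwidth) suffices...
print(valid_bandpass_rate(100e6, 102e6, 4e6))    # True
# ...while an arbitrary slightly lower rate such as 3.9 MHz does not.
print(valid_bandpass_rate(100e6, 102e6, 3.9e6))  # False
```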

A bandpass condition is that *X*(*f*) = 0 for all nonnegative *f* outside the open band of frequencies:

$$\left(\frac{N}{2} f_s,\ \frac{N+1}{2} f_s\right),$$

for some nonnegative integer *N*. This formulation includes the normal baseband condition as the case *N* = 0.

The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall lowpass filter used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses:

$$(N+1)\,\mathrm{sinc}\!\left(\frac{(N+1)t}{T}\right) - N\,\mathrm{sinc}\!\left(\frac{N t}{T}\right).$$
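One can verify that this difference of lowpass impulse responses is an interpolating kernel: it equals 1 at *t* = 0 and vanishes at every other sample instant *t* = *mT* (a quick NumPy check; *N* = 50 here matches the FM example above):

```python
import numpy as np

N = 50     # band occupies (N/2 * fs, (N+1)/2 * fs); N = 50 for 100-102 MHz at fs = 4 MHz
T = 1.0    # work in units of the sample period

def bandpass_kernel(t):
    """Difference of the two brick-wall lowpass impulse responses."""
    return (N + 1) * np.sinc((N + 1) * t / T) - N * np.sinc(N * t / T)

m = np.arange(-5, 6)                  # a few sample instants t = m*T
values = bandpass_kernel(m * T)

print(abs(values[m == 0][0] - 1.0) < 1e-12)   # kernel is 1 at t = 0
print(np.all(abs(values[m != 0]) < 1e-12))    # and 0 at every other sample point
```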

Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost.

The sampling theory of Shannon can be generalized for the case of nonuniform sampling, that is, samples not taken equally spaced in time. The Shannon sampling theory for non-uniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition.^{ [3] } Therefore, although uniformly spaced samples may allow simpler reconstruction algorithms, uniform spacing is not a necessary condition for perfect reconstruction.

The general theory for non-baseband and nonuniform samples was developed in 1967 by Henry Landau.^{ [4] } He proved that the average sampling rate (uniform or otherwise) must be twice the *occupied* bandwidth of the signal, assuming it is *a priori* known what portion of the spectrum is occupied. In the late 1990s, this work was partially extended to cover signals for which the amount of occupied bandwidth is known but the actual occupied portion of the spectrum is unknown.^{ [5] } In the 2000s, a complete theory was developed (see the section Sampling below the Nyquist rate under additional restrictions below) using compressed sensing. In particular, the theory, using signal-processing language, is described in a 2009 paper.^{ [6] } Its authors show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the Nyquist rate; in other words, you must pay at least a factor of 2 for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability.

The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition.

A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, the *effective* bandwidth *EB*) whose frequency locations are unknown, rather than all together in a single band, so that the passband technique does not apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus 2*B*. Using compressed-sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than 2*EB*. With this approach, reconstruction is no longer given by a formula, but instead by the solution to a linear optimization program.

Another example where sub-Nyquist sampling is optimal arises under the additional constraint that the samples are quantized in an optimal manner, as in a combined system of sampling and optimal lossy compression.^{ [7] } This setting is relevant in cases where the joint effect of sampling and quantization is to be considered, and can provide a lower bound for the minimal reconstruction error that can be attained in sampling and quantizing a random signal. For stationary Gaussian random signals, this lower bound is usually attained at a sub-Nyquist sampling rate, indicating that sub-Nyquist sampling is optimal for this signal model under optimal quantization.^{ [8] }

The sampling theorem was implied by the work of Harry Nyquist in 1928,^{ [9] } in which he showed that up to 2*B* independent pulse samples could be sent through a system of bandwidth *B*; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result^{ [10] } and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step-response sine integral; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a *Küpfmüller filter* (but seldom so in English).

The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon.^{ [2] } V. A. Kotelnikov published similar results in 1933,^{ [11] } as did the mathematician E. T. Whittaker in 1915,^{ [12] } J. M. Whittaker in 1935,^{ [13] } and Gabor in 1946 ("Theory of communication"). In 1999, the Eduard Rhein Foundation awarded Kotelnikov their Basic Research Award "for the first theoretically exact formulation of the sampling theorem".

In 1948 and 1949, Claude E. Shannon published, 16 years after Vladimir Kotelnikov, the two revolutionary articles in which he founded information theory.^{ [14] }^{ [15] }^{ [2] } In Shannon 1948 the sampling theorem is formulated as "Theorem 13": Let *f*(*t*) contain no frequencies over *W*. Then

$$f(t) = \sum_{n=-\infty}^{\infty} X_n \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)},$$

where $X_n = f\!\left(\tfrac{n}{2W}\right).$

It was not until these articles were published that the theorem known as “Shannon’s sampling theorem” became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art.^{ [upper-alpha 2] } A few lines further on, however, he adds: "but in spite of its evident importance, [it] seems not to have appeared explicitly in the literature of communication theory".

Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example, by Jerri^{ [16] } and by Lüke.^{ [17] } For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term *Raabe condition* came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering^{ [18] } mentions several other discoverers and names in a paragraph and pair of footnotes:

As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].^{27}

As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.^{28}

^{27}Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135].

^{28}As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as “the Whittaker–Kotel’nikov–Shannon (WKS) sampling theorem" [155] or even "the Whittaker–Kotel'nikov–Raabe–Shannon–Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136].

Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term *Nyquist Sampling Theorem* (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs,^{ [19] } and appeared again in 1963,^{ [20] } and not capitalized in 1965.^{ [21] } It had been called the *Shannon Sampling Theorem* as early as 1954,^{ [22] } but also just *the sampling theorem* by several other books in the early 1950s.

In 1958, Blackman and Tukey cited Nyquist's 1928 article as a reference for *the sampling theorem of information theory*,^{ [23] } even though that article does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries:

- Sampling theorem (of information theory)
- Nyquist's result that equi-spaced data, with two or more points per cycle of highest frequency, allows reconstruction of band-limited functions. (See *Cardinal theorem*.)
- Cardinal theorem (of interpolation theory)
- A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated to yield a continuous band-limited function with the aid of the sinc function.

Exactly what "Nyquist's result" they are referring to remains mysterious.

When Shannon stated and proved the sampling theorem in his 1949 article, according to Meijering,^{ [18] } "he referred to the critical sampling interval as the *Nyquist interval* corresponding to the band *W*, in recognition of Nyquist’s discovery of the fundamental importance of this interval in connection with telegraphy". This explains Nyquist's name on the critical interval, but not on the theorem.

Similarly, Nyquist's name was attached to *Nyquist rate* in 1953 by Harold S. Black:

"If the essential frequency range is limited to *B* cycles per second, 2*B* was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less than half a quantum step. This rate is generally referred to as **signaling at the Nyquist rate** and 1/(2*B*) has been termed a **Nyquist interval**."^{ [24] } (bold added for emphasis; italics as in the original)

According to the OED, this may be the origin of the term *Nyquist rate*. In Black's usage, it is not a sampling rate, but a signaling rate.

- 44,100 Hz, a customary rate used to sample audible frequencies, is based on the limits of human hearing and the sampling theorem
- Balian–Low theorem, a similar theoretical lower bound on sampling rates, but which applies to time–frequency transforms
- Cheung–Marks theorem, which specifies conditions where restoration of a signal by the sampling theorem can become ill-posed
- Hartley's law
- Nyquist ISI criterion
- Reconstruction from zero crossings
- Zero-order hold

- ↑ The sinc function follows from rows 202 and 102 of the transform tables
- ↑ Shannon 1949, p. 448.

The **Whittaker–Shannon interpolation formula** or **sinc interpolation** is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called **Shannon's interpolation formula** and **Whittaker's interpolation formula**. E. T. Whittaker, who published it in 1915, called it the **Cardinal series**.

In signal processing, a **sinc filter** is an idealized filter that removes all frequency components above a given cutoff frequency, without affecting lower frequencies, and has linear phase response. The filter's impulse response is a sinc function in the time domain, and its frequency response is a rectangular function.

In signal processing, **undersampling** or **bandpass sampling** is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate, but is still able to reconstruct the signal.

In mathematics, the **Gibbs phenomenon,** discovered by Henry Wilbraham (1848) and rediscovered by J. Willard Gibbs (1899), is the peculiar manner in which the Fourier series of a piecewise continuously differentiable periodic function behaves at a jump discontinuity. The *n*th partial sum of the Fourier series has large oscillations near the jump, which might increase the maximum of the partial sum above that of the function itself. The overshoot does not die out as *n* increases, but approaches a finite limit. This sort of behavior was also observed by experimental physicists, but was believed to be due to imperfections in the measuring apparatuses.

The **short-time Fourier transform** (**STFT**) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. One then usually plots the changing spectra as a function of time, known as a spectrogram or waterfall plot.

In signal processing, a **finite impulse response** (**FIR**) **filter** is a filter whose impulse response is of *finite* duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely.

**Bandlimiting** is the limiting of a signal's frequency domain representation or spectral density to zero above a certain finite frequency.

In mathematics and in signal processing, the **Hilbert transform** is a specific linear operator that takes a function *u*(*t*) of a real variable and produces another function of a real variable, *H*(*u*)(*t*). This linear operator is given by convolution with the function 1/(π*t*).

In mathematics, physics and engineering, the **sinc function**, denoted by sinc(*x*), has two slightly different definitions.

In mathematics, **Parseval's theorem** usually refers to the result that the Fourier transform is unitary; loosely, that the sum of the square of a function is equal to the sum of the square of its transform. It originates from a 1799 theorem about series by Marc-Antoine Parseval, which was later applied to the Fourier series. It is also known as **Rayleigh's energy theorem**, or **Rayleigh's identity**, after John William Strutt, Lord Rayleigh.

In mathematics, the **discrete-time Fourier transform** (**DTFT**) is a form of Fourier analysis that is applicable to a sequence of values.

The **rectangular function** is defined as

$$\mathrm{rect}(t) = \begin{cases} 1 & |t| < \tfrac{1}{2} \\ \tfrac{1}{2} & |t| = \tfrac{1}{2} \\ 0 & |t| > \tfrac{1}{2}. \end{cases}$$

In mathematics, a **Dirac comb** is a periodic tempered distribution constructed from Dirac delta functions spaced at some given period *T*:

$$\operatorname{Ш}_{T}(t)\ \triangleq\ \sum_{k=-\infty}^{\infty} \delta(t - kT).$$

In digital signal processing, **downsampling**, **compression**, and **decimation** are terms associated with the process of *resampling* in a multi-rate digital signal processing system. Both *downsampling* and *decimation* can be synonymous with *compression*, or they can describe an entire process of bandwidth reduction (filtering) and sample-rate reduction. When the process is performed on a sequence of samples of a *signal* or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a lower rate.

The **zero-order hold** (**ZOH**) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication.

**First-order hold** (**FOH**) is a mathematical model of the practical reconstruction of sampled signals that could be done by a conventional digital-to-analog converter (DAC) and an analog circuit called an integrator. For FOH, the signal is reconstructed as a piecewise linear approximation to the original signal that was sampled. A mathematical model such as FOH is necessary because, in the sampling and reconstruction theorem, a sequence of Dirac impulses, *x*_{s}(*t*), representing the discrete samples, *x*(*nT*), is low-pass filtered to recover the original signal that was sampled, *x*(*t*). However, outputting a sequence of Dirac impulses is impractical. Devices can be implemented, using a conventional DAC and some linear analog circuitry, to reconstruct the piecewise linear output for either predictive or delayed FOH.

In functional analysis, a **Shannon wavelet** may be of either real or complex type. Signal analysis by ideal bandpass filters defines a decomposition known as Shannon wavelets. The Haar and sinc systems are Fourier duals of each other.

**Nonuniform sampling** is a branch of sampling theory involving results related to the Nyquist–Shannon sampling theorem. It is based on Lagrange interpolation and its relationship to the (uniform) sampling theorem, and it generalises the Whittaker–Shannon–Kotelnikov (WSK) sampling theorem.
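Lagrange interpolation, the tool underlying these results, reconstructs a function from samples at arbitrary (not necessarily equally spaced) points. A minimal sketch (the function name is ours):

```python
import numpy as np

def lagrange_interpolate(t, sample_times, sample_values):
    """Evaluate the Lagrange interpolating polynomial through the points
    (sample_times[k], sample_values[k]) at the query points t."""
    t = np.asarray(t, dtype=float)
    result = np.zeros_like(t)
    for k, tk in enumerate(sample_times):
        # Basis polynomial: 1 at sample_times[k], 0 at every other sample time.
        basis = np.ones_like(t)
        for j, tj in enumerate(sample_times):
            if j != k:
                basis *= (t - tj) / (tk - tj)
        result += sample_values[k] * basis
    return result

# Nonuniformly spaced samples of f(t) = t**2 are recovered exactly,
# since a degree-2 polynomial is determined by any 3 distinct points.
times = np.array([0.0, 0.7, 2.0])
vals = times**2
val = lagrange_interpolate([1.0], times, vals)
print(val)
```

The classical uniform-sampling reconstruction formula can be obtained as a limiting case of such interpolants, which is what connects the two theories.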

In mathematical analysis and applications, **multidimensional transforms** are used to analyze the frequency content of signals in a domain of two or more dimensions.
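For a two-dimensional signal such as an image, the frequency analysis is performed independently along each dimension. As an illustrative sketch, a small "image" containing a single horizontal spatial frequency concentrates its 2-D FFT energy at the corresponding frequency bin:

```python
import numpy as np

# One row with 4 cycles across 32 columns, tiled so the image is
# constant along the vertical direction.
row = np.cos(2 * np.pi * 4 * np.arange(32) / 32)
img = np.tile(row, (32, 1))

F = np.fft.fft2(img)
mag = np.abs(F)

# The dominant bin: zero vertical frequency, 4 cycles horizontally.
peak = np.unravel_index(np.argmax(mag), mag.shape)
print(peak)
```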

The spectrum of a chirp pulse describes its characteristics in terms of its frequency components. This frequency-domain representation is an alternative to the more familiar time-domain waveform, and the two versions are mathematically related by the Fourier transform.

The spectrum is of particular interest when pulses are subject to signal processing. For example, when a chirp pulse is compressed by its matched filter, the resulting waveform contains not only a main narrow pulse but also a variety of unwanted artifacts, many of which are directly attributable to features in the chirp's spectral characteristics.

The simplest way to derive the spectrum of a chirp, now that computers are widely available, is to sample the time-domain waveform at a frequency well above the Nyquist limit and apply an FFT algorithm to obtain the desired result. As this approach was not an option for the early designers, they resorted to analytic analysis where possible, or otherwise to graphical or approximation methods. These early methods remain helpful, however, as they give additional insight into the behavior and properties of chirps.
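The sample-then-FFT approach described above can be sketched in a few lines of NumPy. The chirp parameters here are arbitrary illustration values; the check at the end confirms that the spectral energy is concentrated in the swept frequency band:

```python
import numpy as np

# Linear chirp sweeping f0 -> f1 over T seconds, sampled well above Nyquist.
f0, f1, T = 10.0, 50.0, 1.0
fs = 1000.0                      # sample rate far above 2 * f1
t = np.arange(0, T, 1 / fs)
# Phase whose derivative gives instantaneous frequency f0 + (f1 - f0) * t / T.
phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2)
chirp = np.cos(phase)

# The spectrum, via FFT of the (real) sampled waveform.
spectrum = np.abs(np.fft.rfft(chirp))
freqs = np.fft.rfftfreq(len(chirp), 1 / fs)

# Most of the spectral magnitude falls inside the swept band, 10-50 Hz.
band = (freqs >= f0) & (freqs <= f1)
fraction = spectrum[band].sum() / spectrum.sum()
print(fraction)
```

The ripple and out-of-band leakage visible in such a computed spectrum are exactly the "features" responsible for the compression artifacts mentioned above.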

- Nemirovsky, Jonathan; Shimron, Efrat (2015). "Utilizing Bochners Theorem for Constrained Evaluation of Missing Fourier Data". arXiv:1506.03300 [physics.med-ph].
- Shannon, Claude E. (January 1949). "Communication in the presence of noise". *Proceedings of the Institute of Radio Engineers*. **37** (1): 10–21. doi:10.1109/jrproc.1949.232969. Reprinted as a classic paper in *Proc. IEEE*, Vol. 86, No. 2 (Feb 1998).
- Marvasti, F., ed. (2000). *Nonuniform Sampling, Theory and Practice*. New York: Kluwer Academic/Plenum Publishers.
- Landau, H. J. (1967). "Necessary density conditions for sampling and interpolation of certain entire functions". *Acta Math*. **117** (1): 37–52. doi:10.1007/BF02395039.
- See, e.g., Feng, P. (1997). *Universal minimum-rate sampling and spectrum-blind reconstruction for multiband signals*. Ph.D. dissertation, University of Illinois at Urbana-Champaign.
- Mishali, Moshe; Eldar, Yonina C. (March 2009). "Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals". *IEEE Trans. Signal Process*. **57** (3). CiteSeerX 10.1.1.154.4255.
- Kipnis, Alon; Goldsmith, Andrea J.; Eldar, Yonina C.; Weissman, Tsachy (January 2016). "Distortion rate function of sub-Nyquist sampled Gaussian sources". *IEEE Transactions on Information Theory*. **62**: 401–429. arXiv:1405.5329. doi:10.1109/tit.2015.2485271.
- Kipnis, Alon; Eldar, Yonina; Goldsmith, Andrea (26 April 2018). "Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits". *IEEE Signal Processing Magazine*. **35** (3): 16–39. arXiv:1801.06718. Bibcode:2018ISPM...35...16K. doi:10.1109/MSP.2017.2774249.
- Nyquist, Harry (April 1928). "Certain topics in telegraph transmission theory". *Trans. AIEE*. **47** (2): 617–644. Bibcode:1928TAIEE..47..617N. doi:10.1109/t-aiee.1928.5055024. Reprinted as a classic paper in *Proc. IEEE*, Vol. 90, No. 2 (Feb 2002).
- Küpfmüller, Karl (1928). "Über die Dynamik der selbsttätigen Verstärkungsregler" ["On the dynamics of automatic gain controllers"]. *Elektrische Nachrichtentechnik* (in German). **5** (11): 459–467. (English translation 2005).
- Kotelnikov, V. A. (1933). "On the carrying capacity of the ether and wire in telecommunications". *Material for the First All-Union Conference on Questions of Communication, Izd. Red. Upr. Svyazi RKKA* (in Russian). (English translation, PDF).
- Whittaker, E. T. (1915). "On the Functions Which are Represented by the Expansions of the Interpolation Theory". *Proc. Royal Soc. Edinburgh*. **35**: 181–194. doi:10.1017/s0370164600017806. ("Theorie der Kardinalfunktionen").
- Whittaker, J. M. (1935). *Interpolatory Function Theory*. Cambridge, England: Cambridge Univ. Press.
- Shannon, Claude E. (July 1948). "A Mathematical Theory of Communication". *Bell System Technical Journal*. **27** (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. hdl:11858/00-001M-0000-002C-4317-B.
- Shannon, Claude E. (October 1948). "A Mathematical Theory of Communication". *Bell System Technical Journal*. **27** (4): 623–666. doi:10.1002/j.1538-7305.1948.tb00917.x. hdl:11858/00-001M-0000-002C-4314-2.
- Jerri, Abdul (November 1977). "The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review". *Proceedings of the IEEE*. **65** (11): 1565–1596. doi:10.1109/proc.1977.10771. See also the correction: Jerri, Abdul (April 1979). *Proceedings of the IEEE*. **67** (4): 695. doi:10.1109/proc.1979.11307.
- Lüke, Hans Dieter (April 1999). "The Origins of the Sampling Theorem" (PDF). *IEEE Communications Magazine*. **37** (4): 106–108. CiteSeerX 10.1.1.163.2887. doi:10.1109/35.755459.
- Meijering, Erik (March 2002). "A Chronology of Interpolation From Ancient Astronomy to Modern Signal and Image Processing" (PDF). *Proc. IEEE*. **90** (3): 319–342. doi:10.1109/5.993400.
- Members of the Technical Staff of Bell Telephone Laboratories (1959). *Transmission Systems for Communications*. AT&T. pp. 26–4 (Vol. 2).
- Guillemin, Ernst Adolph (1963). *Theory of Linear Physical Systems*. Wiley.
- Roberts, Richard A.; Barton, Ben F. (1965). *Theory of Signal Detectability: Composite Deferred Decision Theory*.
- Gray, Truman S. (1954). *Applied Electronics: A First Course in Electronics, Electron Tubes, and Associated Circuits*.
- Blackman, R. B.; Tukey, J. W. (1958). *The Measurement of Power Spectra: From the Point of View of Communications Engineering* (PDF). New York: Dover.
- Black, Harold S. (1953). *Modulation Theory*.

- Higgins, J. R.: *Five short stories about the cardinal series*, Bulletin of the AMS 12 (1985).
- Küpfmüller, Karl, "Utjämningsförlopp inom Telegraf- och Telefontekniken" ("Transients in telegraph and telephone engineering"), Teknisk Tidskrift, no. 9, pp. 153–160, and no. 10, pp. 178–182, 1931.
- Marks, R. J. (II): *Introduction to Shannon Sampling and Interpolation Theory*, Springer-Verlag, 1991.
- Marks, R. J. (II), ed.: *Advanced Topics in Shannon Sampling and Interpolation Theory*, Springer-Verlag, 1993.
- Marks, R. J. (II): *Handbook of Fourier Analysis and Its Applications*, Oxford University Press, 2009, Chapters 5–8.
- Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 13.11. Numerical Use of the Sampling Theorem", *Numerical Recipes: The Art of Scientific Computing* (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8.
- Unser, Michael: *Sampling-50 Years after Shannon*, Proc. IEEE, vol. 88, no. 4, pp. 569–587, April 2000.

Wikimedia Commons has media related to the Nyquist–Shannon theorem.

- Learning by Simulations Interactive simulation of the effects of inadequate sampling
- Interactive presentation of the sampling and reconstruction in a web-demo Institute of Telecommunications, University of Stuttgart
- Undersampling and an application of it
- Sampling Theory For Digital Audio
- Journal devoted to Sampling Theory
- Sampling Theorem with Constant Amplitude Variable Width Pulse

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
