In signal processing and related disciplines, **aliasing** is an effect that causes different signals to become indistinguishable (or *aliases* of one another) when sampled. It also often refers to the distortion or artifact that results when a signal reconstructed from samples is different from the original continuous signal.


Aliasing can occur in signals sampled in time, for instance digital audio, and is referred to as **temporal aliasing**. It can also occur in spatially sampled signals (e.g. moiré patterns in digital images); this type of aliasing is called **spatial aliasing**.

Aliasing is generally avoided by applying low-pass filters or anti-aliasing filters (AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitable reconstruction filtering should then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. For spatial anti-aliasing, the types of anti-aliasing include full-scene anti-aliasing (FSAA), multisample anti-aliasing, and supersampling.

When a digital image is viewed, a reconstruction is performed by a display or printer device, and by the eyes and the brain. If the image data is processed in some way during sampling or reconstruction, the reconstructed image will differ from the original image, and an alias is seen.

An example of spatial aliasing is the moiré pattern observed in a poorly pixelized image of a brick wall. Spatial anti-aliasing techniques avoid such poor pixelizations. Aliasing can be caused either by the sampling stage or the reconstruction stage; these may be distinguished by calling sampling aliasing *prealiasing* and reconstruction aliasing *postaliasing.*^{ [1] }

Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to humans. If a piece of music is sampled at 32000 samples per second (Hz), any frequency components at or above 16000 Hz (the Nyquist frequency for this sampling rate) will cause aliasing when the music is reproduced by a digital-to-analog converter (DAC). To prevent this, an anti-aliasing filter is used to remove components above the Nyquist frequency prior to sampling.
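The fate of an unfiltered high-frequency component can be predicted numerically. The sketch below (function name and frequencies chosen for illustration) folds a tone's frequency into the baseband of a 32000 Hz sampler:

```python
def alias_frequency(f, fs):
    """Apparent frequency of a real-valued tone of frequency f (Hz)
    after sampling at rate fs (Hz): the smallest |f + N*fs| over integer N."""
    f = abs(f) % fs        # reduce to one period of the sampled spectrum
    return min(f, fs - f)  # fold about the Nyquist frequency fs/2

# Without an anti-aliasing filter, a 17 kHz component sampled at
# 32 kHz reappears as a 15 kHz alias on playback:
print(alias_frequency(17000, 32000))  # 15000
```

A component safely below the Nyquist frequency, e.g. 1000 Hz, is returned unchanged.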

In video or cinematography, temporal aliasing results from the limited frame rate, and causes the wagon-wheel effect, whereby a spoked wheel appears to rotate too slowly or even backwards. Aliasing has changed its apparent frequency of rotation. A reversal of direction can be described as a negative frequency. Temporal aliasing frequencies in video and cinematography are determined by the frame rate of the camera, but the relative intensity of the aliased frequencies is determined by the shutter timing (exposure time) or the use of a temporal aliasing reduction filter during filming.^{[2]}

Like the video camera, most sampling schemes are periodic; that is, they have a characteristic sampling frequency in time or in space. Digital cameras provide a certain number of samples (pixels) per degree or per radian, or samples per mm in the focal plane of the camera. Audio signals are sampled (digitized) with an analog-to-digital converter, which produces a constant number of samples per second. Some of the most dramatic and subtle examples of aliasing occur when the signal being sampled also has periodic content.

Actual signals have a finite duration and their frequency content, as defined by the Fourier transform, has no upper bound. Some amount of aliasing always occurs when such functions are sampled. Functions whose frequency content is bounded (*bandlimited*) have an infinite duration in the time domain. If sampled at a high enough rate, determined by the *bandwidth*, the original function can, in theory, be perfectly reconstructed from the infinite set of samples.

Sometimes aliasing is used intentionally on signals with no low-frequency content, called *bandpass* signals. Undersampling, which creates low-frequency aliases, can produce the same result, with less effort, as frequency-shifting the signal to lower frequencies before sampling at the lower rate. Some digital channelizers^{ [3] } exploit aliasing in this way for computational efficiency. See Sampling (signal processing), Nyquist rate (relative to sampling), and Filter bank.
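A numeric sketch of this idea (all frequencies assumed for illustration): undersampling a bandpass signal centred well above the Nyquist frequency relocates it to a low-frequency alias with no explicit mixing stage.

```python
fs = 1_000_000        # deliberately low sample rate, Hz
f_center = 2_100_000  # centre frequency of a bandpass signal, Hz

# The band lands at the smallest |f_center + N*fs| over integer N:
f_alias = min(abs(f_center + N * fs) for N in range(-5, 6))
print(f_alias)  # 100000
```

Here the 2.1 MHz band appears at 100 kHz after sampling, which is the same result a down-conversion stage followed by lower-rate sampling would produce.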

Sinusoids are an important type of periodic function, because realistic signals are often modeled as the summation of many sinusoids of different frequencies and different amplitudes (for example, with a Fourier series or transform). Understanding what aliasing does to the individual sinusoids is useful in understanding what happens to their sum.

When sampling a function at frequency *f*_{s} (intervals 1/*f*_{s}), the following functions yield identical sets of samples: {sin(2π(*f* + *N f*_{s})*t* + φ), *N* = 0, ±1, ±2, ±3, ...}. A frequency spectrum of the samples produces equally strong responses at all those frequencies. Without collateral information, the frequency of the original function is ambiguous. So the functions and their frequencies are said to be *aliases* of each other. Noting the trigonometric identity:

sin(2π(−*f*)*t* + φ) = sin(2π*f t* + π − φ),

we can write all the alias frequencies as non-negative values: *f*_{N}(*f*) = |*f* + *N f*_{s}|.
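The alias relation can be checked by direct enumeration. A small sketch, with the sample rate and tone frequency chosen arbitrarily:

```python
fs = 1000  # sample rate, Hz (assumed for illustration)
f = 100    # original tone frequency, Hz

# f_N(f) = |f + N*fs| for a few integers N:
aliases = sorted(abs(f + N * fs) for N in range(-3, 4))
print(aliases)  # [100, 900, 1100, 1900, 2100, 2900, 3100]
```

Sinusoids at any of these frequencies produce the same set of samples at 1000 samples/s.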

For example, consider a set of samples taken with parameter *f*_{s} = 1, and two different sinusoids that could have produced them. Nine cycles of the red sinusoid and one cycle of the blue sinusoid span an interval of 10 samples. The corresponding frequencies, in *cycles per sample*, are *f*_{red} = 0.9*f*_{s} and *f*_{blue} = 0.1*f*_{s}. So the *N* = −1 alias of *f*_{red} is *f*_{blue} (and vice versa).
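This example can be reproduced numerically. The sketch below uses cosines rather than sines so that no phase adjustment is needed (a sine at 0.9 *f*_{s} aliases to a sign-flipped sine at 0.1 *f*_{s}):

```python
import math

fs = 1.0  # sample rate, as in the example
n_range = range(10)

red = [math.cos(2 * math.pi * 0.9 * n / fs) for n in n_range]   # f = 0.9 fs
blue = [math.cos(2 * math.pi * 0.1 * n / fs) for n in n_range]  # f = 0.1 fs

# The N = -1 alias of 0.9 fs is |0.9 - 1| fs = 0.1 fs,
# so the two sample sets coincide:
assert all(math.isclose(r, b, abs_tol=1e-9) for r, b in zip(red, blue))
```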

Aliasing matters when one attempts to reconstruct the original waveform from its samples. The most common reconstruction technique produces the smallest of the *f*_{N}(*f*) frequencies. So it is usually important that *f*_{0}(*f*) be the unique minimum. A necessary and sufficient condition for that is *f*_{s}/2 > |*f*|, where *f*_{s}/2 is commonly called the Nyquist frequency of a system that samples at rate *f*_{s}. In our example, the Nyquist condition is satisfied if the original signal is the blue sinusoid (*f* = *f*_{blue}). But if *f* = *f*_{red} = 0.9*f*_{s}, the usual reconstruction method will produce the blue sinusoid instead of the red one.

In the example above, *f*_{red} and *f*_{blue} are symmetrical around the frequency *f*_{s}/2. And in general, as *f* increases from 0 to *f*_{s}/2, *f*_{−1}(*f*) decreases from *f*_{s} to *f*_{s}/2. Similarly, as *f* increases from *f*_{s}/2 to *f*_{s}, *f*_{−1}(*f*) continues decreasing from *f*_{s}/2 to 0.

A graph of amplitude vs frequency for a single sinusoid at frequency 0.6 *f*_{s} and some of its aliases at 0.4 *f*_{s}, 1.4 *f*_{s}, and 1.6 *f*_{s} would look like the 4 black dots in the first figure below. The red lines depict the paths (loci) of the 4 dots if we were to adjust the frequency and amplitude of the sinusoid along the solid red segment (between *f*_{s}/2 and *f*_{s}). No matter what function we choose to change the amplitude vs frequency, the graph will exhibit symmetry between 0 and *f*_{s}. This symmetry is commonly referred to as **folding**, and another name for *f*_{s}/2 (the Nyquist frequency) is **folding frequency**. Folding is often observed in practice when viewing the frequency spectrum of real-valued samples, such as the second figure below.
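The folding symmetry in the spectrum of real-valued samples can be verified with a direct DFT. This is a brute-force sketch (an FFT routine would normally be used); the input is arbitrary real data:

```python
import cmath
import math
import random

N = 32
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(N)]  # any real-valued samples

# Direct DFT: O(N^2), fine for a demonstration
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# Folding: for real input, |X[k]| is symmetric about k = N/2,
# i.e. about the Nyquist frequency
assert all(math.isclose(abs(X[k]), abs(X[N - k]), rel_tol=1e-9, abs_tol=1e-9)
           for k in range(1, N))
```

This is why spectrum analyzers of real signals conventionally display only the range from 0 to *f*_{s}/2: the upper half is a mirror image.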

Complex sinusoids are waveforms whose samples are complex numbers, and the concept of negative frequency is necessary to distinguish them. In that case, the frequencies of the aliases are given by just: *f*_{N}(*f*) = *f* + *N f*_{s}. Therefore, as *f* increases from *f*_{s}/2 to *f*_{s}, *f*_{−1}(*f*) goes from −*f*_{s}/2 *up* to 0. Consequently, complex sinusoids do not exhibit *folding*. Complex samples of real-valued sinusoids have zero-valued imaginary parts and do exhibit folding.
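A short sketch of the difference (sample rate normalized to 1): the samples of a complex exponential at +0.9 *f*_{s} match those of one at −0.1 *f*_{s}, not one at +0.1 *f*_{s}.

```python
import cmath

n_range = range(10)

hi = [cmath.exp(2j * cmath.pi * 0.9 * n) for n in n_range]    # f = +0.9 fs
neg = [cmath.exp(2j * cmath.pi * -0.1 * n) for n in n_range]  # f = -0.1 fs
pos = [cmath.exp(2j * cmath.pi * 0.1 * n) for n in n_range]   # f = +0.1 fs

# The N = -1 alias of +0.9 fs is -0.1 fs; there is no fold to +0.1 fs:
assert all(abs(a - b) < 1e-9 for a, b in zip(hi, neg))
assert not all(abs(a - b) < 1e-9 for a, b in zip(hi, pos))
```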

When the condition *f*_{s}/2 > *f* is met for the highest frequency component of the original signal, then it is met for all the frequency components, a condition called the Nyquist criterion. That is typically approximated by filtering the original signal to attenuate high frequency components before it is sampled. These attenuated high frequency components still generate low-frequency aliases, but typically at low enough amplitudes that they do not cause problems. A filter chosen in anticipation of a certain sample frequency is called an anti-aliasing filter.

The filtered signal can subsequently be reconstructed, by interpolation algorithms, without significant additional distortion. Most sampled signals are not simply stored and reconstructed. But the fidelity of a theoretical reconstruction (via the Whittaker–Shannon interpolation formula) is a customary measure of the effectiveness of sampling.

Historically the term *aliasing* evolved from radio engineering because of the action of superheterodyne receivers. When the receiver shifts multiple signals down to lower frequencies, from RF to IF by heterodyning, an unwanted signal, from an RF frequency equally far from the local oscillator (LO) frequency as the desired signal, but on the wrong side of the LO, can end up at the same IF frequency as the wanted one. If it is strong enough it can interfere with reception of the desired signal. This unwanted signal is known as an *image* or *alias* of the desired signal.

Aliasing occurs whenever the use of discrete elements to capture or produce a continuous signal causes frequency ambiguity.

Spatial aliasing, particularly of angular frequency, can occur when reproducing a light field^{[4]} or sound field with discrete elements, as in 3D displays or wave field synthesis of sound.

This aliasing is visible in images such as posters with lenticular printing: if they have low angular resolution, then as one moves past them, say from left to right, the 2D image does not initially change (so it appears to move left); then, as one crosses into the next angular image, the image suddenly changes (so it jumps right). The frequency and amplitude of this side-to-side movement correspond to the angular resolution of the image (and, for frequency, the speed of the viewer's lateral movement); this is the angular aliasing of the 4D light field.

The lack of parallax on viewer movement in 2D images and in 3-D film produced by stereoscopic glasses (in 3D films the effect is called "yawing", as the image appears to rotate on its axis) can similarly be seen as loss of angular resolution, all angular frequencies being aliased to 0 (constant).

The qualitative effects of aliasing can be heard in the following audio demonstration. Six sawtooth waves are played in succession: the first two have a fundamental frequency of 440 Hz (A4), the next two 880 Hz (A5), and the final two 1760 Hz (A6). The sawtooths alternate between bandlimited (non-aliased) and aliased versions, and the sampling rate is 22.05 kHz. The bandlimited sawtooths are synthesized from the sawtooth waveform's Fourier series such that no harmonics above the Nyquist frequency are present.
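A bandlimited sawtooth of this kind can be sketched by truncating the sawtooth's Fourier series at the Nyquist frequency (function name and truncation rule are illustrative, not the demonstration's actual code):

```python
import math

def bandlimited_sawtooth(f0, fs, t):
    """Sawtooth with fundamental f0 (Hz), built from its Fourier series
    with all harmonics at or above the Nyquist frequency fs/2 removed."""
    n_max = math.ceil(fs / (2 * f0)) - 1  # highest harmonic kept
    return (2 / math.pi) * sum(
        (-1) ** (k + 1) * math.sin(2 * math.pi * k * f0 * t) / k
        for k in range(1, n_max + 1))

# At fs = 22050 Hz, an A6 (1760 Hz) sawtooth keeps only 6 harmonics,
# while an A4 (440 Hz) sawtooth keeps 25:
assert math.ceil(22050 / (2 * 1760)) - 1 == 6
assert math.ceil(22050 / (2 * 440)) - 1 == 25
```

The shrinking harmonic count at higher fundamentals is why the bandlimited A6 sawtooth sounds duller but clean, while a naively generated one aliases audibly.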

The aliasing distortion in the lower frequencies is increasingly obvious with higher fundamental frequencies, and while the bandlimited sawtooth is still clear at 1760 Hz, the aliased sawtooth is degraded and harsh with a buzzing audible at frequencies lower than the fundamental.

A form of spatial aliasing can also occur in antenna arrays or microphone arrays used to estimate the direction of arrival of a wave signal, as in geophysical exploration by seismic waves. Waves must be sampled more densely than two points per wavelength, or the wave arrival direction becomes ambiguous.^{ [5] }
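The two-points-per-wavelength requirement can be illustrated with a two-element array (wave speed and frequency assumed for illustration): once the element spacing reaches a full wavelength, distinct arrival directions produce identical inter-element phase shifts.

```python
import math

c = 343.0    # wave speed, m/s (assumed: sound in air)
f = 1000.0   # wave frequency, Hz
lam = c / f  # wavelength, m

d = lam      # spacing of one wavelength: only one sample point per wavelength

def phase_shift(theta_deg):
    """Phase difference between two elements spaced d apart,
    for a plane wave arriving at angle theta from broadside."""
    return (2 * math.pi * d * math.sin(math.radians(theta_deg)) / lam) % (2 * math.pi)

# Broadside (0 deg) and endfire (90 deg) arrivals become indistinguishable:
assert math.isclose(phase_shift(0.0), phase_shift(90.0), abs_tol=1e-9)
```

With d < λ/2 the phase shift is a one-to-one function of arrival angle and the ambiguity disappears.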


- ↑ Mitchell, Don P.; Netravali, Arun N. (August 1988). *Reconstruction filters in computer-graphics* (PDF). ACM SIGGRAPH International Conference on Computer Graphics and Interactive Techniques. **22**. pp. 221–228. doi:10.1145/54852.378514. ISBN 0-89791-275-6.
- ↑ Tessive, LLC (2010). "Time Filter Technical Explanation".
- ↑ harris, frederic j. (Aug 2006). *Multirate Signal Processing for Communication Systems*. Upper Saddle River, NJ: Prentice Hall PTR. ISBN 978-0-13-146511-4.
- ↑ The (New) Stanford Light Field Archive.
- ↑ Flanagan, James L. (1985). "Beamwidth and useable bandwidth of delay-steered microphone arrays". *AT&T Tech. J.* **64**: 983–995.

- Pharr, Matt; Humphreys, Greg (28 June 2010). *Physically Based Rendering: From Theory to Implementation*. Morgan Kaufmann. ISBN 978-0-12-375079-2. Chapter 7 (*Sampling and reconstruction*). Retrieved 3 March 2013.

- Aliasing by a sampling oscilloscope on YouTube by Tektronix Application Engineer
- Anti-Aliasing Filter Primer by La Vida Leica, discusses its purpose and effect on recorded images
- Interactive examples demonstrating the aliasing effect

**Bandwidth** is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in hertz, and depending on context, may specifically refer to *passband bandwidth* or *baseband bandwidth*. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth applies to a low-pass filter or baseband signal; the bandwidth is equal to its upper cutoff frequency.

In the field of digital signal processing, the **sampling theorem** is a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of *samples* to capture all the information from a continuous-time signal of finite bandwidth.

In electronics, an **analog-to-digital converter** is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement such as an electronic device that converts an input analog voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities.

In telecommunication, **intersymbol interference** (**ISI**) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon as the previous symbols have similar effect as noise, thus making the communication less reliable. The spreading of the pulse beyond its allotted time interval causes it to interfere with neighboring pulses. ISI is usually caused by multipath propagation or the inherent linear or non-linear frequency response of a communication channel causing successive symbols to "blur" together.

In signal processing, the **Nyquist rate**, named after Harry Nyquist, is twice the bandwidth of a bandlimited function or a bandlimited channel. This term means two different things under two different circumstances:

- as a lower bound for the sample rate for alias-free signal sampling and
- as an upper bound for the symbol rate across a bandwidth-limited baseband channel such as a telegraph line or passband channel such as a limited radio frequency band or a frequency division multiplex channel.

The **sawtooth wave** is a kind of non-sinusoidal waveform. It is so named based on its resemblance to the teeth of a plain-toothed saw with a zero rake angle.

**Pulse width modulation** (**PWM**), or **pulse-duration modulation** (**PDM**), is a method of reducing the average power delivered by an electrical signal by effectively chopping it up into discrete parts. The average value of voltage fed to the load is controlled by turning the switch between supply and load on and off at a fast rate; the longer the switch is on compared to the off periods, the higher the total power supplied to the load. Along with maximum power point tracking (MPPT), it is one of the primary methods of reducing the output of solar panels to that which can be utilized by a battery. PWM is particularly suited for running inertial loads such as motors, which are not as easily affected by this discrete switching because their inertia makes them respond slowly. The PWM switching frequency has to be high enough not to affect the load, which is to say that the resultant waveform perceived by the load must be as smooth as possible.

The **Whittaker–Shannon interpolation formula** or **sinc interpolation** is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called **Shannon's interpolation formula** and **Whittaker's interpolation formula**. E. T. Whittaker, who published it in 1915, called it the **Cardinal series**.
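The formula can be sketched directly (a brute-force O(N) evaluation per output point; the function name is illustrative):

```python
import math

def sinc_interpolate(samples, fs, t):
    """Whittaker-Shannon reconstruction of x(t) from uniformly spaced
    samples x[n] = x(n/fs), by summing shifted sinc functions."""
    T = 1.0 / fs
    total = 0.0
    for n, xn in enumerate(samples):
        u = (t - n * T) / T
        total += xn * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

# The reconstruction passes exactly through the original samples:
samples = [0.0, 1.0, 0.5, -0.3]
fs = 8000.0
assert all(
    math.isclose(sinc_interpolate(samples, fs, n / fs), s, abs_tol=1e-9)
    for n, s in enumerate(samples))
```

Between sample instants the sum produces the unique bandlimited function consistent with the samples, which is why this reconstruction serves as the theoretical benchmark mentioned above.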

In digital signal processing, **spatial anti-aliasing** is a technique for minimizing the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other applications.

Sound can be recorded and stored and played using either digital or analog techniques. Both techniques introduce errors and distortions in the sound, and these methods can be systematically compared. Musicians and listeners have argued over the superiority of digital versus analog sound recordings. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing and quantization noise. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion.

The **Nyquist frequency**, named after electronic engineer Harry Nyquist, is half of the sampling rate of a discrete signal processing system. It is sometimes known as the folding frequency of a sampling system. An example of folding is depicted in Figure 1, where f_{s} is the sampling rate and 0.5 f_{s} is the corresponding Nyquist frequency. The black dot plotted at 0.6 f_{s} represents the amplitude and frequency of a sinusoidal function whose frequency is 60% of the sample-rate (f_{s}). The other three dots indicate the frequencies and amplitudes of three other sinusoids that would produce the same set of samples as the actual sinusoid that was sampled. The symmetry about 0.5 f_{s} is referred to as *folding*.

In signal processing, **sampling** is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of samples.

In signal processing, **undersampling** or **bandpass sampling** is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate, but is still able to reconstruct the signal.

**Bandlimiting** is the limiting of a signal's frequency domain representation or spectral density to zero above a certain finite frequency.

An **anti-aliasing filter** (**AAF**) is a filter used before a signal sampler to restrict the bandwidth of a signal to approximately or completely satisfy the Nyquist–Shannon sampling theorem over the band of interest. Since the theorem states that unambiguous reconstruction of the signal from its samples is possible when the power of frequencies above the Nyquist frequency is zero, a real anti-aliasing filter trades off between bandwidth and aliasing. A realizable anti-aliasing filter will typically either permit some aliasing to occur or else attenuate some in-band frequencies close to the Nyquist limit. For this reason, many practical systems sample higher than would be theoretically required by a perfect AAF in order to ensure that all frequencies of interest can be reconstructed, a practice called oversampling.
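The filter-then-resample pattern can be sketched for a rate halving. The 3-tap kernel here is a deliberately crude stand-in for a real anti-aliasing filter, chosen only because it nulls the old Nyquist frequency exactly:

```python
def decimate_by_2(x):
    """Halve the sample rate of x after a crude anti-aliasing step.
    The 3-tap kernel [0.25, 0.5, 0.25] is a minimal low-pass sketch;
    practical systems use far sharper filters."""
    padded = [x[0]] + list(x) + [x[-1]]  # repeat edge samples
    filtered = [0.25 * padded[i - 1] + 0.5 * padded[i] + 0.25 * padded[i + 1]
                for i in range(1, len(padded) - 1)]
    return filtered[::2]                 # keep every other sample

# A tone at the old Nyquist frequency (+1, -1, +1, ...) would alias
# badly if samples were simply discarded; the filter removes it first:
print(decimate_by_2([1.0, -1.0] * 4))  # [0.5, 0.0, 0.0, 0.0]
```

A constant (DC) input passes through unchanged, confirming the kernel's unity gain at low frequency.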

In signal processing, **oversampling** is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at the Nyquist rate or above it. The Nyquist rate is defined as twice the bandwidth of the signal. Oversampling is capable of improving resolution and signal-to-noise ratio, and can be helpful in avoiding aliasing and phase distortion by relaxing anti-aliasing filter performance requirements.

In a mixed-signal system, a **reconstruction filter**, sometimes called an **anti-imaging filter**, is used to construct a smooth analog signal from a digital input, as in the case of a digital to analog converter (DAC) or other sampled data output device.

The **optical transfer function** (**OTF**) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are handled by the system. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the **modulation transfer function** (**MTF**), neglects phase effects, but is equivalent to the OTF in many situations.

A **Bitcrusher** is a lo-fi digital audio effect, which produces a distortion by the reduction of the resolution or bandwidth of digital audio data. The resulting quantization noise may produce a “warmer” sound impression, or a harsh one, depending on the amount of reduction.

**Normalized frequency** is a unit of measurement of frequency equivalent to *cycles/sample*. In digital signal processing (DSP), the continuous time variable, **t**, with units of *seconds*, is replaced by the discrete integer variable, **n**, with units of *samples*. More precisely, the time variable, in *seconds*, has been normalized (divided) by the sampling interval, **T** (*seconds/sample*), which causes time to have convenient integer values at the moments of sampling. This practice is analogous to the concept of natural units, meaning that the natural unit of time in a DSP system is *samples*.
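A small worked example of the unit conversion (sample rate and tone frequency assumed):

```python
fs = 44100  # sample rate, samples per second
f = 441.0   # tone frequency, Hz (cycles per second)

f_norm = f / fs               # normalized frequency, cycles/sample
samples_per_cycle = 1 / f_norm

print(f_norm)             # 0.01 cycles/sample
print(samples_per_cycle)  # 100.0 samples per cycle
```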

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
