# Signal-to-noise ratio


Signal-to-noise ratio (abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to the noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise.

In signal processing, noise is a general term for unwanted modifications that a signal may suffer during capture, storage, transmission, processing, or conversion.



While SNR is commonly quoted for electrical signals, it can be applied to any form of signal, for example isotope levels in an ice core, biochemical signaling between cells, or financial trading signals. Signal-to-noise ratio is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as "noise" that interferes with the "signal" of appropriate discussion. [1]




The signal-to-noise ratio, the bandwidth, and the channel capacity of a communication channel are connected by the Shannon–Hartley theorem.




## Definition

Signal-to-noise ratio is defined as the ratio of the power of a signal (meaningful information) to the power of background noise (unwanted signal):


${\displaystyle \mathrm {SNR} ={\frac {P_{\mathrm {signal} }}{P_{\mathrm {noise} }}},}$

where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth.
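As a minimal sketch, the definition can be evaluated directly; the power values below are hypothetical, not taken from any real measurement:

```python
def snr(p_signal: float, p_noise: float) -> float:
    """Linear (dimensionless) signal-to-noise ratio from average powers."""
    if p_noise <= 0:
        raise ValueError("noise power must be positive")
    return p_signal / p_noise

# Hypothetical powers in watts, measured at the same point and in the same bandwidth:
ratio = snr(p_signal=0.5, p_noise=0.005)
print(ratio > 1)  # True: more signal than noise (i.e. above 0 dB)
```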

Depending on whether the signal is a constant (s) or a random variable (S), the signal-to-noise ratio for random noise N with expected value of zero becomes: [2]


${\displaystyle \mathrm {SNR} ={\frac {s^{2}}{\sigma _{\mathrm {N} }^{2}}}}$
or
${\displaystyle \mathrm {SNR} ={\frac {E[S^{2}]}{\sigma _{\mathrm {N} }^{2}}}}$
where E refers to the expected value, i.e. in this case the mean of ${\displaystyle S^{2}.}$

If the signal and the noise are measured across the same impedance, the SNR can be obtained by calculating the square of the amplitude ratio:

${\displaystyle \mathrm {SNR} ={\frac {P_{\mathrm {signal} }}{P_{\mathrm {noise} }}}=\left({\frac {A_{\mathrm {signal} }}{A_{\mathrm {noise} }}}\right)^{2},}$

where A is root mean square (RMS) amplitude (for example, RMS voltage).
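A minimal sketch of this amplitude-based calculation, using hypothetical RMS voltages measured across the same impedance:

```python
import math

def rms(samples):
    """Root-mean-square value of a sequence of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def snr_from_rms(a_signal: float, a_noise: float) -> float:
    """SNR as the square of the RMS amplitude ratio."""
    return (a_signal / a_noise) ** 2

# A signal of 1.0 V RMS against 0.1 V RMS of noise:
print(snr_from_rms(1.0, 0.1))  # 100.0
```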

### Decibels

Because many signals have a very wide dynamic range, signals are often expressed using the logarithmic decibel scale. Based upon the definition of decibel, signal and noise may be expressed in decibels (dB) as

${\displaystyle P_{\mathrm {signal,dB} }=10\log _{10}\left(P_{\mathrm {signal} }\right)}$

and

${\displaystyle P_{\mathrm {noise,dB} }=10\log _{10}\left(P_{\mathrm {noise} }\right).}$

In a similar manner, SNR may be expressed in decibels as

${\displaystyle \mathrm {SNR_{dB}} =10\log _{10}\left(\mathrm {SNR} \right).}$

Using the definition of SNR

${\displaystyle \mathrm {SNR_{dB}} =10\log _{10}\left({\frac {P_{\mathrm {signal} }}{P_{\mathrm {noise} }}}\right).}$

Using the quotient rule for logarithms

${\displaystyle 10\log _{10}\left({\frac {P_{\mathrm {signal} }}{P_{\mathrm {noise} }}}\right)=10\log _{10}\left(P_{\mathrm {signal} }\right)-10\log _{10}\left(P_{\mathrm {noise} }\right).}$

Substituting the definitions of SNR, signal, and noise in decibels into the above equation results in an important formula for calculating the signal to noise ratio in decibels, when the signal and noise are also in decibels:

${\displaystyle \mathrm {SNR_{dB}} ={P_{\mathrm {signal,dB} }-P_{\mathrm {noise,dB} }}.}$

In the above formula, P is measured in units of power, such as watts (W) or milliwatts (mW), and the signal-to-noise ratio is a pure number.

However, when the signal and noise are measured in volts (V) or amperes (A), which are measures of amplitude, [note 1] they must first be squared to obtain a quantity proportional to power, as shown below:

${\displaystyle \mathrm {SNR_{dB}} =10\log _{10}\left[\left({\frac {A_{\mathrm {signal} }}{A_{\mathrm {noise} }}}\right)^{2}\right]=20\log _{10}\left({\frac {A_{\mathrm {signal} }}{A_{\mathrm {noise} }}}\right)=\left({A_{\mathrm {signal,dB} }-A_{\mathrm {noise,dB} }}\right).}$
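The 10 log rule for powers and the 20 log rule for amplitudes can be sketched as follows; the input values are illustrative:

```python
import math

def snr_db_from_power(p_signal: float, p_noise: float) -> float:
    """10 log rule: powers are already proportional to amplitude squared."""
    return 10 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal: float, a_noise: float) -> float:
    """20 log rule: amplitudes must effectively be squared first."""
    return 20 * math.log10(a_signal / a_noise)

print(snr_db_from_power(100.0, 1.0))      # 20.0 dB
print(snr_db_from_amplitude(100.0, 1.0))  # 40.0 dB
```

The same 100:1 ratio yields twice as many decibels when interpreted as an amplitude ratio, which is exactly the factor of 2 that squaring contributes inside the logarithm.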

### Dynamic range

The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range measures the ratio between the strongest un-distorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 VRMS).

SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near) instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'.

### Difference from conventional power

In physics, the average power of an AC signal is defined as the average value of voltage times current; for resistive (non-reactive) circuits, where voltage and current are in phase, this is equivalent to the product of the rms voltage and current:

${\displaystyle \mathrm {P} =V_{\mathrm {rms} }I_{\mathrm {rms} }}$
${\displaystyle \mathrm {P} ={\frac {V_{\mathrm {rms} }^{2}}{R}}=I_{\mathrm {rms} }^{2}R}$

In signal processing and communication, however, one usually assumes that ${\displaystyle R=1\Omega }$, so the resistance factor is typically omitted when measuring the power or energy of a signal. This may cause some confusion among readers, but the resistance factor is not significant for typical operations performed in signal processing, or for computing power ratios. In most cases the power of a signal would be considered to be simply

${\displaystyle \mathrm {P} =V_{\mathrm {rms} }^{2}={\frac {A^{2}}{2}}}$

where A is the peak amplitude of the (sinusoidal) AC signal.

## Alternative definition

An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement: [4] [5]

${\displaystyle \mathrm {SNR} ={\frac {\mu }{\sigma }}}$

where ${\displaystyle \mu }$ is the signal mean or expected value and ${\displaystyle \sigma }$ is the standard deviation of the noise, or an estimate thereof. [note 2] Notice that such an alternative definition is only useful for variables that are always non-negative (such as photon counts and luminance). It is commonly used in image processing, [6] [7] [8] [9] where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above.
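A minimal sketch of this alternative definition, using a small hypothetical patch of pixel values; the standard deviation of the patch serves as the noise estimate:

```python
import statistics

def snr_cv(samples):
    """SNR as mean over standard deviation (reciprocal coefficient of variation).

    Only meaningful for quantities that are always non-negative,
    such as photon counts or luminance.
    """
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # population standard deviation
    return mu / sigma

# Hypothetical pixel values from a nominally uniform image neighborhood:
pixels = [100, 102, 98, 101, 99]
print(round(snr_cv(pixels), 2))  # 70.71
```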

Note that this definition is closely related to the sensitivity index (d'), assuming that the signal has two states and that the noise does not change between the two states.

The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to distinguish image features with complete certainty; an SNR less than 5 means less than 100% certainty in identifying image details. [5] [10]

Yet another alternative, very specific and distinct definition of SNR is employed to characterize sensitivity of imaging systems; see Signal-to-noise ratio (imaging).

Related measures are the "contrast ratio" and the "contrast-to-noise ratio".

## SNR for various modulation systems

### Amplitude modulation

Channel signal-to-noise ratio is given by

${\displaystyle \mathrm {(SNR)_{C,AM}} ={\frac {A_{C}^{2}(1+k_{a}^{2}P)}{2WN_{0}}}}$

where W is the bandwidth, ${\displaystyle k_{a}}$ is the modulation index, P is the power of the message signal, and ${\displaystyle N_{0}}$ is the noise power spectral density.

Output signal-to-noise ratio (of AM receiver) is given by

${\displaystyle \mathrm {(SNR)_{O,AM}} ={\frac {A_{c}^{2}k_{a}^{2}P}{2WN_{0}}}}$

### Frequency modulation

Channel signal-to-noise ratio is given by

${\displaystyle \mathrm {(SNR)_{C,FM}} ={\frac {A_{c}^{2}}{2WN_{0}}}}$

Output signal-to-noise ratio is given by

${\displaystyle \mathrm {(SNR)_{O,FM}} ={\frac {A_{c}^{2}k_{f}^{2}P}{2N_{0}W^{3}}}}$
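The AM and FM expressions above can be sketched as straightforward functions. The parameter names follow the formulas (carrier amplitude A_c, modulation constants k_a and k_f, message power P, bandwidth W, noise power spectral density N_0); the values in the usage example are hypothetical:

```python
def am_channel_snr(a_c, k_a, p_msg, w, n0):
    """Channel SNR for AM: A_c^2 (1 + k_a^2 P) / (2 W N_0)."""
    return a_c**2 * (1 + k_a**2 * p_msg) / (2 * w * n0)

def am_output_snr(a_c, k_a, p_msg, w, n0):
    """Output SNR of an AM receiver: A_c^2 k_a^2 P / (2 W N_0)."""
    return a_c**2 * k_a**2 * p_msg / (2 * w * n0)

def fm_channel_snr(a_c, w, n0):
    """Channel SNR for FM: A_c^2 / (2 W N_0)."""
    return a_c**2 / (2 * w * n0)

def fm_output_snr(a_c, k_f, p_msg, w, n0):
    """Output SNR for FM: A_c^2 k_f^2 P / (2 N_0 W^3)."""
    return a_c**2 * k_f**2 * p_msg / (2 * n0 * w**3)

# Hypothetical values: carrier amplitude 2 V, modulation index 0.5,
# unit message power, unit bandwidth, unit noise PSD.
print(am_channel_snr(2.0, 0.5, 1.0, 1.0, 1.0))  # 2.5
print(am_output_snr(2.0, 0.5, 1.0, 1.0, 1.0))   # 0.5
```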

## Improving SNR in practice

All real measurements are disturbed by noise. This includes electronic noise, but can also include external events that affect the measured phenomenon, such as wind, vibrations, the gravitational attraction of the moon, and variations of temperature and humidity, depending on what is measured and on the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Otherwise, when the characteristics of the noise are known and differ from those of the signal, it is possible to filter the noise or to process the signal.

For example, it is sometimes possible to use a lock-in amplifier to modulate and confine the signal within a very narrow bandwidth and then filter the detected signal to the narrow band where it resides, thereby eliminating most of the broadband noise.

When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurements. In this case the noise goes down as the square root of the number of averaged samples.
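This square-root-of-N behaviour can be checked with a small simulation; the signal value, noise level, and trial count below are arbitrary choices:

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 1.0   # the constant signal being measured
NOISE_STD = 0.5    # standard deviation of the additive random noise

def averaged_error_std(n_averages: int, trials: int = 2000) -> float:
    """Empirical standard deviation of the error after averaging n samples."""
    errors = []
    for _ in range(trials):
        samples = [TRUE_VALUE + random.gauss(0, NOISE_STD) for _ in range(n_averages)]
        errors.append(statistics.fmean(samples) - TRUE_VALUE)
    return statistics.pstdev(errors)

# Averaging 16 samples should reduce the noise std by about sqrt(16) = 4.
print(averaged_error_std(1), averaged_error_std(16))
```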

Additionally, internal noise of electronic systems can be reduced by low-noise amplifiers.

## Digital signals

When a measurement is digitized, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise").

This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither.

Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/N0, the ratio of energy per bit to noise power spectral density.

The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal.

### Fixed point

For n-bit integers with equal spacing between quantization levels (uniform quantization), the dynamic range (DR) is likewise determined by the number of bits.

Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio ${\displaystyle 2^{n}/1}$. The formula is then:

${\displaystyle \mathrm {DR_{dB}} =\mathrm {SNR_{dB}} =20\log _{10}(2^{n})\approx 6.02\cdot n}$

This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB.

Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level [11] and uniform distribution. In this case, the SNR is approximately

${\displaystyle \mathrm {SNR_{dB}} \approx 20\log _{10}(2^{n}{\sqrt {3/2}})\approx 6.02\cdot n+1.761}$
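A sketch of this full-scale sine-wave approximation, evaluated at a few common audio word lengths:

```python
import math

def fixed_point_snr_db(n_bits: int) -> float:
    """Ideal SNR in dB for a full-scale sine through an n-bit uniform quantizer.

    Equivalent to the familiar rule of thumb 6.02*n + 1.76 dB.
    """
    return 20 * math.log10(2**n_bits * math.sqrt(1.5))

for n in (8, 16, 24):
    print(f"{n}-bit: {fixed_point_snr_db(n):.2f} dB")
```

For 16-bit audio this gives about 98 dB, slightly above the 96 dB dynamic-range figure because of the 1.76 dB sine-wave term.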

### Floating point

Floating-point numbers provide a way to trade signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n − m bits in the mantissa and m bits in the exponent:

${\displaystyle \mathrm {DR_{dB}} =6.02\cdot 2^{m}}$
${\displaystyle \mathrm {SNR_{dB}} =6.02\cdot (n-m)}$

Note that the dynamic range is much larger than that of fixed-point, but at the cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal-quality disadvantage in systems whose dynamic range is less than 6.02·m dB. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms. [12]
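A sketch of these two formulas; the 32-bit word split as 8 exponent + 24 mantissa bits is a hypothetical example (chosen to resemble IEEE single precision, though the formulas here are the idealized ones from this section):

```python
def float_dynamic_range_db(m_exponent_bits: int) -> float:
    """DR_dB = 6.02 * 2^m for m exponent bits."""
    return 6.02 * 2**m_exponent_bits

def float_snr_db(n_total_bits: int, m_exponent_bits: int) -> float:
    """SNR_dB = 6.02 * (n - m): only the mantissa bits contribute to SNR."""
    return 6.02 * (n_total_bits - m_exponent_bits)

print(round(float_dynamic_range_db(8), 2))  # 1541.12 dB of dynamic range
print(round(float_snr_db(32, 8), 2))        # 144.48 dB of SNR
```

Compare with a 32-bit fixed-point word, whose 6.02 · 32 ≈ 193 dB serves as both the dynamic range and the SNR: the floating-point format gains enormous dynamic range by giving up some SNR.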

## Optical SNR

Optical signals have a carrier frequency (about 200 THz and higher) that is much greater than the modulation frequency, so the noise covers a bandwidth much wider than the signal itself. The influence of the noise on the received signal therefore depends mainly on how the noise is filtered. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth, most commonly a reference bandwidth of 0.1 nm; this bandwidth is independent of the modulation format, the frequency, and the receiver. For instance, an OSNR of 20 dB/0.1 nm could be given even though a 40 Gbit/s DPSK signal would not fit within that bandwidth. OSNR is measured with an optical spectrum analyzer.

## Types and abbreviations

Signal-to-noise ratio is commonly abbreviated SNR and less commonly S/N. PSNR stands for peak signal-to-noise ratio, GSNR for geometric signal-to-noise ratio, and SINR for signal-to-interference-plus-noise ratio.

## Notes

1. The connection between optical power and voltage in an imaging system is linear. This usually means that the SNR of the electrical signal is calculated by the 10 log rule. With an interferometric system, however, where interest lies in the signal from one arm only, the field of the electromagnetic wave is proportional to the voltage (assuming that the intensity in the second, reference arm is constant). Therefore the optical power of the measurement arm is directly proportional to the electrical power, and electrical signals from optical interferometry follow the 20 log rule. [3]
2. The exact methods may vary between fields. For example, if the signal data are known to be constant, then ${\displaystyle \sigma }$ can be calculated using the standard deviation of the signal. If the signal data are not constant, then ${\displaystyle \sigma }$ can be calculated from data where the signal is zero or relatively constant.
3. Often special filters are used to weight the noise: DIN-A, DIN-B, DIN-C, DIN-D, CCIR-601; for video, special filters such as comb filters may be used.
4. The maximum possible full-scale signal can be specified as peak-to-peak or as RMS. Audio uses RMS and video peak-to-peak, which gives about 9 dB more SNR for video.

## References

1. Breeding, Andy (2004). The Music Internet Untangled: Using Online Services to Expand Your Musical Horizons. Giant Path. p. 128. ISBN   9781932340020.
2. "Signal-to-noise ratio". scholarpedia.org.
3. Michael A. Choma, Marinko V. Sarunic, Changhuei Yang, Joseph A. Izatt. Sensitivity advantage of swept source and Fourier domain optical coherence tomography. Optics Express, 11(18). Sept 2003.
4. D. J. Schroeder (1999). Astronomical optics (2nd ed.). Academic Press. p. 433. ISBN   978-0-12-629810-9.
5. Bushberg, J. T., et al., The Essential Physics of Medical Imaging, (2e). Philadelphia: Lippincott Williams & Wilkins, 2006, p. 280.
6. Rafael C. González, Richard Eugene Woods (2008). Digital image processing. Prentice Hall. p. 354. ISBN   0-13-168728-X.
7. Tania Stathaki (2008). Image fusion: algorithms and applications. Academic Press. p. 471. ISBN   0-12-372529-1.
8. Jitendra R. Raol (2009). Multi-Sensor Data Fusion: Theory and Practice. CRC Press. ISBN   1-4398-0003-0.
9. John C. Russ (2007). The image processing handbook. CRC Press. ISBN   0-8493-7254-2.
10. Rose, Albert (1973). Vision – Human and Electronic. Plenum Press. p. 10. ISBN   9780306307324. [...] to reduce the number of false alarms to below unity, we will need [...] a signal whose amplitude is 4–5 times larger than the rms noise.
11. Defining and Testing Dynamic Parameters in High-Speed ADCs. Maxim Integrated Products, Application Note 728.
12. Fixed-Point vs. Floating-Point DSP for Superior Audio. Rane Corporation technical library.