# Signal-to-quantization-noise ratio

Signal-to-quantization-noise ratio (SQNR or SNqR) is a widely used quality measure in analysing digitizing schemes such as pulse-code modulation (PCM). The SQNR reflects the relationship between the maximum nominal signal strength and the quantization error (also known as quantization noise) introduced in the analog-to-digital conversion.

The SQNR formula is derived from the general signal-to-noise ratio (SNR) formula:

${\displaystyle \mathrm {SNR} ={\frac {3\times 2^{2n}}{1+4P_{e}\times (2^{2n}-1)}}{\frac {\overline {m(t)^{2}}}{m_{p}^{2}}}}$

where:

${\displaystyle P_{e}}$ is the probability of received bit error
${\displaystyle m_{p}}$ is the peak message signal level
${\displaystyle {\overline {m(t)^{2}}}}$ is the mean-square message signal power

As SQNR applies to quantized signals, the formulae for SQNR refer to discrete-time digital signals. Instead of ${\displaystyle m(t)}$, the digitized signal ${\displaystyle x(n)}$ will be used. For ${\displaystyle N}$ quantization steps, each sample ${\displaystyle x}$ requires ${\displaystyle \nu =\log _{2}N}$ bits. The probability density function (pdf) representing the distribution of values in ${\displaystyle x}$ is denoted ${\displaystyle f(x)}$. The maximum magnitude of any ${\displaystyle x}$ is denoted ${\displaystyle x_{max}}$.

As SQNR, like SNR, is a ratio of signal power to some noise power, it can be calculated as:

${\displaystyle \mathrm {SQNR} ={\frac {P_{signal}}{P_{noise}}}={\frac {E[x^{2}]}{E[{\tilde {x}}^{2}]}}}$

The signal power is:

${\displaystyle {\overline {x^{2}}}=E[x^{2}]=P_{x^{\nu }}=\int _{-x_{max}}^{x_{max}}x^{2}f(x)\,dx}$

The quantization noise power can be expressed as:

${\displaystyle E[{\tilde {x}}^{2}]={\frac {x_{max}^{2}}{3\times 4^{\nu }}}}$
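This noise-power expression can be checked numerically. For a uniform mid-rise quantizer with step size ${\displaystyle \Delta =2x_{max}/2^{\nu }}$, the expression ${\displaystyle x_{max}^{2}/(3\times 4^{\nu })}$ equals the familiar ${\displaystyle \Delta ^{2}/12}$. A minimal Monte Carlo sketch (not from the source; the quantizer construction here is one common convention):

```python
import numpy as np

rng = np.random.default_rng(0)
x_max = 1.0
nu = 8                                  # bits per sample
delta = 2 * x_max / 2 ** nu             # step size of a uniform mid-rise quantizer

# Samples uniformly distributed over the full quantizer range [-x_max, x_max)
x = rng.uniform(-x_max, x_max, 1_000_000)

# Mid-rise quantization: map each sample to the centre of its cell
xq = (np.floor(x / delta) + 0.5) * delta
err = x - xq                            # quantization error, uniform on (-delta/2, delta/2]

measured = np.mean(err ** 2)            # empirical noise power E[err^2]
predicted = x_max ** 2 / (3 * 4 ** nu)  # equivalently delta**2 / 12

print(measured, predicted)
```

The two values agree to within a fraction of a percent, since the error of a fine uniform quantizer is (approximately) uniformly distributed over one quantization cell.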

Giving:

${\displaystyle \mathrm {SQNR} ={\frac {3\times 4^{\nu }\times {\overline {x^{2}}}}{x_{max}^{2}}}}$

When the SQNR is desired in terms of decibels (dB), a useful approximation to SQNR is:

${\displaystyle \mathrm {SQNR} |_{dB}=P_{x^{\nu }}+6.02\nu +4.77}$

where ${\displaystyle \nu }$ is the number of bits in a quantized sample, and ${\displaystyle P_{x^{\nu }}=10\log _{10}\left({\overline {x^{2}}}/x_{max}^{2}\right)}$ is the normalized signal power expressed in dB. Note that for each bit added to a sample, the SQNR goes up by approximately 6.02 dB (${\displaystyle 20\log _{10}2}$).
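The roughly 6 dB-per-bit rule is easy to observe empirically. In this sketch (a simulation under the same uniform mid-rise quantizer assumption as above, not from the source), the same full-scale test signal is quantized at several bit depths:

```python
import numpy as np

rng = np.random.default_rng(1)
x_max = 1.0
# Full-scale uniform test signal, reused at every bit depth
x = rng.uniform(-x_max, x_max, 500_000)

def sqnr_db(x, nu):
    """Measured SQNR in dB for a nu-bit uniform mid-rise quantizer."""
    delta = 2 * x_max / 2 ** nu
    xq = (np.floor(x / delta) + 0.5) * delta
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))

for nu in (6, 8, 10, 12):
    print(nu, round(sqnr_db(x, nu), 2))
# Each additional bit raises the SQNR by about 6.02 dB
```

Consecutive bit depths differ by about 6.02 dB, matching the ${\displaystyle 20\log _{10}2}$ term in the approximation.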

## Related Research Articles

In electronics, an analog-to-digital converter is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement such as an electronic device that converts an input analog voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities.

Noise figure (NF) and noise factor (F) are measures of degradation of the signal-to-noise ratio (SNR), caused by components in a signal chain. It is a number by which the performance of an amplifier or a radio receiver can be specified, with lower values indicating better performance.

Signal-to-noise ratio is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to the noise power, often expressed in decibels. A ratio higher than 1:1 indicates more signal than noise.

In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free information per time unit that can be transmitted with a specified bandwidth in the presence of the noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.

Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.

In signal processing, a matched filter is obtained by correlating a known delayed signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise.

In digital communication or data transmission, Eb/N0 is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.

In probability theory, the Rice distribution or Rician distribution is the probability distribution of the magnitude of a circularly-symmetric bivariate normal random variable, possibly with non-zero mean (noncentral). It was named after Stephen O. Rice.

In telecommunications, the carrier-to-noise ratio, often written CNR or C/N, is the signal-to-noise ratio (SNR) of a modulated signal. The term is used to distinguish the CNR of the radio frequency passband signal from the SNR of an analog base band message signal after demodulation, for example an audio frequency analog message signal. If this distinction is not necessary, the term SNR is often used instead of CNR, with the same definition.

In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc which can support up to 24 bits per sample.

Effective number of bits (ENOB) is a measure of the dynamic range of an analog-to-digital converter (ADC), digital-to-analog converter, or their associated circuitry. The resolution of an ADC is specified by the number of bits used to represent the analog value. Ideally, a 12-bit ADC will have an effective number of bits of almost 12. However, real signals have noise, and real circuits are imperfect and introduce additional noise and distortion. Those imperfections reduce the number of bits of accuracy in the ADC. The ENOB describes the effective resolution of the system in bits. An ADC may have 12-bit resolution, but the effective number of bits when used in a system may be 9.5.

Signal averaging is a signal processing technique applied in the time domain, intended to increase the strength of a signal relative to noise that is obscuring it. By averaging a set of replicate measurements, the signal-to-noise ratio (SNR) will be increased, ideally in proportion to the square root of the number of measurements.
