# Eb/N0

In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.

As the name implies, Eb is the signal energy associated with each user data bit; it is equal to the signal power divided by the user bit rate (not the channel symbol rate). If signal power is in watts and bit rate is in bits per second, Eb has units of joules (watt-seconds). N0 is the noise spectral density, the noise power in a 1 Hz bandwidth, measured in watts per hertz (equivalently, joules).

These are the same units as Eb, so the ratio Eb/N0 is dimensionless; it is frequently expressed in decibels. Eb/N0 directly indicates the power efficiency of the system without regard to modulation type, error-correction coding or signal bandwidth (including any use of spread spectrum). This also avoids any confusion as to which of several definitions of "bandwidth" to apply to the signal.

When the signal bandwidth is well defined, Eb/N0 is also equal to the signal-to-noise ratio (SNR) in that bandwidth divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context again refer to user data bits, irrespective of error-correction information and modulation type. [1]
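This relationship can be sketched in a few lines of code (a minimal illustration; the function name is our own, not from any standard library):

```python
import math

def ebn0_db_from_snr(snr_db, bit_rate_bps, bandwidth_hz):
    """Convert an in-band SNR (dB) to Eb/N0 (dB) by dividing out the
    gross link spectral efficiency eta = bit rate / bandwidth,
    where the bit rate counts user data bits."""
    eta = bit_rate_bps / bandwidth_hz          # spectral efficiency, (bit/s)/Hz
    return snr_db - 10 * math.log10(eta)       # division becomes subtraction in dB

# Example: 10 dB SNR in a 1 MHz channel carrying 2 Mbit/s (eta = 2)
# gives Eb/N0 = 10 - 10*log10(2) ≈ 6.99 dB.
print(round(ebn0_db_from_snr(10.0, 2e6, 1e6), 2))
```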

Eb/N0 must be used with care on interference-limited channels since additive white noise (with constant noise density N0) is assumed, and interference is not always noise-like. In spread spectrum systems (e.g., CDMA), the interference is sufficiently noise-like that it can be represented as I0 and added to the thermal noise N0 to produce the overall ratio Eb/(N0 + I0).

## Relation to carrier-to-noise ratio

Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection:

${\displaystyle {\frac {C}{N}}={\frac {E_{\text{b}}}{N_{0}}}{\frac {f_{\text{b}}}{B}}}$

where

fb is the channel data rate (net bit rate), and
B is the channel bandwidth

The equivalent expression in logarithmic form (dB):

${\displaystyle {\text{CNR}}_{\text{dB}}=10\log _{10}\left({\frac {E_{\text{b}}}{N_{0}}}\right)+10\log _{10}\left({\frac {f_{\text{b}}}{B}}\right)}$

Caution: Sometimes, the noise power is denoted by N0/2 when negative frequencies and complex-valued equivalent baseband signals are considered rather than passband signals, and in that case, there will be a 3 dB difference.
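The logarithmic form above translates directly into code (an illustrative sketch; the function name is our own):

```python
import math

def cnr_db(ebn0_db, fb_bps, bandwidth_hz):
    """C/N in dB from Eb/N0 (dB), net bit rate fb (bit/s),
    and channel bandwidth B (Hz): CNR_dB = Eb/N0_dB + 10*log10(fb/B)."""
    return ebn0_db + 10 * math.log10(fb_bps / bandwidth_hz)

# A link with Eb/N0 = 6 dB sending 1 Mbit/s through a 4 MHz channel:
# C/N = 6 + 10*log10(0.25) ≈ -0.02 dB, i.e. the carrier sits just
# below the noise floor even though the per-bit energy is ample.
print(round(cnr_db(6.0, 1e6, 4e6), 2))
```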

## Relation to Es/N0

Eb/N0 can be seen as a normalized measure of the energy per symbol to noise power spectral density (Es/N0):

${\displaystyle {\frac {E_{\text{b}}}{N_{0}}}={\frac {E_{\text{s}}}{\rho N_{0}}}}$,

where Es is the energy per symbol in joules and ρ is the nominal spectral efficiency in (bit/s)/Hz. [2] Es/N0 is also commonly used in the analysis of digital modulation schemes. The two quotients are related as follows:

${\displaystyle {\frac {E_{\text{s}}}{N_{0}}}={\frac {E_{\text{b}}}{N_{0}}}\log _{2}(M)}$,

where M is the number of alternative modulation symbols, e.g. M = 4 for QPSK and M = 8 for 8PSK.

Note that in this relation Eb is the energy per channel bit (each symbol carries log2(M) channel bits), not the energy per information bit.
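The log2(M) conversion is easy to verify numerically (an illustrative sketch; the function name is our own):

```python
import math

def esn0_db(ebn0_db, m):
    """Es/N0 (dB) from Eb/N0 (dB) for an M-ary constellation:
    each symbol carries log2(M) bits, so the energies differ by
    a factor of log2(M), i.e. 10*log10(log2(M)) in dB."""
    return ebn0_db + 10 * math.log10(math.log2(m))

# QPSK (M = 4, 2 bits/symbol): Es/N0 is about 3.01 dB above Eb/N0.
# 8PSK (M = 8, 3 bits/symbol): Es/N0 is about 4.77 dB above Eb/N0.
print(round(esn0_db(0.0, 4), 2), round(esn0_db(0.0, 8), 2))
```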

Es/N0 can further be expressed as:

${\displaystyle {\frac {E_{\text{s}}}{N_{0}}}={\frac {C}{N}}{\frac {B}{f_{\text{s}}}}}$,

where

C/N is the carrier-to-noise ratio or signal-to-noise ratio.
B is the channel bandwidth in hertz.
fs is the symbol rate in baud or symbols per second.
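In linear (non-dB) form this is a one-line computation (an illustrative sketch; the function name is our own):

```python
def esn0_linear(cn_linear, bandwidth_hz, symbol_rate_baud):
    """Es/N0 as a linear ratio from C/N (linear), channel bandwidth B (Hz),
    and symbol rate fs (baud): Es/N0 = (C/N) * (B / fs)."""
    return cn_linear * bandwidth_hz / symbol_rate_baud

# C/N = 4 (about 6.02 dB) in a 2 MHz channel at 1 Mbaud gives Es/N0 = 8:
print(esn0_linear(4.0, 2e6, 1e6))
```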

## Shannon limit

The Shannon–Hartley theorem says that the limit of reliable information rate (data rate exclusive of error-correcting codes) of a channel depends on bandwidth and signal-to-noise ratio according to:

${\displaystyle I<B\log _{2}\left(1+{\frac {S}{N}}\right)}$

where

I is the information rate in bits per second excluding error-correcting codes;
B is the bandwidth of the channel in hertz;
S is the total signal power (equivalent to the carrier power C); and
N is the total noise power in the bandwidth.

This equation can be used to establish a bound on Eb/N0 for any system that achieves reliable communication, by considering a gross bit rate R equal to the net bit rate I and therefore an average energy per bit of Eb = S/R, with noise spectral density of N0 = N/B. For this calculation, it is conventional to define a normalized rate Rl = R/2B, a bandwidth utilization parameter of bits per second per half hertz, or bits per dimension (a signal of bandwidth B can be encoded with 2B dimensions, according to the Nyquist–Shannon sampling theorem). Making appropriate substitutions, the Shannon limit is:

${\displaystyle {R \over B}=2R_{l}<\log _{2}\left(1+2R_{l}{\frac {E_{\text{b}}}{N_{0}}}\right)}$

which can be rearranged to give the Shannon-limit bound on Eb/N0:

${\displaystyle {\frac {E_{\text{b}}}{N_{0}}}>{\frac {2^{2R_{l}}-1}{2R_{l}}}}$

When the data rate is small compared to the bandwidth, so that Rl is near zero, the bound, sometimes called the ultimate Shannon limit, [3] is:

${\displaystyle {\frac {E_{\text{b}}}{N_{0}}}>\ln(2)}$

which corresponds to −1.59 dB.
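Both the finite-rate bound and its −1.59 dB limit can be checked numerically (an illustrative sketch; the function name is our own):

```python
import math

def ebn0_shannon_bound_db(rl):
    """Minimum Eb/N0 (dB) for reliable communication at normalized rate
    Rl = R / (2B) bits per dimension: Eb/N0 > (2**(2*Rl) - 1) / (2*Rl)."""
    return 10 * math.log10((2 ** (2 * rl) - 1) / (2 * rl))

# At Rl = 1 (2 bits per second per hertz) the bound is 10*log10(3/2) ≈ 1.76 dB;
# as Rl -> 0 it approaches 10*log10(ln 2) ≈ -1.59 dB, the ultimate Shannon limit.
print(round(ebn0_shannon_bound_db(1.0), 2))
print(round(ebn0_shannon_bound_db(1e-6), 2))
```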

This often-quoted limit of −1.59 dB applies only to the theoretical case of infinite bandwidth. The Shannon limit for finite-bandwidth signals is always higher.

## Cutoff rate

For any given system of coding and decoding, there exists what is known as a cutoff rate R0, typically corresponding to an Eb/N0 about 2 dB above the Shannon capacity limit. [citation needed] The cutoff rate used to be thought of as the limit on practical error correction codes without an unbounded increase in processing complexity, but has been rendered largely obsolete by the more recent discovery of turbo codes and low-density parity-check (LDPC) codes.

## References

1. Chris Heegard and Stephen B. Wicker (1999). *Turbo Coding*. Kluwer. p. 3. ISBN 978-0-7923-8378-9.
2. Forney, David. "MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes section 4.2" (PDF). Retrieved 8 November 2017.
3. Nevio Benvenuto and Giovanni Cherubini (2002). *Algorithms for Communications Systems and Their Applications*. John Wiley & Sons. p. 508. ISBN 0-470-84389-6.