Eb/N0

[Figure: PSK BER curves.svg — Bit-error rate (BER) vs Eb/N0 curves for different digital modulation methods, a common application example of Eb/N0. Here an AWGN channel is assumed.]

In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.
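As an illustration of how such BER curves arise, the theoretical bit-error probability of BPSK over an AWGN channel is Pb = Q(sqrt(2·Eb/N0)), which simplifies to 0.5·erfc(sqrt(Eb/N0)). A minimal Python sketch (the function name is illustrative):

```python
import math

def bpsk_ber(ebn0_db):
    """Theoretical BPSK bit-error probability over AWGN.

    Pb = Q(sqrt(2*Eb/N0)), where Q(x) = 0.5*erfc(x/sqrt(2)),
    so Pb = 0.5*erfc(sqrt(Eb/N0)).
    """
    ebn0 = 10 ** (ebn0_db / 10)  # convert dB to a linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))

# BER falls steeply as Eb/N0 grows
for db in (0, 4, 8):
    print(f"Eb/N0 = {db} dB -> BER = {bpsk_ber(db):.2e}")
```

At Eb/N0 = 0 dB this gives a BER of roughly 7.9 × 10⁻², matching the high-error end of a typical BPSK curve.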


As the description implies, Eb is the signal energy associated with each user data bit; it is equal to the signal power divided by the user bit rate (not the channel symbol rate). If signal power is in watts and bit rate is in bits per second, Eb is in units of joules (watt-seconds). N0 is the noise spectral density, the noise power in a 1 Hz bandwidth, measured in watts per hertz or, equivalently, joules.

These are the same units as Eb, so the ratio Eb/N0 is dimensionless; it is frequently expressed in decibels. Eb/N0 directly indicates the power efficiency of the system without regard to modulation type, error correction coding or signal bandwidth (including any use of spread spectrum). This also avoids any confusion as to which of several definitions of "bandwidth" to apply to the signal.

But when the signal bandwidth is well defined, Eb/N0 is also equal to the signal-to-noise ratio (SNR) in that bandwidth divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context again refer to user data bits, irrespective of error correction information and modulation type. [1]

Eb/N0 must be used with care on interference-limited channels, since additive white noise (with constant noise density N0) is assumed, and interference is not always noise-like. In spread spectrum systems (e.g., CDMA), however, the interference is sufficiently noise-like that it can be represented as an interference density I0 and added to the thermal noise to produce the overall ratio Eb/(N0 + I0).

Relation to carrier-to-noise ratio

Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection:

  C/N = (Eb/N0) · (fb/B)

where
  fb is the channel data rate (net bit rate) and
  B is the channel bandwidth.

The equivalent expression in logarithmic form (dB):

  CNR(dB) = 10 log10(Eb/N0) + 10 log10(fb/B)

Caution: Sometimes, the noise spectral density is denoted by N0/2 when negative frequencies and complex-valued equivalent baseband signals are considered rather than passband signals, and in that case, there will be a 3 dB difference.
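The relation C/N = (Eb/N0)·(fb/B) is easy to check numerically. A small sketch, assuming the passband (N0, not N0/2) convention; the function name is illustrative:

```python
import math

def ebn0_db_from_cnr(cnr_db, bit_rate, bandwidth):
    """Eb/N0 (dB) = C/N (dB) - 10*log10(fb/B),
    rearranged from C/N = (Eb/N0) * (fb/B)."""
    return cnr_db - 10 * math.log10(bit_rate / bandwidth)

# hypothetical link: 10 dB CNR, 2 Mbit/s carried in a 1 MHz channel
print(ebn0_db_from_cnr(10.0, 2e6, 1e6))  # 10 - 10*log10(2) ≈ 6.99 dB
```

Packing more bits per second into the same bandwidth (higher fb/B) leaves less Eb/N0 for each bit at a fixed CNR, which is the trade-off the formula captures.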

Relation to Es/N0

Eb/N0 can be seen as a normalized measure of the energy per symbol to noise power spectral density (Es/N0):

  Eb/N0 = Es/(ρ · N0)

where Es is the energy per symbol in joules and ρ is the nominal spectral efficiency in (bit/s)/Hz. [2] Es/N0 is also commonly used in the analysis of digital modulation schemes. The two quotients are related to each other according to the following:

  Es/N0 = (Eb/N0) · log2(M)

where M is the number of alternative modulation symbols, e.g. M = 4 for QPSK and M = 8 for 8PSK.

Note that Eb here is the energy per channel bit, not the energy per information bit.
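The relation Es/N0 = (Eb/N0)·log2(M) is often applied in decibels, where the log2(M) factor becomes an additive offset. A brief sketch (function name is illustrative):

```python
import math

def esn0_db_from_ebn0(ebn0_db, m):
    """Es/N0 (dB) = Eb/N0 (dB) + 10*log10(log2(M)) for M-ary symbols."""
    return ebn0_db + 10 * math.log10(math.log2(m))

# QPSK (M = 4) carries log2(4) = 2 bits per symbol,
# so Es/N0 sits about 3.01 dB above Eb/N0
print(esn0_db_from_ebn0(6.0, 4))
```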

Es/N0 can further be expressed as:

  Es/N0 = (C/N) · (B/fs)

where
  C/N is the carrier-to-noise ratio or signal-to-noise ratio,
  B is the channel bandwidth in hertz, and
  fs is the symbol rate in baud or symbols per second.
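The relation Es/N0 = (C/N)·(B/fs) can likewise be sketched in decibels (function and parameter names are illustrative):

```python
import math

def esn0_db_from_cnr(cnr_db, bandwidth, symbol_rate):
    """Es/N0 (dB) = C/N (dB) + 10*log10(B/fs),
    from Es/N0 = (C/N) * (B/fs)."""
    return cnr_db + 10 * math.log10(bandwidth / symbol_rate)

# matched-bandwidth case B = fs: Es/N0 equals C/N
print(esn0_db_from_cnr(12.0, 1e6, 1e6))  # -> 12.0
```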

Shannon limit

The Shannon–Hartley theorem says that the limit of reliable information rate (data rate exclusive of error-correcting codes) of a channel depends on bandwidth and signal-to-noise ratio according to:

  I ≤ B log2(1 + S/N)

where
  I is the information rate in bits per second excluding error-correcting codes,
  B is the bandwidth of the channel in hertz,
  S is the total signal power (equivalent to the carrier power C), and
  N is the total noise power in the bandwidth.

This equation can be used to establish a bound on Eb/N0 for any system that achieves reliable communication, by considering a gross bit rate R equal to the net bit rate I and therefore an average energy per bit of Eb = S/R, with noise spectral density of N0 = N/B. For this calculation, it is conventional to define a normalized rate Rl = R/(2B), a bandwidth utilization parameter of bits per second per half hertz, or bits per dimension (a signal of bandwidth B can be encoded with 2B dimensions, according to the Nyquist–Shannon sampling theorem). Making appropriate substitutions, the Shannon limit is:

  Rl < (1/2) log2(1 + 2·Rl·Eb/N0)

which can be solved to get the Shannon-limit bound on Eb/N0:

  Eb/N0 > (2^(2·Rl) − 1)/(2·Rl)

When the data rate is small compared to the bandwidth, so that Rl is near zero, the bound, sometimes called the ultimate Shannon limit, [3] is:

  Eb/N0 > ln 2 ≈ 0.693

which corresponds to −1.59 dB.

This often-quoted limit of −1.59 dB applies only to the theoretical case of infinite bandwidth. The Shannon limit for finite-bandwidth signals is always higher.
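The bound Eb/N0 > (2^(2·Rl) − 1)/(2·Rl) and its low-rate limit can be evaluated directly. A short sketch (function name is illustrative):

```python
import math

def shannon_ebn0_db(rate_per_dim):
    """Minimum Eb/N0 (dB) for reliable communication at normalized rate
    Rl in bits per dimension: Eb/N0 > (2**(2*Rl) - 1) / (2*Rl)."""
    linear = (2 ** (2 * rate_per_dim) - 1) / (2 * rate_per_dim)
    return 10 * math.log10(linear)

# as Rl -> 0 the bound approaches ln(2), i.e. about -1.59 dB
print(shannon_ebn0_db(1e-6))
# at Rl = 1 bit/dimension the bound is already higher
print(shannon_ebn0_db(1.0))  # ≈ 1.76 dB
```

This makes the last point concrete: the famous −1.59 dB figure is the zero-rate limit, and the required Eb/N0 rises monotonically with spectral efficiency.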

Cutoff rate

For any given system of coding and decoding, there exists what is known as a cutoff rate R0, typically corresponding to an Eb/N0 about 2 dB above the Shannon capacity limit. The cutoff rate used to be thought of as the limit on practical error-correction codes without an unbounded increase in processing complexity, but it has been rendered largely obsolete by the more recent discovery of turbo codes, low-density parity-check (LDPC) codes, and polar codes.


References

  1. Chris Heegard and Stephen B. Wicker (1999). Turbo Coding. Kluwer. p. 3. ISBN 978-0-7923-8378-9.
  2. Forney, David. "MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes section 4.2" (PDF). Retrieved 8 November 2017.
  3. Nevio Benvenuto and Giovanni Cherubini (2002). Algorithms for Communications Systems and Their Applications. John Wiley & Sons. p. 508. ISBN 0-470-84389-6.