Name | Symbol | Multiple | |
---|---|---|---|
bit per second | bit/s | 1 | 1 |
Metric prefixes (SI) | | | |
kilobit per second | kbit/s | 10^3 | 1000^1 |
megabit per second | Mbit/s | 10^6 | 1000^2 |
gigabit per second | Gbit/s | 10^9 | 1000^3 |
terabit per second | Tbit/s | 10^12 | 1000^4 |
Binary prefixes (IEC 80000-13) | | | |
kibibit per second | Kibit/s | 2^10 | 1024^1 |
mebibit per second | Mibit/s | 2^20 | 1024^2 |
gibibit per second | Gibit/s | 2^30 | 1024^3 |
tebibit per second | Tibit/s | 2^40 | 1024^4 |
In telecommunications and computing, bit rate (bitrate or as a variable R) is the number of bits that are conveyed or processed per unit of time. [1]
The bit rate is expressed in the unit bit per second (symbol: bit/s), often in conjunction with an SI prefix such as kilo (1 kbit/s = 1,000 bit/s), mega (1 Mbit/s = 1,000 kbit/s), giga (1 Gbit/s = 1,000 Mbit/s) or tera (1 Tbit/s = 1,000 Gbit/s). [2] The non-standard abbreviation bps is often used to replace the standard symbol bit/s, so that, for example, 1 Mbps is used to mean one million bits per second.
In most computing and digital communication environments, one byte per second (symbol: B/s) corresponds to 8 bit/s.
When quantifying large or small bit rates, SI prefixes (also known as metric prefixes or decimal prefixes) are used, thus: [3]
0.001 bit/s | = 1 mbit/s (one millibit per second, i.e., one bit per thousand seconds) |
1 bit/s | = 1 bit/s (one bit per second) |
1,000 bit/s | = 1 kbit/s (one kilobit per second, i.e., one thousand bits per second) |
1,000,000 bit/s | = 1 Mbit/s (one megabit per second, i.e., one million bits per second) |
1,000,000,000 bit/s | = 1 Gbit/s (one gigabit per second, i.e., one billion bits per second) |
1,000,000,000,000 bit/s | = 1 Tbit/s (one terabit per second, i.e., one trillion bits per second) |
Binary prefixes are sometimes used for bit rates. [4] [5] The International Standard (IEC 80000-13) specifies different symbols for binary and decimal (SI) prefixes (e.g., 1 KiB/s = 1024 B/s = 8192 bit/s, and 1 MiB/s = 1024 KiB/s).
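As a minimal illustration of the decimal and binary multipliers above, the following Python sketch converts prefixed rates to plain bit/s (the dictionaries and the function name are illustrative, not part of any standard library):

```python
# Decimal (SI) and binary (IEC 80000-13) multipliers, as defined above.
SI_PREFIX = {"": 1, "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
IEC_PREFIX = {"": 1, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def bits_per_second(value, prefix=""):
    """Interpret a value with an SI or IEC prefix and return plain bit/s."""
    multiplier = SI_PREFIX.get(prefix) or IEC_PREFIX[prefix]
    return value * multiplier

print(bits_per_second(1, "M"))       # 1 Mbit/s  -> 1,000,000 bit/s
print(bits_per_second(1, "Mi"))      # 1 Mibit/s -> 1,048,576 bit/s
print(8 * bits_per_second(1, "Ki"))  # 1 KiB/s = 8 Kibit/s -> 8,192 bit/s
```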
In digital communication systems, the physical layer gross bitrate, [6] raw bitrate, [7] data signaling rate, [8] gross data transfer rate [9] or uncoded transmission rate [7] (sometimes written as a variable Rb [6] [7] or fb [10]) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead.
In the case of serial communication, the gross bit rate is related to the bit transmission time Tb as:
Rb = 1/Tb
The gross bit rate is related to the symbol rate or modulation rate, which is expressed in bauds or symbols per second. However, the gross bit rate and the baud value are equal only when there are only two levels per symbol, representing 0 and 1, meaning that each symbol of a data transmission system carries exactly one bit of data; for example, this is not the case for modern modulation systems used in modems and LAN equipment. [11]
For most line codes and modulation methods:
symbol rate ≤ gross bit rate
More specifically, a line code (or baseband transmission scheme) representing the data using pulse-amplitude modulation with 2^N different voltage levels can transfer N bits per pulse. A digital modulation method (or passband transmission scheme) using 2^N different symbols, for example 2^N amplitudes, phases or frequencies, can transfer N bits per symbol. This results in:
gross bit rate = symbol rate × N
An exception to the above is some self-synchronizing line codes, for example Manchester coding and return-to-zero (RTZ) coding, where each bit is represented by two pulses (signal states), resulting in:
gross bit rate = symbol rate / 2
A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain spectral bandwidth in hertz is given by the Nyquist law:
symbol rate ≤ Nyquist rate = 2 × bandwidth
In practice this upper bound can only be approached for line coding schemes and for so-called vestigial sideband digital modulation. Most other digital carrier-modulated schemes, for example ASK, PSK, QAM and OFDM, can be characterized as double sideband modulation, resulting in the following relation:
symbol rate ≤ bandwidth
In the case of parallel communication, the gross bit rate is given by
Rb = Σ_{i=1}^{n} log2(Mi) / Ti
where n is the number of parallel channels, Mi is the number of symbols or levels of the modulation in the i-th channel, and Ti is the symbol duration time, expressed in seconds, for the i-th channel.
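As a minimal sketch of these relations, assuming the formulas reconstructed above (gross bit rate = symbol rate × bits per symbol for a serial link, and the per-channel sum for parallel links), the following Python example uses illustrative function names and figures:

```python
import math

def gross_bit_rate_serial(symbol_rate_baud, levels):
    """Gross bit rate = symbol rate * log2(M) for a scheme with M symbols or levels."""
    return symbol_rate_baud * math.log2(levels)

def gross_bit_rate_parallel(channels):
    """Sum of log2(Mi) / Ti over the parallel channels, given (Mi, Ti) pairs."""
    return sum(math.log2(m) / t for m, t in channels)

# A binary (2-level) line code at 125 megabaud carries 125 Mbit/s gross (cf. 100BASE-TX).
print(gross_bit_rate_serial(125e6, 2) / 1e6)     # 125.0 Mbit/s

# 16-QAM (16 symbols) at 10 megabaud carries 4 bits per symbol.
print(gross_bit_rate_serial(10e6, 16) / 1e6)     # 40.0 Mbit/s

# Two parallel 4-level channels with 1 microsecond symbol duration each.
print(gross_bit_rate_parallel([(4, 1e-6), (4, 1e-6)]) / 1e6)   # 4.0 Mbit/s

# Nyquist bound: the baseband symbol rate cannot exceed twice the bandwidth
# (the bandwidth value here is assumed only for illustration).
bandwidth_hz = 62.5e6
assert 125e6 <= 2 * bandwidth_hz
```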
The physical layer net bitrate, [12] information rate, [6] useful bit rate, [13] payload rate, [14] net data transfer rate, [9] coded transmission rate, [7] effective data rate [7] or wire speed (informal language) of a digital communication channel is the capacity excluding the physical layer protocol overhead, for example time division multiplex (TDM) framing bits, redundant forward error correction (FEC) codes, equalizer training symbols and other channel coding. Error-correcting codes are common especially in wireless communication systems, broadband modem standards and modern copper-based high-speed LANs. The physical layer net bitrate is the data rate measured at a reference point in the interface between the data link layer and the physical layer, and may consequently include data link and higher layer overhead.
In modems and wireless systems, link adaptation (automatic adaptation of the data rate and the modulation and/or error coding scheme to the signal quality) is often applied. In that context, the term peak bitrate denotes the net bitrate of the fastest and least robust transmission mode, used for example when the distance between sender and receiver is very short. [15] Some operating systems and network equipment may detect the "connection speed" [16] (informal language) of a network access technology or communication device, implying the current net bit rate. The term line rate in some textbooks is defined as gross bit rate, [14] in others as net bit rate.
The relationship between the gross bit rate and net bit rate is affected by the FEC code rate according to the following:
net bit rate ≤ gross bit rate × code rate
The connection speed of a technology that involves forward error correction typically refers to the physical layer net bit rate in accordance with the above definition.
For example, the net bit rate (and thus the "connection speed") of an IEEE 802.11a wireless network is between 6 and 54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s, inclusive of error-correcting codes.
The net bit rate of the ISDN2 Basic Rate Interface (2 B-channels + 1 D-channel), 64 + 64 + 16 = 144 kbit/s, also refers to the payload data rates, while the D-channel signalling rate is 16 kbit/s.
The net bit rate of the Ethernet 100BASE-TX physical layer standard is 100 Mbit/s, while the gross bitrate is 125 Mbit/s, due to the 4B5B (four bit over five bit) encoding. In this case, the gross bit rate is equal to the symbol rate or pulse rate of 125 megabaud, due to the NRZI line code.
In communications technologies without forward error correction and other physical layer protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate. For example, the net as well as gross bit rate of Ethernet 10BASE-T is 10 Mbit/s. Due to the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20 megabaud.
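The net-versus-gross relation can be checked against the figures quoted above; the following is a minimal sketch, assuming net bit rate = gross bit rate × code rate, with illustrative names:

```python
def net_bit_rate(gross_bit_rate, code_rate):
    """Net bit rate after removing FEC and line-coding overhead."""
    return gross_bit_rate * code_rate

# 100BASE-TX: 4B5B coding carries 4 useful bits in every 5 transmitted bits.
print(net_bit_rate(125e6, 4 / 5) / 1e6)   # 100.0 Mbit/s

# IEEE 802.11a lowest mode: 12 Mbit/s gross with rate-1/2 coding.
print(net_bit_rate(12e6, 1 / 2) / 1e6)    # 6.0 Mbit/s

# 10BASE-T has no FEC, so gross and net coincide (code rate 1).
print(net_bit_rate(10e6, 1) / 1e6)        # 10.0 Mbit/s
```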
The "connection speed" of a V.92 voiceband modem typically refers to the gross bit rate, since there is no additional error-correction code. It can be up to 56,000 bit/s downstream and 48,000 bit/s upstream. A lower bit rate may be chosen during the connection establishment phase due to adaptive modulation –slower but more robust modulation schemes are chosen in case of poor signal-to-noise ratio. Due to data compression, the actual data transmission rate or throughput (see below) may be higher.
The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the maximum net bitrate, exclusive of forward error correction coding, that is possible without bit errors for a certain physical analog node-to-node communication link.
The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is called Hartley's law. Consequently, the net bit rate is sometimes called digital bandwidth capacity in bit/s.
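The Shannon capacity is commonly written via the Shannon–Hartley theorem, C = B × log2(1 + S/N), which makes the proportionality to bandwidth explicit; the following is a minimal sketch with illustrative numbers:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bit/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A roughly 3 kHz voiceband channel with a 30 dB signal-to-noise ratio
# (values chosen only for illustration).
snr_linear = 10 ** (30 / 10)                # 30 dB -> 1000
print(shannon_capacity(3000, snr_linear))   # about 29,900 bit/s
```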
The term throughput, essentially the same thing as digital bandwidth consumption, denotes the achieved average useful bit rate in a computer network over a logical or physical communication link or through a network node, typically measured at a reference point above the data link layer. This implies that the throughput often excludes data link layer protocol overhead. The throughput is affected by the traffic load from the data source in question, as well as from other sources sharing the same network resources. See also measuring network throughput.
Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to the application layer, exclusive of all protocol overhead, retransmissions of data packets, etc. For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate. The file transfer rate in bit/s can be calculated as the file size (in bytes) divided by the file transfer time (in seconds) and multiplied by eight.
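A minimal sketch of the file-transfer-rate calculation just described (the function name and example figures are illustrative):

```python
def file_transfer_rate_bit_s(file_size_bytes, transfer_time_s):
    """Goodput in bit/s: file size in bytes divided by transfer time, times eight."""
    return file_size_bytes / transfer_time_s * 8

# A 25,000,000-byte file transferred in 20 seconds.
print(file_transfer_rate_bit_s(25_000_000, 20) / 1e6)   # 10.0 Mbit/s goodput
```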
As an example, the goodput or data transfer rate of a V.92 voiceband modem is affected by the modem physical layer and data link layer protocols. It is sometimes higher than the physical layer data rate due to V.44 data compression, and sometimes lower due to bit-errors and automatic repeat request retransmissions.
If no data compression is provided by the network equipment or protocols, we have the following relation:
goodput ≤ throughput ≤ maximum throughput ≤ net bit rate
for a certain communication path.
These are examples of physical layer net bit rates in proposed communication standard interfaces and devices:
(Examples grouped into WAN modems, Ethernet LAN, Wi-Fi WLAN and mobile data.)
In digital multimedia, bit rate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors, including the sampling rate of the original material, the bit depth of each sample, the encoding scheme, and the degree of data compression applied.
Generally, choices are made about these factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played.
If lossy data compression is used on audio or visual data, differences from the original signal will be introduced; if the compression is substantial, or lossy data is decompressed and recompressed, this may become noticeable in the form of compression artifacts. Whether these affect the perceived quality, and if so how much, depends on the compression scheme, encoder power, the characteristics of the input data, the listener's perceptions, the listener's familiarity with artifacts, and the listening or viewing environment.
The encoding bit rate of a multimedia file is its size in bytes divided by the playback time of the recording (in seconds), multiplied by eight.
For real-time streaming multimedia, the encoding bit rate is the goodput that is required to avoid playback interruption.
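A minimal sketch of the encoding-bit-rate definition above, together with the streaming condition that the available goodput must be at least the encoding bit rate (names and figures are illustrative):

```python
def encoding_bit_rate(file_size_bytes, playback_time_s):
    """Encoding bit rate in bit/s: size in bytes divided by playback time, times eight."""
    return file_size_bytes / playback_time_s * 8

# A 4-minute (240 s) audio file of 3,840,000 bytes.
rate = encoding_bit_rate(3_840_000, 240)
print(rate / 1000)                 # 128.0 kbit/s

# Uninterrupted real-time streaming requires goodput >= encoding bit rate.
available_goodput = 256_000        # bit/s, assumed for illustration
print(available_goodput >= rate)   # True
```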
The term average bitrate is used in case of variable bitrate multimedia source coding schemes. In this context, the peak bit rate is the maximum number of bits required for any short-term block of compressed data. [17]
A theoretical lower bound for the encoding bit rate for lossless data compression is the source information rate, also known as the entropy rate.
The bitrates in this section are approximately the minimum that the average listener in a typical listening or viewing environment, when using the best available compression, would perceive as not significantly worse than the reference standard.
Compact Disc Digital Audio (CD-DA) uses 44,100 samples per second, each with a bit depth of 16, a format sometimes abbreviated as "16-bit / 44.1 kHz". CD-DA is also stereo, using a left and a right channel, so the amount of audio data per second is double that of mono, where only a single channel is used.
The bit rate of PCM audio data can be calculated with the following formula:
bit rate = sample rate × bit depth × number of channels
For example, the bit rate of a CD-DA recording (44.1 kHz sampling rate, 16 bits per sample and two channels) can be calculated as follows:
44,100 × 16 × 2 = 1,411,200 bit/s = 1,411.2 kbit/s
The cumulative size of a length of PCM audio data (excluding a file header or other metadata) can be calculated using the following formula:
size in bits = sample rate × bit depth × number of channels × length of time in seconds
The cumulative size in bytes can be found by dividing the file size in bits by the number of bits in a byte, which is eight:
size in bytes = size in bits / 8
Therefore, 80 minutes (4,800 seconds) of CD-DA data requires 846,720,000 bytes of storage:
44,100 × 16 × 2 × 4,800 / 8 = 846,720,000 bytes ≈ 807.5 MiB
where MiB is the mebibyte with binary prefix Mi, meaning 2^20 = 1,048,576.
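The CD-DA arithmetic above can be verified with a short calculation; the constants follow the figures given in the text:

```python
SAMPLE_RATE = 44_100       # samples per second
BIT_DEPTH = 16             # bits per sample
CHANNELS = 2               # stereo

bit_rate = SAMPLE_RATE * BIT_DEPTH * CHANNELS
print(bit_rate)            # 1,411,200 bit/s = 1,411.2 kbit/s

seconds = 80 * 60          # 80 minutes of audio
size_bytes = bit_rate * seconds // 8
print(size_bytes)          # 846,720,000 bytes
print(size_bytes / 2**20)  # about 807.5 MiB (binary prefix)
```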
The MP3 audio format provides lossy data compression. Audio quality improves with increasing bit rate.
For technical reasons (hardware/software protocols, overheads, encoding schemes, etc.), the actual bit rates used by some of the devices compared may be significantly higher than what is listed above. For example, telephone circuits using μ-law or A-law companding (pulse code modulation) yield 64 kbit/s.
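The 64 kbit/s figure follows from the PCM bit rate formula above: 8,000 samples per second at 8 bits per sample on a single channel; a one-line sketch:

```python
# Toll-quality telephony (e.g. G.711 PCM): 8,000 samples/s, 8 bits per sample
# (mu-law or A-law companded), one channel -> the 64 kbit/s quoted above.
print(8_000 * 8 * 1)   # 64,000 bit/s
```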
Enhanced Data rates for GSM Evolution (EDGE), also known as 2.75G, Enhanced GPRS (EGPRS), IMT Single Carrier (IMT-SC), and Enhanced Data rates for Global Evolution, is a 2G digital mobile phone technology for data transmission. It is a subset of General Packet Radio Service (GPRS) on the GSM network and improves upon it, offering speeds close to 3G technology, hence the name 2.75G. It is also recognized as part of the International Mobile Telecommunications-2000 (IMT-2000) standard.
MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to about 1.5 Mbit/s without excessive quality loss, making video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) practical.
Network throughput refers to the rate of message delivery over a communication channel in a communication network, such as Ethernet or packet radio. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in packets per second or data packets per time slot.
In telecommunications and electronics, baud is a common unit of measurement of symbol rate, which is one of the components that determine the speed of communication over a data channel.
G.711 is a narrowband audio codec originally designed for use in telephony that provides toll-quality audio at 64 kbit/s. It is an ITU-T standard (Recommendation) for audio encoding, titled Pulse code modulation (PCM) of voice frequencies, released for use in 1972.
MPEG-1 Audio Layer II or MPEG-2 Audio Layer II is a lossy audio compression format defined by ISO/IEC 11172-3 alongside MPEG-1 Audio Layer I and MPEG-1 Audio Layer III (MP3). While MP3 is much more popular for PC and Internet applications, MP2 remains a dominant standard for audio broadcasting.
Digital Radio Mondiale is a set of digital audio broadcasting technologies designed to work over the bands currently used for analogue radio broadcasting including AM broadcasting—particularly shortwave—and FM broadcasting. DRM is more spectrally efficient than AM and FM, allowing more stations, at higher quality, into a given amount of bandwidth, using xHE-AAC audio coding format. Various other MPEG-4 codecs and Opus are also compatible, but the standard now specifies xHE-AAC.
Near Instantaneous Companded Audio Multiplex (NICAM) is an early form of lossy compression for digital audio. It was originally developed in the early 1970s for point-to-point links within broadcasting networks. In the 1980s, broadcasters began to use NICAM compression for transmissions of stereo TV sound to the public.
Continuously variable slope delta modulation is a voice coding method. It is a delta modulation with variable step size, first proposed by Greefkes and Riemens in 1970.
G.722 is an ITU-T standard 7 kHz wideband audio codec operating at 48, 56 and 64 kbit/s. It was approved by ITU-T in November 1988. The technology of the codec is based on sub-band ADPCM (SB-ADPCM). The corresponding narrow-band codec based on the same technology is G.726.
Spectral efficiency, spectrum efficiency or bandwidth efficiency refers to the information rate that can be transmitted over a given bandwidth in a specific communication system. It is a measure of how efficiently a limited frequency spectrum is utilized by the physical layer protocol, and sometimes by the medium access control.
Dolby Digital Plus, also known as Enhanced AC-3, is a digital audio compression scheme developed by Dolby Labs for the transport and storage of multi-channel digital audio. It is a successor to Dolby Digital (AC-3), and has a number of improvements over that codec, including support for a wider range of data rates, an increased channel count, and multi-program support, as well as additional tools (algorithms) for representing compressed data and counteracting artifacts. Whereas Dolby Digital (AC-3) supports up to five full-bandwidth audio channels at a maximum bitrate of 640 kbit/s, E-AC-3 supports up to 15 full-bandwidth audio channels at a maximum bitrate of 6.144 Mbit/s.
In a digitally modulated signal or a line code, symbol rate, modulation rate or baud rate is the number of symbol changes, waveform changes, or signaling events across the transmission medium per unit of time. The symbol rate is measured in baud (Bd) or symbols per second. In the case of a line code, the symbol rate is the pulse rate in pulses per second. Each symbol can represent or convey one or several bits of data. The symbol rate is related to the gross bit rate, expressed in bits per second.
High-bit-rate digital subscriber line (HDSL) is a telecommunications protocol standardized in 1994. It was the first digital subscriber line (DSL) technology to use a higher frequency spectrum over copper, twisted pair cables. HDSL was developed to transport DS1 services at 1.544 Mbit/s and 2.048 Mbit/s over telephone local loops without a need for repeaters. Successor technology to HDSL includes HDSL2 and HDSL4, proprietary SDSL, and G.SHDSL.
MPEG-1 Audio Layer I, commonly abbreviated to MP1, is one of three audio formats included in the MPEG-1 standard. It is a deliberately simplified version of MPEG-1 Audio Layer II (MP2), created for applications where lower compression efficiency could be tolerated in return for a less complex algorithm that could be executed with simpler hardware requirements. While supported by most media players, the codec is considered largely obsolete, and replaced by MP2 or MP3.
SBC, or low-complexity subband codec, is an audio subband codec specified by the Bluetooth Special Interest Group (SIG) for the Advanced Audio Distribution Profile (A2DP). SBC is a digital audio encoder and decoder used to transfer data to Bluetooth audio output devices like headphones or loudspeakers. It can also be used on the Internet. It was designed with Bluetooth bandwidth limitations and processing power in mind to obtain a reasonably good audio quality at medium bit rates with low computational complexity. As of A2DP version 1.3, the Low Complexity Subband Coding remains the default codec and its implementation is mandatory for devices supporting that profile, but vendors are free to add their own codecs to match their needs.
aptX is a family of proprietary audio codec compression algorithms owned by Qualcomm, with a heavy emphasis on wireless audio applications.
In signal processing, sub-band coding (SBC) is any form of transform coding that breaks a signal into a number of different frequency bands, typically by using a fast Fourier transform, and encodes each one independently. This decomposition is often the first step in data compression for audio and video signals.
Codec 2 is a low-bitrate speech audio codec that is patent free and open source. Codec 2 compresses speech using sinusoidal coding, a method specialized for human speech. Bit rates of 3200 to 450 bit/s have been successfully created. Codec 2 was designed to be used for amateur radio and other high compression voice applications.