Transmission Time Interval

The Transmission Time Interval (TTI) is a parameter in UMTS (and other digital telecommunication networks) related to the encapsulation of data from higher layers into frames for transmission on the radio link layer. The TTI is the duration of a transmission on the radio link, and it is related to the size of the data blocks passed from the higher network layers to the radio link layer.

To combat errors due to fading and interference on the radio link, data is divided at the transmitter into blocks, and the bits within each block are encoded and interleaved. The time required to transmit one such block determines the TTI. At the receiver, all bits of a block must arrive before they can be deinterleaved and decoded; once the bits are decoded, the receiver can estimate the bit error rate (BER). Because the shortest decodable transmission is one TTI, the shortest period over which the BER can be estimated is also one TTI. Thus, in networks whose link adaptation is based on the estimated BER, the interval between reports of the estimated performance, which are used to adapt to the conditions on the link, is at least one TTI. To adapt quickly to changing radio link conditions, a system must have short TTIs; to benefit more from interleaving and to increase the efficiency of error-correction and compression techniques, a system must, in general, have long TTIs. These two conflicting requirements determine the choice of the TTI.
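
The interleaving step described above can be sketched in a few lines. This is a toy rectangular block interleaver for illustration only, not the actual UMTS interleaver: bits are written into a matrix row by row and read out column by column, so a burst of errors on the channel is spread across the block before decoding.

```python
# Toy rectangular block interleaver (illustrative, not the UMTS algorithm).

def interleave(bits, rows, cols):
    """Write bits row by row, read them out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column by column, read row by row."""
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

block = list(range(12))                  # one transport block's bit positions
tx = interleave(block, rows=3, cols=4)   # order sent over the air
rx = deinterleave(tx, rows=3, cols=4)    # receiver restores the original order
assert rx == block
# Three consecutive transmitted bits come from three different rows,
# so a short burst error is spread across the block.
```

Note that the receiver can only run `deinterleave` once the entire block has arrived, which is exactly why the TTI bounds how quickly the BER can be estimated.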

In UMTS Release '99 the TTI can be 10 ms (the shortest), 20 ms, 40 ms, or 80 ms. In UMTS Release 5 the TTI for HSDPA is reduced to 2 ms. This provides faster response to link conditions and allows the system to quickly schedule transmissions to mobiles that temporarily enjoy better-than-usual link conditions. As a result, the system mostly transmits data over links that are better than the average conditions, so the bit rates are mostly higher than the average conditions would allow. This leads to an increase in system capacity.
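
The capacity argument can be illustrated with a toy simulation (invented numbers, not any real scheduler or channel model): if in each short TTI the base station serves whichever user currently has the best channel, the quality of the links actually used for transmission exceeds the average channel quality.

```python
# Toy "max C/I" scheduling simulation (illustrative numbers only).
import random

random.seed(1)
users = 4
ttis = 10000

served_quality = []
for _ in range(ttis):
    # Hypothetical per-TTI channel quality for each user.
    quality = [random.uniform(0.5, 1.5) for _ in range(users)]
    # Each TTI, schedule the user whose channel is currently best.
    served_quality.append(max(quality))

avg_channel = 1.0                        # mean of uniform(0.5, 1.5)
avg_served = sum(served_quality) / ttis
# The scheduled links are consistently better than the average channel.
assert avg_served > avg_channel
```

The shorter the TTI, the more often the scheduler can re-evaluate which user's channel is momentarily best, which is the mechanism behind the capacity gain described above.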

In 1xEV-DO technology, a slot, which is not quite the same thing as a TTI but fulfills a somewhat similar function, is 1.667 ms long. In 1xEV-DV it has a variable length of 1.25 ms, 2.5 ms, 5 ms, or 10 ms.

Related Research Articles

Code-division multiple access: Channel access method used by various radio communication technologies

Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies. To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme.
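
The coding scheme can be illustrated with a toy two-user example using orthogonal Walsh codes, assuming ideal synchronization and a noiseless channel (real CDMA systems are far more involved): both users transmit at once, yet each receiver recovers its own bit by correlating with its own code.

```python
# Toy two-user CDMA example with orthogonal Walsh codes of length 4.
code_a = [+1, +1, +1, +1]
code_b = [+1, -1, +1, -1]

bit_a, bit_b = +1, -1        # BPSK symbols for users A and B

# Each user spreads its bit by its code; the channel adds the chips.
channel = [bit_a * ca + bit_b * cb for ca, cb in zip(code_a, code_b)]

# Despreading: correlate the received chips with the desired user's code.
rx_a = sum(r * c for r, c in zip(channel, code_a)) / len(code_a)
rx_b = sum(r * c for r, c in zip(channel, code_b)) / len(code_b)

# Orthogonality of the codes makes the other user's signal cancel out.
assert (rx_a, rx_b) == (bit_a, bit_b)
```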

Enhanced Data rates for GSM Evolution: Digital mobile phone technology

Enhanced Data rates for GSM Evolution (EDGE), also known as Enhanced GPRS (EGPRS), IMT Single Carrier (IMT-SC), or Enhanced Data rates for Global Evolution, is a digital mobile phone technology that allows improved data transmission rates as a backward-compatible extension of GSM. EDGE is considered a pre-3G radio technology and is part of ITU's 3G definition. EDGE was deployed on GSM networks beginning in 2003, initially by Cingular in the United States.

Error detection and correction: Techniques that enable reliable delivery of digital data over unreliable communication channels

In information theory and coding theory with applications in computer science and telecommunication, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.

In telecommunications, orthogonal frequency-division multiplexing (OFDM) is a type of digital transmission and a method of encoding digital data on multiple carrier frequencies. OFDM has developed into a popular scheme for wideband digital communication, used in applications such as digital television and audio broadcasting, DSL internet access, wireless networks, power line networks, and 4G/5G mobile communications.

In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors.
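
Counting bit errors, and the bit error rate derived from them, follows directly from this definition; a minimal sketch with made-up bit streams:

```python
# Bit errors and BER: compare the received stream with the sent stream.
sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 0, 1, 0, 1, 1, 0]   # two bits altered in transit

bit_errors = sum(s != r for s, r in zip(sent, received))
ber = bit_errors / len(sent)           # errors divided by bits compared
assert (bit_errors, ber) == (2, 0.25)
```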

Digital Audio Broadcasting: Digital radio standard

Digital Audio Broadcasting (DAB) is a digital radio standard for broadcasting digital audio radio services in many countries around the world, defined and promoted by the WorldDAB forum. The standard is dominant in Europe and is also used in parts of Africa, Asia, and Australia; other worldwide terrestrial digital radio standards include HD Radio, ISDB-Tb, DRM, and the related DMB.

Fading

In wireless communications, fading is the variation of the attenuation of a signal with variables such as time, geographical position, and radio frequency. Fading is often modeled as a random process. A fading channel is a communication channel that experiences fading. In wireless systems, fading may be due to multipath propagation, referred to as multipath-induced fading, to weather, or to shadowing from obstacles affecting the wave propagation, sometimes referred to as shadow fading.

The data link layer, or layer 2, is the second layer of the seven-layer OSI model of computer networking. This layer is the protocol layer that transfers data between nodes on a network segment across the physical layer. The data link layer provides the functional and procedural means to transfer data between network entities and may also provide the means to detect and possibly correct errors that can occur in the physical layer.

cdmaOne: First CDMA-based digital cellular technology

Interim Standard 95 (IS-95) was the first CDMA-based digital cellular technology. It was developed by Qualcomm and later adopted as a standard by the Telecommunications Industry Association in its TIA/EIA/IS-95 release, published in 1995. The proprietary name for IS-95 is cdmaOne.

Radio Data System (RDS) is a communications protocol standard for embedding small amounts of digital information in conventional FM radio broadcasts. RDS standardizes several types of information transmitted, including time, station identification and program information.

DVB-T, short for Digital Video Broadcasting – Terrestrial, is the DVB European-based consortium standard for the broadcast transmission of digital terrestrial television, first published in 1997 and first broadcast in Singapore in February 1998. This system transmits compressed digital audio, digital video and other data in an MPEG transport stream, using coded orthogonal frequency-division multiplexing modulation. It is also the format widely used worldwide for electronic news gathering, for transmission of video and audio from a mobile newsgathering vehicle to a central receive point. It is also used in the US by amateur television operators.

A rake receiver is a radio receiver designed to counter the effects of multipath fading. It does this by using several "sub-receivers" called fingers, that is, several correlators each assigned to a different multipath component. Each finger independently decodes a single multipath component; at a later stage the contributions of all fingers are combined in order to make the most use of the different transmission characteristics of each transmission path. This can result in a higher signal-to-noise ratio (or Eb/N0) in a multipath environment than in a "clean" environment.
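
The combining stage can be sketched as maximal-ratio combining over hypothetical fingers (illustrative numbers, not a real channel model): each finger's estimate is weighted by its path gain, so strong paths dominate and weak, noisy paths contribute less.

```python
# Maximal-ratio combining sketch for three hypothetical rake fingers.
gains = [1.0, 0.6, 0.3]       # made-up path gains, one per finger
symbol = 1.0                  # transmitted symbol
noise = [0.2, -0.3, 0.1]      # made-up per-finger noise samples

# Each finger sees the same symbol scaled by its path gain, plus noise.
fingers = [g * symbol + n for g, n in zip(gains, noise)]

# Weight each finger by its gain, then normalize by the total gain power.
combined = sum(g * f for g, f in zip(gains, fingers)) / sum(g * g for g in gains)

# The combined estimate beats the weakest finger's estimate on its own.
assert abs(combined - symbol) < abs(fingers[2] / gains[2] - symbol)
```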

Network performance refers to measures of service quality of a network as seen by the customer.

Olivia MFSK is an amateur radioteletype protocol, using multiple frequency-shift keying (MFSK) and designed to work in difficult conditions on shortwave bands. The signal can be accurately received even if the surrounding noise is 10 dB stronger. It is commonly used by amateur radio operators to reliably transmit ASCII characters over noisy channels using the high frequency (3–30 MHz) spectrum. The effective data rate of the Olivia MFSK protocol is 150 characters/minute.

Link adaptation, comprising adaptive coding and modulation (ACM) and others, is a term used in wireless communications to denote the matching of the modulation, coding and other signal and protocol parameters to the conditions on the radio link. For example, WiMAX uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) according to the quality of the radio channel, and thus the bit rate and robustness of data transmission. The process of link adaptation is a dynamic one and the signal and protocol parameters change as the radio link conditions change—for example in High-Speed Downlink Packet Access (HSDPA) in Universal Mobile Telecommunications System (UMTS) this can take place every 2 ms.
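
The selection step can be sketched with a hypothetical MCS table (the thresholds and entries below are invented for illustration, not taken from any specification): each TTI, the measured channel quality is mapped to the fastest modulation and coding scheme the link can still sustain.

```python
# Hypothetical link-adaptation table: (min SNR in dB, modulation, code rate).
MCS_TABLE = [
    (17.0, "64-QAM", 3 / 4),
    (11.0, "16-QAM", 3 / 4),
    (5.0,  "QPSK",   1 / 2),
    (-99.0, "QPSK",  1 / 4),   # most robust fallback entry
]

def select_mcs(snr_db):
    """Pick the fastest MCS whose SNR threshold the link still meets."""
    for threshold, modulation, rate in MCS_TABLE:
        if snr_db >= threshold:
            return modulation, rate
    raise ValueError("table must contain a fallback entry")

assert select_mcs(20.0) == ("64-QAM", 3 / 4)   # good channel: fast MCS
assert select_mcs(8.0) == ("QPSK", 1 / 2)      # poor channel: robust MCS
```

In HSDPA this re-selection can happen every TTI, i.e. every 2 ms, which is why the short TTI matters for link adaptation.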

Hybrid automatic repeat request is a combination of high-rate forward error correction (FEC) and automatic repeat request (ARQ) error control. In standard ARQ, redundant bits are added to data to be transmitted using an error-detecting (ED) code such as a cyclic redundancy check (CRC). Receivers detecting a corrupted message will request a new message from the sender. In hybrid ARQ, the original data is encoded with an FEC code, and the parity bits are either immediately sent along with the message or only transmitted upon request when a receiver detects an erroneous message. The ED code may be omitted when a code is used that can perform both forward error correction and error detection, such as a Reed–Solomon code. The FEC code is chosen to correct an expected subset of all errors that may occur, while the ARQ method is used as a fall-back to correct errors that are uncorrectable using only the redundancy sent in the initial transmission. As a result, hybrid ARQ performs better than ordinary ARQ in poor signal conditions, but in its simplest form this comes at the expense of significantly lower throughput in good signal conditions. There is typically a signal quality cross-over point below which simple hybrid ARQ is better, and above which basic ARQ is better.
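
The decision flow can be sketched with toy stand-ins for the real codes (a 3x repetition code in place of the FEC and a single parity bit in place of a CRC): the receiver first lets the FEC try to correct the block, and only counts a transmission as failed, triggering a retransmission, if the check still fails afterwards.

```python
# Toy hybrid ARQ sketch: repetition code as FEC, parity bit as "CRC".

def fec_encode(bits):
    return [b for b in bits for _ in range(3)]       # 3x repetition code

def fec_decode(coded):
    # Majority vote over each group of 3 repeated bits.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

def with_check(bits):
    return bits + [sum(bits) % 2]                    # toy parity check

def check_ok(bits):
    return sum(bits[:-1]) % 2 == bits[-1]

def harq_receive(transmissions):
    """Try each (re)transmission until the decoded block passes its check."""
    for attempt, coded in enumerate(transmissions, start=1):
        decoded = fec_decode(coded)
        if check_ok(decoded):
            return decoded[:-1], attempt
    return None, len(transmissions)

frame = with_check([1, 0, 1, 1])
tx1 = fec_encode(frame)
tx1[0] ^= 1                       # one chip error: the FEC absorbs it
data, attempts = harq_receive([tx1])
assert data == [1, 0, 1, 1] and attempts == 1   # no retransmission needed
```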

E-UTRA: 3GPP interface

E-UTRA, an acronym for Evolved Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access, is the air interface of the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) upgrade path for mobile networks. In early drafts of the 3GPP LTE specification it was also referred to as the 3GPP work item on Long Term Evolution. E-UTRAN is the initialism of Evolved UMTS Terrestrial Radio Access Network and is the combination of E-UTRA, user equipment (UE), and the E-UTRAN Node B or Evolved Node B (eNodeB).

In computing, telecommunication, information theory, and coding theory, an error correction code (ECC), sometimes error-correcting code, is used for controlling errors in data over unreliable or noisy communication channels. The central idea is that the sender encodes the message with redundant information in the form of an ECC. The redundancy allows the receiver to detect a limited number of errors that may occur anywhere in the message, and often to correct these errors without retransmission. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code.
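
The Hamming (7,4) code mentioned above is small enough to implement directly: 4 data bits are encoded into 7 bits, and the three parity checks locate any single bit error so the receiver can correct it without retransmission.

```python
# Hamming (7,4): encode 4 data bits into 7, correct any single bit error.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(c):
    c = list(c)
    # Each parity check covers the positions whose 1-based index has
    # the corresponding bit set (1, 2, or 4).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]       # extract the data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                              # flip one bit in transit
assert hamming74_correct(code) == [1, 0, 1, 1]
```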

High Speed Packet Access: Communications protocols

High Speed Packet Access (HSPA) is an amalgamation of two mobile protocols, High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA), that extends and improves the performance of existing 3G mobile telecommunication networks using the WCDMA protocols. A further improved 3GPP standard, Evolved High Speed Packet Access, was released late in 2008 with subsequent worldwide adoption beginning in 2010. The newer standard allows bit-rates to reach as high as 337 Mbit/s in the downlink and 34 Mbit/s in the uplink. However, these speeds are rarely achieved in practice.

Evolved High Speed Packet Access: Technical standard

Evolved High Speed Packet Access, also known as HSPA+, HSPA (Plus) or HSPAP, is a technical standard for wireless broadband telecommunication. It is the second phase of HSPA, introduced in 3GPP Release 7 and further improved in later 3GPP releases. HSPA+ can achieve data rates of up to 42.2 Mbit/s. It introduces antenna array technologies such as beamforming and multiple-input multiple-output communications (MIMO). Beamforming focuses the transmitted power of an antenna in a beam towards the user's direction. MIMO uses multiple antennas at the sending and receiving side. Further releases of the standard have introduced dual carrier operation, i.e. the simultaneous use of two 5 MHz carriers. HSPA+ is an evolution of HSPA that upgrades the existing 3G network and provides a method for telecom operators to migrate towards 4G speeds that are more comparable to the initially available speeds of newer LTE networks without deploying a new radio interface. HSPA+ should not be confused with LTE, however, which uses an air interface based on orthogonal frequency-division modulation and multiple access.