Transmission time

In telecommunication networks, the transmission time is the amount of time from the beginning until the end of a message transmission. In the case of a digital message, it is the time from the first bit until the last bit of a message has left the transmitting node. The packet transmission time in seconds can be obtained from the packet size in bits and the bit rate in bit/s as:

Packet transmission time = Packet size / Bit rate

Example: Assuming 100 Mbit/s Ethernet and a maximum packet size of 1526 bytes results in:

Maximum packet transmission time = 1526 × 8 bit / (100 × 10⁶ bit/s) ≈ 122 μs
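As a rough illustration, the example above can be reproduced with a short Python sketch; the frame size and bit rate below are the values assumed in the example, not fixed constants:

```python
def transmission_time(packet_size_bits: float, bit_rate_bps: float) -> float:
    """Time in seconds to push all bits of a packet onto the link."""
    return packet_size_bits / bit_rate_bps

# Values from the example above: a 1526-byte Ethernet frame on a 100 Mbit/s link.
frame_bits = 1526 * 8
bit_rate = 100e6  # 100 Mbit/s
print(f"{transmission_time(frame_bits, bit_rate) * 1e6:.1f} microseconds")  # ~122.1
```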

Propagation delay

The transmission time should not be confused with the propagation delay, which is the time it takes for the first bit to travel from the sender to the receiver (during this time the receiver is unaware that a message is being transmitted). The propagation speed depends on the physical medium of the link (that is, fiber optics, twisted-pair copper wire, etc.) and is in the range of 2 × 10⁸ meters per second for copper wires and 3 × 10⁸ m/s for wireless communication, which is equal to the speed of light. The ratio of the actual propagation speed to the speed of light is also called the velocity factor of the medium. The propagation delay of a physical link can be calculated by dividing the distance (the length of the medium) in meters by its propagation speed in m/s.

Propagation time = Distance / propagation speed

Example: Ethernet communication over a UTP copper cable with a maximum distance of 100 meters between computer and switching node results in:

Maximum link propagation delay ≈ 100 m / (200 000 000 m/s) = 0.5 μs
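A corresponding sketch for the propagation delay, assuming the 100 m cable length and the roughly 2 × 10⁸ m/s copper propagation speed used in the example:

```python
def propagation_delay(distance_m: float, speed_mps: float) -> float:
    """Time in seconds for one bit to travel the length of the link."""
    return distance_m / speed_mps

# Values from the example above: 100 m of UTP copper at roughly 2e8 m/s.
print(f"{propagation_delay(100, 2e8) * 1e6:.2f} microseconds")  # 0.50
```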

Packet delivery time

The packet delivery time or latency is the time from when the first bit leaves the transmitter until the last bit is received. In the case of a physical link, it can be expressed as:

Packet delivery time = Transmission time + Propagation delay

In the case of a network connection mediated by several physical links and forwarding nodes, the network delivery time depends on the sum of the delivery times of each link, and also on the packet queuing time (which varies with the traffic load from other connections) and the processing delay of the forwarding nodes. In wide-area networks, the delivery time is on the order of milliseconds.
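The single-link and multi-link delivery times can be sketched as follows; the hop count, queuing time and processing time are illustrative assumptions, not values given in the text:

```python
def link_delivery_time(packet_bits, bit_rate_bps, distance_m, speed_mps):
    """Delivery time over one physical link: transmission time plus propagation delay."""
    return packet_bits / bit_rate_bps + distance_m / speed_mps

def network_delivery_time(links, queuing_s=0.0, processing_s=0.0):
    """Sum of per-link delivery times plus total queuing and node processing delays.

    `links` is a list of (packet_bits, bit_rate_bps, distance_m, speed_mps) tuples.
    """
    return sum(link_delivery_time(*link) for link in links) + queuing_s + processing_s

# Two hypothetical 100 Mbit/s hops of 100 m each, plus 1 ms of queuing and processing.
hops = [(1526 * 8, 100e6, 100, 2e8)] * 2
print(f"{network_delivery_time(hops, queuing_s=0.5e-3, processing_s=0.5e-3) * 1e3:.3f} ms")
```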

Roundtrip time

The round-trip time or ping time is the time from the start of the transmission from the sending node until a response (for example an ACK packet or ping ICMP response) is received at the same node. It is affected by the packet delivery time as well as the data processing delay, which depends on the load on the responding node. If the data packet and the response packet have the same length, the roundtrip time can be expressed as:

Roundtrip time = 2 × Packet delivery time + processing delay

In the case of only one physical link, the above expression corresponds to:

Link roundtrip time = 2 × packet transmission time + 2 × propagation delay + processing delay

If the response packet is very short, the link roundtrip time can be approximated as:

Link roundtrip time ≈ packet transmission time + 2 × propagation delay + processing delay
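Under the same assumptions as the earlier examples, the single-link round-trip expressions can be sketched in code; the response size and processing delay below are hypothetical values chosen for illustration:

```python
def link_roundtrip_time(packet_bits, response_bits, bit_rate_bps,
                        distance_m, speed_mps, processing_s=0.0):
    """Round-trip time over one link: transmission time of request and response,
    propagation delay in both directions, and the responder's processing delay."""
    transmit = (packet_bits + response_bits) / bit_rate_bps
    propagate = 2 * distance_m / speed_mps
    return transmit + propagate + processing_s

# A full-size 1526-byte request and a short 64-byte response on the
# 100 Mbit/s, 100 m link, with an assumed 10 microsecond processing delay.
rtt = link_roundtrip_time(1526 * 8, 64 * 8, 100e6, 100, 2e8, processing_s=10e-6)
print(f"{rtt * 1e6:.1f} microseconds")  # ~138.2
```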

Throughput

The network throughput of a connection with flow control, for example a TCP connection, with a certain window size (buffer size), can be expressed as:

Network throughput ≈ Window size / roundtrip time

In the case of only one physical link between the sending and receiving nodes, this corresponds to:

Link throughput ≈ Bitrate × Transmission time / roundtrip time

The message delivery time or latency over a network depends on the message size in bits and the network throughput or effective data rate in bit/s, as:

Message delivery time = Message size / Network throughput
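A sketch of the window-limited throughput and the resulting message delivery time; the window size and round-trip time are illustrative assumptions:

```python
def window_limited_throughput(window_bits: float, rtt_s: float) -> float:
    """Approximate throughput of a flow-controlled connection (e.g. TCP):
    at most one window of data is outstanding per round-trip time."""
    return window_bits / rtt_s

def message_delivery_time(message_bits: float, throughput_bps: float) -> float:
    """Time to deliver a message at the effective data rate."""
    return message_bits / throughput_bps

# Hypothetical values: a 64 KiB TCP window over a path with a 20 ms round-trip time.
throughput = window_limited_throughput(64 * 1024 * 8, 20e-3)
print(f"Throughput ~ {throughput / 1e6:.1f} Mbit/s")                        # ~26.2
print(f"1 Mbyte message ~ {message_delivery_time(8e6, throughput):.2f} s")  # ~0.31
```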

See also

Latency
Transmission Control Protocol
Throughput
Frame Relay
Network topology
Wormhole flow control
Communication channel
Message switching
Measuring network throughput
Network performance
Flow control (data)
TCP tuning
Packet loss
Computer network
E-UTRA
Goodput
Low latency (capital markets)
ITU-T Y.156sam
ITU-T Y.1564
Time-Sensitive Networking
