In telecommunications, round-trip delay (RTD) or round-trip time (RTT) is the amount of time it takes for a signal to be sent plus the amount of time it takes for an acknowledgement of that signal to be received. This time delay includes the propagation times for the paths between the two communication endpoints.[1] In the context of computer networks, the signal is typically a data packet. RTT is commonly used interchangeably with ping time, which can be determined with the ping command. However, ping time may differ from the RTT experienced by other protocols, since the payload and priority associated with the ICMP messages used by ping may differ from those of other traffic.
End-to-end delay is the length of time it takes for a signal to travel in one direction and is often approximated as half the RTT.
RTT is a measure of the amount of time taken for an entire message to be sent to a destination and for a reply to be sent back to the sender. The time to send the entire message to the destination is known as the network latency, and thus RTT is twice the network latency plus a processing delay at the destination. Network latency is in turn made up of several sources of delay: processing delay, queueing delay, transmission time and propagation time. Propagation time depends on the distance between the endpoints, while transmission time is proportional to the message size divided by the bandwidth. Higher-bandwidth networks therefore have lower transmission time, but the propagation time remains unchanged; RTT does fall with increased bandwidth, but the remaining delay is increasingly dominated by propagation time.[2]: 90–91
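As a rough illustration of how these components combine, the sketch below uses assumed figures (a 1500-byte packet, a 100 Mbit/s link and a 2000 km fibre path) rather than measured values:

```python
# Illustrative assumptions only: one packet on one link, no intermediate hops.
packet_size_bits = 1500 * 8        # a 1500-byte packet
bandwidth_bps = 100e6              # 100 Mbit/s link
distance_m = 2_000_000             # 2000 km path
propagation_speed = 2e8            # roughly 2/3 the speed of light in fibre
processing_delay = 50e-6           # assumed processing delay (seconds)
queueing_delay = 200e-6            # assumed queueing delay (seconds)

transmission_time = packet_size_bits / bandwidth_bps   # message size / bandwidth
propagation_time = distance_m / propagation_speed      # depends only on distance

latency = processing_delay + queueing_delay + transmission_time + propagation_time
rtt = 2 * latency + 100e-6         # plus an assumed processing delay at the destination

print(f"transmission time: {transmission_time * 1e3:.3f} ms")
print(f"propagation time:  {propagation_time * 1e3:.3f} ms")
print(f"approximate RTT:   {rtt * 1e3:.3f} ms")
# Doubling the bandwidth halves the transmission time but leaves the
# propagation time unchanged, so the RTT falls only slightly.
```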
Networks with both high bandwidth and a high RTT (and thus high bandwidth-delay product) can have large amounts of data in transit at any given time. Such long fat networks require a special protocol design. [3] One example is the TCP window scale option.
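The following sketch, using assumed figures of a 1 Gbit/s link with a 100 ms RTT, shows why the 16-bit TCP window field is too small for such a path and a window scale shift is needed:

```python
import math

# Illustrative assumptions, not measurements: a "long fat network".
bandwidth_bps = 1e9      # 1 Gbit/s link
rtt_s = 0.100            # 100 ms round-trip time

# Bandwidth-delay product: the amount of data in transit at any given time.
bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"data in flight: {bdp_bytes / 1e6:.1f} MB")

# The TCP window field is 16 bits (at most 65535 bytes), so the window
# scale option must multiply it by 2**shift to cover the whole BDP.
shift = max(0, math.ceil(math.log2(bdp_bytes / 65535)))
print(f"window scale shift needed: {shift}")     # 8 for these figures
```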
The RTT was originally estimated in TCP by:

RTT = (α × old_RTT) + ((1 − α) × new_round_trip_sample)

where α is a constant weighting factor (0 ≤ α < 1).[4] Choosing a value for α close to 1 makes the weighted average immune to changes that last a short time (e.g., a single segment that encounters long delay). Choosing a value for α close to 0 makes the weighted average respond to changes in delay very quickly. This was improved by the Jacobson/Karels algorithm, which takes the standard deviation of the samples into account as well. Once a new RTT is calculated, it is entered into the equation above to obtain an average RTT for that connection, and the procedure continues for every new calculation.
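A minimal sketch of such an estimator is shown below. The parameter values (α = 0.875, β = 0.25 and the fourfold deviation margin) follow common practice for the Jacobson/Karels variant and are assumptions rather than values given in this article:

```python
class RttEstimator:
    def __init__(self, first_sample, alpha=0.875, beta=0.25):
        self.alpha = alpha               # weighting factor, 0 <= alpha < 1
        self.beta = beta                 # deviation gain (Jacobson/Karels)
        self.srtt = first_sample         # smoothed round-trip time estimate
        self.rttvar = first_sample / 2   # mean deviation estimate

    def update(self, sample):
        # Jacobson/Karels: update the deviation against the previous estimate.
        self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(sample - self.srtt)
        # Original estimator: exponentially weighted moving average of samples.
        self.srtt = self.alpha * self.srtt + (1 - self.alpha) * sample

    def rto(self):
        # Retransmission timeout: smoothed RTT plus a deviation margin.
        return self.srtt + 4 * self.rttvar


est = RttEstimator(first_sample=0.100)           # seconds
for sample in (0.110, 0.095, 0.300, 0.105):      # made-up RTT samples
    est.update(sample)
print(f"SRTT = {est.srtt:.3f} s, RTO = {est.rto():.3f} s")
```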
Accurate round-trip time measurements over Wi-Fi using IEEE 802.11mc are the basis for the Wi-Fi positioning system.
Latency, from a general point of view, is a time delay between the cause and the effect of some physical change in the system being observed. Lag, as it is known in gaming circles, refers to the latency between the input to a simulation and the visual or auditory response, often occurring because of network delay in online games.
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.
Network throughput refers to the rate of message delivery over a communication channel in a communication network, such as Ethernet or packet radio. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in packets per second or data packets per time slot.
Enhanced Interior Gateway Routing Protocol (EIGRP) is an advanced distance-vector routing protocol that is used on a computer network for automating routing decisions and configuration. The protocol was designed by Cisco Systems as a proprietary protocol, available only on Cisco routers. In 2013, Cisco permitted other vendors to freely implement a limited version of EIGRP with some of its associated features such as High Availability (HA), while withholding other EIGRP features such as EIGRP stub, needed for DMVPN and large-scale campus deployment. Information needed for implementation was published with informational status as RFC 7868 in 2016, which did not advance to Internet Standards Track level, and allowed Cisco to retain control of the EIGRP protocol.
A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.
FAST TCP is a TCP congestion avoidance algorithm especially targeted at long-distance, high latency links, developed at the Netlab, California Institute of Technology and now being commercialized by FastSoft. FastSoft was acquired by Akamai Technologies in 2012.
Throughput of a network can be measured using various tools available on different platforms. This page explains the theory behind what these tools set out to measure and the issues regarding these measurements.
End-to-end delay or one-way delay (OWD) refers to the time taken for a packet to be transmitted across a network from source to destination. It is a common term in IP network monitoring, and differs from round-trip time (RTT) in that only the path in one direction, from source to destination, is measured.
Network performance refers to measures of service quality of a network as seen by the customer.
Transmission Control Protocol (TCP) uses a congestion control algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and a congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.
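The toy sketch below illustrates only the basic AIMD idea together with slow start; real implementations differ in many details, and the thresholds and the use of segment counts rather than bytes are simplifying assumptions:

```python
# Simplified: cwnd is counted in segments, loss is detected at most once
# per round trip, and fast recovery and timeouts are ignored.
def next_cwnd(cwnd, ssthresh, loss_detected):
    """Return (cwnd, ssthresh) for the next round trip."""
    if loss_detected:
        ssthresh = max(cwnd / 2, 2)    # multiplicative decrease
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2                      # slow start: exponential growth
    else:
        cwnd += 1                      # congestion avoidance: additive increase
    return cwnd, ssthresh


cwnd, ssthresh = 1, 64
for rtt, loss in enumerate([False] * 8 + [True] + [False] * 4):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
    print(f"RTT {rtt}: cwnd={cwnd:g} ssthresh={ssthresh:g}")
```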
TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. However, blindly following instructions without understanding their real consequences can hurt performance as well.
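One common tuning step is sizing socket buffers to roughly the bandwidth-delay product so the sender can keep the path full. The sketch below assumes a 1 Gbit/s, 80 ms path; the operating system may clamp the requested values:

```python
import socket

# Assumed path characteristics for the example.
bandwidth_bps = 1e9       # 1 Gbit/s
rtt_s = 0.080             # 80 ms
bdp_bytes = int(bandwidth_bps * rtt_s / 8)   # bandwidth-delay product

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request send and receive buffers close to the BDP so the window is not
# limited by buffer space; the OS may clamp these values.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
print("requested buffer size:", bdp_bytes, "bytes")
sock.close()
```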
Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is either caused by errors in data transmission, typically across wireless networks, or network congestion. Packet loss is measured as a percentage of packets lost with respect to packets sent.
Fairness measures or metrics are used in network engineering to determine whether users or applications are receiving a fair share of system resources. There are several mathematical and conceptual definitions of fairness.
In data communications, the bandwidth-delay product is the product of a data link's capacity and its round-trip delay time. The result, an amount of data measured in bits, is equivalent to the maximum amount of data on the network circuit at any given time, i.e., data that has been transmitted but not yet acknowledged. The bandwidth-delay product was originally proposed as a rule of thumb for sizing router buffers in conjunction with the congestion avoidance algorithm random early detection (RED).
In computer networks, goodput is the application-level throughput of a communication; i.e. the number of useful information bits delivered by the network to a certain destination per unit of time. The amount of data considered excludes protocol overhead bits as well as retransmitted data packets. It is measured over the time from when the first bit of the first packet is sent until the last bit of the last packet is delivered.
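A small sketch with assumed figures makes the distinction concrete: protocol headers and retransmitted packets count toward throughput but not toward goodput:

```python
# Assumed figures for a single file transfer.
file_bytes = 10_000_000         # useful application data
overhead_bytes = 400_000        # protocol headers
retransmitted_bytes = 250_000   # data that had to be sent again
transfer_time_s = 9.0           # first bit sent to last bit delivered

throughput = (file_bytes + overhead_bytes + retransmitted_bytes) * 8 / transfer_time_s
goodput = file_bytes * 8 / transfer_time_s
print(f"throughput: {throughput / 1e6:.2f} Mbit/s")
print(f"goodput:    {goodput / 1e6:.2f} Mbit/s")
```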
In telecommunication networks, the transmission time is the amount of time from the beginning until the end of a message transmission. In the case of a digital message, it is the time from the first bit until the last bit of a message has left the transmitting node. The packet transmission time in seconds can be obtained from the packet size in bits and the bit rate in bit/s as:

packet transmission time = packet size / bit rate
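For example, under the assumption of a 1250-byte packet on a 10 Mbit/s link:

```python
packet_size_bits = 1250 * 8        # a 1250-byte packet (assumed)
bit_rate_bps = 10e6                # 10 Mbit/s link (assumed)
transmission_time = packet_size_bits / bit_rate_bps
print(f"{transmission_time * 1e3:.1f} ms")   # 1.0 ms
```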
Bufferbloat is a cause of high latency and jitter in packet-switched networks caused by excess buffering of packets. Bufferbloat can also cause packet delay variation, as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing.
Infineta Systems was a company that made WAN optimization products for high performance, latency-sensitive network applications. The company advertised that it allowed application data rates to exceed the nominal data rate of the link. Infineta Systems ceased operations by February 2013, a liquidator was appointed, and its products are no longer manufactured, sold or distributed.
Ultra-wideband impulse radio ranging is a wireless positioning technology based on the IEEE 802.15.4z standard, a wireless communication protocol introduced by IEEE for systems operating in unlicensed spectrum with extremely large bandwidth transceivers. UWB enables very accurate ranging without introducing significant interference with narrowband systems. To achieve these stringent requirements, UWB-IR systems exploit their large available bandwidth, which guarantees very accurate timing and robustness against multipath, especially in indoor environments. The available bandwidth also enables UWB systems to spread the signal power over a large spectrum, avoiding narrowband interference.