Network performance refers to measures of service quality of a network as seen by the customer.
There are many different ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled and simulated instead of measured; one example is using state transition diagrams to model queuing performance, or using a network simulator.
The following measures are often considered important:
The available channel bandwidth and achievable signal-to-noise ratio determine the maximum possible throughput. It is not generally possible to send more data than dictated by the Shannon-Hartley Theorem.
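As a rough illustration (not taken from the article), the Shannon-Hartley limit C = B·log2(1 + S/N) can be evaluated in a few lines; the 20 MHz bandwidth and 30 dB SNR figures below are arbitrary assumptions:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity in bit/s.

    C = B * log2(1 + S/N), with S/N as a linear power ratio.
    """
    snr_linear = 10 ** (snr_db / 10)        # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a hypothetical 20 MHz channel at 30 dB SNR
print(f"{shannon_capacity(20e6, 30) / 1e6:.1f} Mbit/s")  # ~199.3 Mbit/s
```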
Throughput is the number of messages successfully delivered per unit time. Throughput is controlled by available bandwidth, as well as the available signal-to-noise ratio and hardware limitations. For the purposes of this article, throughput is measured from the arrival of the first bit of data at the receiver, to decouple the concept of throughput from the concept of latency. In discussions of this type, the terms 'throughput' and 'bandwidth' are often used interchangeably.
The time window is the period over which the throughput is measured. The choice of an appropriate time window often dominates throughput calculations, and whether or not latency is included in that window determines whether latency affects the measured throughput.
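A minimal sketch of how the measurement window interacts with latency, using made-up numbers (10 Mbit delivered after a 200 ms initial delay):

```python
def throughput_bits_per_s(bits_delivered: float, window_s: float,
                          latency_s: float = 0.0,
                          include_latency: bool = False) -> float:
    """Average throughput over a measurement window.

    If include_latency is True, the window also covers the initial delay
    before the first bit arrives, which lowers the average.
    """
    duration = window_s + (latency_s if include_latency else 0.0)
    return bits_delivered / duration

print(throughput_bits_per_s(10e6, 1.0))                             # 10.0 Mbit/s
print(throughput_bits_per_s(10e6, 1.0, 0.2, include_latency=True))  # ~8.33 Mbit/s
```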
The speed of light imposes a minimum propagation time on all electromagnetic signals. It is not possible to reduce the latency below

t = s / c_m

where s is the distance and c_m is the speed of light in the medium (roughly 200,000 km/s for most fiber or electrical media, depending on their velocity factor). This works out to roughly one additional millisecond of round-trip delay (RTT) per 100 km (62 miles) of distance between hosts.
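The formula above can be evaluated directly; this sketch assumes a velocity factor of about 0.67, typical of fiber and copper:

```python
def propagation_delay_s(distance_km: float, velocity_factor: float = 0.67) -> float:
    """Minimum one-way propagation delay t = s / c_m.

    c_m is the speed of light in the medium: c * velocity_factor,
    roughly 200,000 km/s for typical fiber or electrical media.
    """
    c_vacuum_km_s = 299_792.458
    c_medium_km_s = c_vacuum_km_s * velocity_factor
    return distance_km / c_medium_km_s

# 100 km of fiber: ~0.5 ms one way, ~1 ms round trip
one_way = propagation_delay_s(100)
print(f"one-way: {one_way * 1e3:.2f} ms, RTT: {2 * one_way * 1e3:.2f} ms")
```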
Other delays also occur in intermediate nodes. In packet switched networks delays can occur due to queueing.
Jitter is the undesired deviation from true periodicity of an assumed periodic signal in electronics and telecommunications, often in relation to a reference clock source. Jitter may be observed in characteristics such as the frequency of successive pulses, the signal amplitude, or phase of periodic signals. Jitter is a significant, and usually undesired, factor in the design of almost all communications links (e.g., USB, PCI-e, SATA, OC-48). In clock recovery applications it is called timing jitter. [1]
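One simple way to express deviation from periodicity (not the RFC 3550 definition) is the spread of packet inter-arrival intervals around their mean; the arrival times below are invented for illustration:

```python
from statistics import mean

def jitter_s(arrival_times_s: list[float]) -> float:
    """Mean absolute deviation of inter-arrival intervals from their mean."""
    intervals = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
    nominal = mean(intervals)
    return mean(abs(i - nominal) for i in intervals)

# Packets nominally every 20 ms, some arriving early or late
arrivals = [0.000, 0.021, 0.039, 0.062, 0.080]
print(f"jitter: {jitter_s(arrivals) * 1e3:.1f} ms")  # 2.0 ms
```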
In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors.
The bit error rate or bit error ratio (BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage.
The bit error probability pe is the expectation value of the BER. The BER can be considered as an approximate estimate of the bit error probability. This estimate is accurate for a long time interval and a high number of bit errors.
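The BER calculation itself is a simple ratio; the error and bit counts below are hypothetical:

```python
def bit_error_ratio(bit_errors: int, total_bits: int) -> float:
    """BER = bit errors / total transferred bits over the studied interval."""
    return bit_errors / total_bits

# Hypothetical measurement: 25 errored bits out of 10 million transferred
ber = bit_error_ratio(25, 10_000_000)
print(f"BER = {ber:.1e} ({ber:.6%})")  # 2.5e-06, i.e. 0.000250%
```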
All of the factors above, coupled with user requirements and user perceptions, play a role in determining the perceived 'fastness' or utility of a network connection. The relationship between throughput, latency, and user experience is most aptly understood in the context of a shared network medium, and as a scheduling problem.
For some systems, latency and throughput are coupled entities. In TCP/IP, latency can also directly affect throughput. In TCP connections, the large bandwidth-delay product of high latency connections, combined with relatively small TCP window sizes on many devices, effectively causes the throughput of a high latency connection to drop sharply with latency. This can be remedied with various techniques, such as increasing the TCP congestion window size, or more drastic solutions, such as packet coalescing, TCP acceleration, and forward error correction, all of which are commonly used for high latency satellite links.
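The window-limited behaviour described above follows from the fact that at most one window of data can be in flight per round trip; this sketch uses an assumed 64 KiB window and 600 ms satellite RTT:

```python
def max_tcp_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput when limited by the window size."""
    return window_bytes * 8 / rtt_s

def bandwidth_delay_product_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that must be in flight to keep a link of this bandwidth full."""
    return bandwidth_bps * rtt_s / 8

# A 64 KiB window over a 600 ms RTT caps throughput well below most link rates
print(f"{max_tcp_throughput_bps(65_535, 0.6) / 1e3:.0f} kbit/s")     # ~874 kbit/s
# Filling a 10 Mbit/s, 600 ms RTT path needs a much larger window
print(f"{bandwidth_delay_product_bytes(10e6, 0.6) / 1024:.0f} KiB")  # ~732 KiB
```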
TCP acceleration converts the TCP packets into a stream that is similar to UDP. Because of this, the TCP acceleration software must provide its own mechanisms to ensure the reliability of the link, taking the latency and bandwidth of the link into account, and both ends of the high latency link must support the method used.
In the Media Access Control (MAC) layer, performance issues such as throughput and end-to-end delay are also addressed.
Many systems can be characterized as dominated either by throughput limitations or by latency limitations in terms of end-user utility or experience. In some cases, hard limits such as the speed of light present unique problems to such systems and nothing can be done to correct this. Other systems allow for significant balancing and optimization for best user experience.
A telecom satellite in geosynchronous orbit imposes a path length of at least 71,000 km between transmitter and receiver,[2] which means a minimum delay between message request and message receipt, or latency, of 473 ms. This delay can be very noticeable and affects satellite phone service regardless of available throughput capacity.
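The figure can be reproduced from the path length alone, since a request/response exchange traverses the uplink-plus-downlink path twice at the speed of light:

```python
C_KM_S = 299_792.458          # speed of light in vacuum
path_km = 71_000              # transmitter -> satellite -> receiver

one_way_ms = path_km / C_KM_S * 1e3
round_trip_ms = 2 * one_way_ms
print(f"one-way: {one_way_ms:.0f} ms, request-to-reply: {round_trip_ms:.0f} ms")
# ~237 ms one way, ~474 ms between request and receipt (the ~473 ms quoted above)
```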
These long path length considerations are exacerbated when communicating with space probes and other long-range targets beyond Earth's atmosphere. The Deep Space Network implemented by NASA is one such system that must cope with these problems. The GAO has criticized the current architecture, largely on latency grounds. [3] Several different methods have been proposed to handle the intermittent connectivity and long delays between packets, such as delay-tolerant networking. [4]
At interstellar distances, the difficulties in designing radio systems that can achieve any throughput at all are massive. In these cases, maintaining communication is a bigger issue than how long that communication takes.
Transportation is concerned almost entirely with throughput, which is why physical deliveries of backup tape archives are still largely done by vehicle.
Latency, from a general point of view, is a time delay between the cause and the effect of some physical change in the system being observed. Lag, as it is known in gaming circles, refers to the latency between the input to a simulation and the visual or auditory response, often occurring because of network delay in online games.
Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.
Network throughput refers to the rate of message delivery over a communication channel, such as Ethernet or packet radio, in a communication network. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in data packets per second or data packets per time slot.
In electronics and telecommunications, jitter is the deviation from true periodicity of a presumably periodic signal, often in relation to a reference clock signal. In clock recovery applications it is called timing jitter. Jitter is a significant, and usually undesired, factor in the design of almost all communications links.
A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.
FAST TCP is a TCP congestion avoidance algorithm especially targeted at long-distance, high latency links, developed at the Netlab, California Institute of Technology and now being commercialized by FastSoft. FastSoft was acquired by Akamai Technologies in 2012.
Satellite Internet access is Internet access provided through communication satellites; if it can sustain high speeds, it is termed satellite broadband. Modern consumer grade satellite Internet service is typically provided to individual users through geostationary satellites that can offer relatively high data speeds, with newer satellites using the Ku band to achieve downstream data speeds up to 506 Mbit/s. In addition, new satellite Internet constellations are being developed in low Earth orbit to enable low-latency Internet access from space.
Throughput of a network can be measured using various tools available on different platforms. This page explains the theory behind what these tools set out to measure and the issues regarding these measurements.
ZMODEM is an inline file transfer protocol developed by Chuck Forsberg in 1986, in a project funded by Telenet in order to improve file transfers on their X.25 network. In addition to dramatically improved performance compared to older protocols, ZMODEM offered restartable transfers, auto-start by the sender, an expanded 32-bit CRC, and control character quoting supporting 8-bit clean transfers, allowing it to be used on networks that would not pass control characters.
Transmission Control Protocol (TCP) uses a congestion control algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and a congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.
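A toy sketch of the additive-increase/multiplicative-decrease idea (slow start below a threshold, +1 MSS per round above it, halving on loss); this is an assumption-laden simplification, not any real stack's implementation:

```python
def aimd_window(events: list[str], cwnd: float = 1.0,
                ssthresh: float = 64.0) -> list[float]:
    """Toy AIMD congestion window, in MSS units.

    'ack'  -> double per round below ssthresh (slow start), else +1 MSS
    'loss' -> halve ssthresh and drop the window to it (multiplicative decrease)
    """
    history = []
    for event in events:
        if event == "ack":
            cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        elif event == "loss":
            ssthresh = max(cwnd / 2, 1.0)
            cwnd = ssthresh
        history.append(cwnd)
    return history

# Window grows quickly, collapses on loss, then grows additively again
print(aimd_window(["ack"] * 8 + ["loss"] + ["ack"] * 3))
```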
TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. However, blindly following instructions without understanding their real consequences can hurt performance as well.
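One common tuning knob is requesting socket buffers large enough to cover the bandwidth-delay product; this sketch uses the standard socket options, with an assumed 100 Mbit/s, 100 ms RTT path (the operating system may clamp the requested sizes):

```python
import socket

# Bandwidth-delay product for an assumed 100 Mbit/s link with 100 ms RTT
BDP_BYTES = int(100e6 * 0.1 / 8)   # 1.25 MB

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)

# Report what the OS actually granted (it may cap or adjust the request)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF),
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.close()
```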
Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is either caused by errors in data transmission, typically across wireless networks, or network congestion. Packet loss is measured as a percentage of packets lost with respect to packets sent.
Protocol spoofing is used in data communications to improve performance in situations where an existing protocol is inadequate, for example due to long delays or high error rates.
Capacity management's goal is to ensure that information technology resources are sufficient to meet upcoming business requirements cost-effectively. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.
Bandwidth management is the process of measuring and controlling the communications on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network. Bandwidth is described by bit rate and measured in units of bits per second (bit/s) or bytes per second (B/s).
In computer networks, goodput is the application-level throughput of a communication; i.e. the number of useful information bits delivered by the network to a certain destination per unit of time. The amount of data considered excludes protocol overhead bits as well as retransmitted data packets. This is related to the amount of time from the first bit of the first packet sent until the last bit of the last packet is delivered.
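Goodput is again a simple ratio over useful payload only; the 1 MiB transfer below is a hypothetical example, with headers and retransmissions already excluded from the payload count:

```python
def goodput_bps(payload_bytes: int, transfer_time_s: float) -> float:
    """Useful application-level bits delivered per second,
    excluding protocol overhead and retransmitted data."""
    return payload_bytes * 8 / transfer_time_s

# Hypothetical file transfer: 1 MiB of useful data delivered in 2.5 s
print(f"{goodput_bps(1_048_576, 2.5) / 1e6:.2f} Mbit/s")  # ~3.36 Mbit/s
```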
ITU-T Y.156sam Ethernet Service Activation Test Methodology is a draft recommendation under study by the ITU-T describing a new testing methodology adapted to the multiservice reality of packet-based networks.
Bufferbloat is a cause of high latency and jitter in packet-switched networks caused by excess buffering of packets. Bufferbloat can also cause packet delay variation, as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing.
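The delay an oversized buffer can add is bounded by the time needed to drain it at the link rate; the 1 MiB buffer and 10 Mbit/s uplink below are assumed figures for illustration:

```python
def worst_case_queueing_delay_s(buffer_bytes: int, link_rate_bps: float) -> float:
    """Time to drain a full buffer: an upper bound on the queueing delay
    an oversized, unmanaged buffer can add."""
    return buffer_bytes * 8 / link_rate_bps

# A 1 MiB buffer in front of a 10 Mbit/s uplink can add roughly 840 ms of delay
print(f"{worst_case_queueing_delay_s(1_048_576, 10e6) * 1e3:.0f} ms")
```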
ITU-T Y.1564 is an Ethernet service activation test methodology, which is the new ITU-T standard for turning up, installing and troubleshooting Ethernet-based services. It is the only standard test methodology that allows for complete validation of Ethernet service-level agreements (SLAs) in a single test.