The Media Delivery Index (MDI) is a set of measures used both to monitor the quality of a delivered video stream and to show the system margin of IPTV systems, by providing an accurate measurement of jitter and delay at the network (Internet Protocol, IP) level, which are the main causes of quality loss. Identifying and quantifying such problems in these networks is key to maintaining high-quality video delivery and to warning system operators early enough to allow corrective action. [1]
The Media Delivery Index is typically displayed as two numbers separated by a colon: the Delay Factor (DF) and the Media Loss Rate (MLR). [2]
The Media Delivery Index (MDI) may be able to identify problems caused by the two impairments described below: packet jitter (time distortion) and packet loss.
If packets are delayed by the network, some arrive in bursts, with interpacket delays shorter than when they were transmitted, while others arrive with greater delay between packets than when they left the source. The difference between a packet's actual arrival time and its expected arrival time is defined as packet jitter or time distortion. [3]
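To make the definition concrete, here is a minimal sketch (not from the cited sources) that computes per-packet jitter from arrival timestamps, assuming the source transmits at a fixed nominal interval; the function name and inputs are illustrative:

```python
def packet_jitter(arrival_times, nominal_interval):
    """Per-packet jitter: actual arrival time minus expected arrival time.

    arrival_times: arrival timestamps in seconds; the first packet is the reference.
    nominal_interval: spacing between packets at the source, in seconds.
    """
    t0 = arrival_times[0]
    # Packet i is expected at t0 + i * nominal_interval.
    return [t - (t0 + i * nominal_interval) for i, t in enumerate(arrival_times)]

# Packets sent every 10 ms: the third arrives 4 ms late, and the fourth
# arrives bunched behind it (2 ms early relative to its expected time).
print(packet_jitter([0.000, 0.010, 0.024, 0.028], 0.010))
# ≈ [0.0, 0.0, 0.004, -0.002]
```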
A receiver displaying the video at its nominal rate must accommodate the varying arrival times of the input stream by buffering data that arrives early and ensuring that enough data is already stored to ride out delays in arriving data; for this reason, the buffer is filled before display begins.
Similarly, the network infrastructure (switches, routers, etc.) uses buffers at each node to avoid packet loss. These buffers must be sized appropriately to handle network congestion.
Packet delays can be caused by several factors, including the way traffic is routed through the infrastructure and differences between link speeds within it.
Moreover, some methods of delivering Quality of Service (QoS) use packet metering algorithms that may intentionally hold back packets to meet the quality specifications of the transmission (a simple shaper of this kind is sketched below). [4] [5]
All of these factors affect the number of packets received over time at a given point in the network.
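As an illustration of the metering mentioned above, the following is a minimal token-bucket shaper sketch. It is one common approach, not necessarily the one the cited sources describe; the class name and parameters are hypothetical, and real implementations live in network elements rather than application code:

```python
import time

class TokenBucketShaper:
    """Delays sends so a flow conforms to `rate` bytes/s with bursts of
    at most `burst` bytes (one common packet-metering approach)."""

    def __init__(self, rate, burst):
        self.rate = rate                  # sustained rate, bytes per second
        self.burst = burst                # bucket capacity, bytes
        self.tokens = burst               # start with a full bucket
        self.last = time.monotonic()

    def send(self, packet_len):
        # Refill tokens for the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len > self.tokens:
            # Intentionally hold the packet back until enough tokens accrue.
            time.sleep((packet_len - self.tokens) / self.rate)
            self.tokens = 0.0
            self.last = time.monotonic()
        else:
            self.tokens -= packet_len
        # ...transmit the packet here...
```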
Packets may be lost due to buffer overflows or to environmental electrical noise that corrupts packets. Even small packet loss rates result in poor displayed video.
Packet delay variation and packet loss have been shown to be the key characteristics in determining whether a network can transport good-quality video. These features are represented by the Delay Factor (DF) and the Media Loss Rate (MLR), which are combined to produce the Media Delivery Index (MDI), displayed as:

MDI = DF:MLR
The different components of the Media Delivery Index (MDI) are explained in this section.
The Delay Factor is a temporal value, given in milliseconds, that indicates how much time is required to drain the virtual buffer at a given network node at a specific time. In other words, it indicates how many milliseconds' worth of data the buffers must be able to hold in order to eliminate time distortion (jitter). [3]
It is computed as packets arrive at the node and is displayed/recorded at regular intervals (typically one second).
It is calculated as follows:
1. At every packet arrival, the difference between the number of bytes received and the number of bytes drained is calculated. This determines the MDI virtual buffer depth:

Δ = bytes received − bytes drained
2. Over a time interval, the difference between the maximum and minimum values of Δ is taken and divided by the media rate:

DF = (max(Δ) − min(Δ)) / media rate
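As a minimal sketch of the two steps above (assuming a constant nominal media rate in bytes per second and per-packet arrival timestamps; the function name and inputs are illustrative, not from the cited sources):

```python
def delay_factor(packets, media_rate, t_start, t_end):
    """Delay Factor (DF) in milliseconds over [t_start, t_end].

    packets: iterable of (arrival_time_s, size_bytes) in arrival order.
    media_rate: nominal stream rate in bytes per second.
    """
    deltas = []
    received = 0.0
    for t, size in packets:
        if not t_start <= t <= t_end:
            continue
        received += size
        drained = (t - t_start) * media_rate   # bytes a nominal-rate drain would have consumed
        deltas.append(received - drained)      # step 1: virtual buffer depth
    if not deltas:
        return 0.0
    # Step 2: spread of the virtual buffer depth, expressed as time.
    return (max(deltas) - min(deltas)) / media_rate * 1000.0

# A perfectly paced 375,060 B/s (~3 Mbit/s) stream over one second gives DF = 0;
# bunched or delayed arrivals would widen the spread of Δ and raise DF.
packets = [(i / 285, 1316) for i in range(285)]
print(delay_factor(packets, 1316 * 285, 0.0, 1.0))   # 0.0
```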
Maximum acceptable DF: 9–50 ms [5]
The Media Loss Rate is the number of media packets lost over a certain time interval (typically one second). [3]
It is computed by subtracting the number of media packets received during an interval from the number of media packets expected during that interval and scaling the value to the chosen time period (typically one second):

MLR = (packets expected − packets received) / measurement interval

where the measurement interval is expressed in units of the chosen time period.
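A sketch of this computation (names are illustrative; in practice the expected packet count would come from continuity counters or the nominal stream rate):

```python
def media_loss_rate(expected, received, interval_s, period_s=1.0):
    """Media Loss Rate: media packets lost in the interval, scaled to the chosen period."""
    return (expected - received) * (period_s / interval_s)

# Example: 3 of an expected 3,000 media packets are missing over a 10 s interval.
print(media_loss_rate(3000, 2997, 10.0))   # 0.3 packets lost per second
```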
Maximum acceptable channel zapping MLR: 0 [5]
Maximum acceptable average MLR:
- SDTV: 0.004
- VOD: 0.004
- HDTV: 0.0005
Note that the maximum acceptable MLR depends on the implementation. During channel zapping, a channel is generally viewed only briefly, so any packet loss would be noticeable to the viewer. For this case the maximum acceptable MLR is 0, as stated above, because any greater value would mean the loss of one or more packets within a short viewing window (after the zap time).
Generally, the Media Delivery Index (MDI) can be used to install, modify or evaluate a video network by measuring DF and MLR at the points of interest and comparing the results against acceptable limits such as those above. [3]
Given these results, measures must be taken to address the problems found in the network, such as redefining system specifications or modifying network components to meet the expected quality requirements (or number of users).
Other parameters may also be desired in order to troubleshoot concerns identified with the MDI and to aid in system configuration and monitoring. [3] The following related concepts provide useful background for such measurements:
Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.
In electronics and telecommunications, jitter is the deviation from true periodicity of a presumably periodic signal, often in relation to a reference clock signal. In clock recovery applications it is called timing jitter. Jitter is a significant, and usually undesired, factor in the design of almost all communications links.
Differentiated services or DiffServ is a computer networking architecture that specifies a mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low latency to critical network traffic such as voice or streaming media while providing best-effort service to non-critical services such as web traffic or file transfers.
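For illustration (an assumption, not part of the definition above): on Linux, an application can request a DiffServ per-hop behavior by setting the DSCP bits of outgoing packets through a standard socket option. The address and port here are hypothetical:

```python
import socket

EF = 46  # Expedited Forwarding DSCP code point, typical for voice/video

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP occupies the upper six bits of the former IP TOS byte.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
sock.sendto(b"media payload", ("203.0.113.10", 5004))  # hypothetical receiver
```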
IEEE 802.11e-2005 or 802.11e is an approved amendment to the IEEE 802.11 standard that defines a set of quality of service (QoS) enhancements for wireless LAN applications through modifications to the media access control (MAC) layer. The standard is considered of critical importance for delay-sensitive applications, such as Voice over Wireless LAN and streaming multimedia. The amendment has been incorporated into the published IEEE 802.11-2007 standard.
The RTP Control Protocol (RTCP) is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure is defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data but does not transport any media data itself.
The leaky bucket is an algorithm based on an analogy of how a bucket with a constant leak will overflow if either the average rate at which water is poured in exceeds the rate at which the bucket leaks, or if more water than the capacity of the bucket is poured in all at once. It can be used to determine whether a sequence of discrete events conforms to defined limits on its average and peak rates or frequencies, e.g. to limit the actions associated with these events to those rates or to delay them until they conform. It may also be used to check conformance with, or limit to, an average rate alone, i.e. to remove any variation from the average.
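A minimal conformance-meter sketch of the algorithm just described (class and parameter names are illustrative):

```python
import time

class LeakyBucketMeter:
    """Conformance check: the bucket drains at `leak_rate` units/s and holds
    at most `capacity` units; an event conforms only if it fits when it occurs."""

    def __init__(self, leak_rate, capacity):
        self.leak_rate = leak_rate
        self.capacity = capacity
        self.level = 0.0
        self.last = time.monotonic()

    def conforms(self, size=1.0):
        now = time.monotonic()
        # Drain the bucket for the elapsed time, never below empty.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + size > self.capacity:
            return False      # exceeds the agreed burst/average limits
        self.level += size
        return True
```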
In computer networking and telecommunications, TDM over IP (TDMoIP) is the emulation of time-division multiplexing (TDM) over a packet-switched network (PSN). TDM refers to a T1, E1, T3 or E3 signal, while the PSN is based either on IP or MPLS or on raw Ethernet. A related technology is circuit emulation, which enables transport of TDM traffic over cell-based (ATM) networks.
Network performance refers to measures of service quality of a network as seen by the customer.
A long-tailed or heavy-tailed distribution is one that assigns relatively high probabilities to regions far from the mean or median. In the context of teletraffic engineering a number of quantities of interest have been shown to have long-tailed distributions. For example, the sizes of files transferred from a web server are, to a good degree of accuracy, heavy-tailed: a large number of small files are transferred but, crucially, the very large files remain a major component of the volume downloaded.
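A quick simulation (illustrative only, with an assumed Pareto shape of 1.2) shows the effect described above: a small fraction of the largest files carries a disproportionate share of the total bytes:

```python
import random

random.seed(1)
# Pareto-distributed "file sizes" (shape 1.2: heavy-tailed, finite mean).
sizes = sorted((random.paretovariate(1.2) for _ in range(100_000)), reverse=True)

top_1_percent = sum(sizes[:1000])
total = sum(sizes)
print(f"top 1% of files carry {top_1_percent / total:.0%} of the bytes")
```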
If a network service wishes to use a broadband network to transport a particular kind of traffic, it must first inform the network about what kind of traffic is to be transported, and the performance requirements of that traffic. The application presents this information to the network in the form of a traffic contract.
Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is either caused by errors in data transmission, typically across wireless networks, or network congestion. Packet loss is measured as a percentage of packets lost with respect to packets sent.
Quality of experience (QoE) is a measure of the delight or annoyance of a customer's experiences with a service. QoE focuses on the entire service experience; it is a holistic concept, similar to the field of user experience, but with its roots in telecommunication. QoE is an emerging multidisciplinary field based on social psychology, cognitive science, economics, and engineering science, focused on understanding overall human quality requirements.
Capacity management's goal is to ensure that information technology resources are sufficient to meet upcoming business requirements cost-effectively. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.
In computer networking, packet delay variation (PDV) is the difference in end-to-end one-way delay between selected packets in a flow with any lost packets being ignored. The effect is sometimes referred to as packet jitter, although the definition is an imprecise fit.
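As a small sketch of this definition (using one common convention in which delays are referenced to the minimum observed delay; names are illustrative):

```python
def packet_delay_variation(one_way_delays, reference=None):
    """PDV per the definition above: difference in one-way delay between
    selected packets, with lost packets (None entries) ignored.

    Uses the minimum observed delay as the reference unless one is given.
    """
    delays = [d for d in one_way_delays if d is not None]  # drop lost packets
    ref = min(delays) if reference is None else reference
    return [d - ref for d in delays]

# Delays in ms; None marks a lost packet, which is ignored.
print(packet_delay_variation([40.0, 42.5, None, 41.0]))  # [0.0, 2.5, 1.0]
```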
The zap time is the total duration from the moment the viewer changes the channel using a remote control to the point at which the picture of the new channel, together with its audio, is displayed. These delays exist in all television systems, but they are more pronounced in digital television and in systems that use the internet, such as IPTV. Human interaction with the system is excluded from these measurements, so zap time is not the same as channel surfing.
ITU-T Y.156sam Ethernet Service Activation Test Methodology is a draft recommendation under study by the ITU-T describing a new testing methodology adapted to the multiservice reality of packet-based networks.
Bufferbloat is a cause of high latency and jitter in packet-switched networks caused by excess buffering of packets. Bufferbloat can also cause packet delay variation, as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing.
ITU-T Y.1564 is an Ethernet service activation test methodology, which is the new ITU-T standard for turning up, installing and troubleshooting Ethernet-based services. It is the only standard test methodology that allows for complete validation of Ethernet service-level agreements (SLAs) in a single test.
Time-Sensitive Networking (TSN) is a set of standards under development by the Time-Sensitive Networking task group of the IEEE 802.1 working group. The TSN task group was formed in November 2012 by renaming the existing Audio Video Bridging Task Group and continuing its work. The name changed as a result of the extension of the working area of the standardization group. The standards define mechanisms for the time-sensitive transmission of data over deterministic Ethernet networks.
Deterministic Networking (DetNet) is an effort by the IETF DetNet Working Group to study implementation of deterministic data paths for real-time applications with extremely low data loss rates, packet delay variation (jitter), and bounded latency, such as audio and video streaming, industrial automation, and vehicle control.