UDP-based Data Transfer Protocol

UDT
Developer(s): Yunhong Gu
Stable release: 4.11 / February 23, 2013
Repository: sourceforge.net/projects/udt/
Written in: C++
Type: Protocol (computing)
License: BSD License
Website: udt.sourceforge.net

UDP-based Data Transfer Protocol (UDT) is a high-performance data transfer protocol designed for transferring very large datasets over high-speed wide area networks. Such settings are typically disadvantageous for the more common TCP protocol.

Initial versions were developed and tested on very high-speed networks (1 Gbit/s, 10 Gbit/s, etc.); however, recent versions of the protocol have been updated to support the commodity Internet as well. For example, the protocol now supports rendezvous connection setup, which is a desirable feature for traversing NAT firewalls using UDP.

UDT has an open source implementation which can be found on SourceForge. It is one of the most popular solutions for supporting high-speed data transfer and is part of many research projects and commercial products.

Background

UDT was developed by Yunhong Gu [1] during his PhD studies at the National Center for Data Mining (NCDM) of the University of Illinois at Chicago, in the laboratory of Dr. Robert Grossman. Dr. Gu has continued to maintain and improve the protocol since graduation.

The UDT project started in 2001, when inexpensive optical networks became popular and triggered a wider awareness of TCP efficiency problems over high-speed wide area networks. The first version of UDT, also known as SABUL (Simple Available Bandwidth Utility Library), was designed to support bulk data transfer for scientific data movement over private networks. SABUL used UDP for data transfer and a separate TCP connection for control messages.

In October 2003, the NCDM achieved a 6.8 gigabits per second transfer from Chicago, United States, to Amsterdam, Netherlands. During the 30-minute test, approximately 1.4 terabytes of data were transmitted.

SABUL was renamed UDT starting with version 2.0, released in 2004. UDT2 removed SABUL's TCP control connection and used UDP for both data and control information. UDT2 also introduced a new congestion control algorithm that allowed the protocol to run fairly and in a TCP-friendly manner alongside concurrent UDT and TCP flows.

UDT3 (2006) extended the usage of the protocol to the commodity Internet. Congestion control was tuned to support relatively low bandwidth as well. UDT3 also significantly reduced the use of system resources (CPU and memory). Additionally, UDT3 allows users to easily define and install their own congestion control algorithms.

UDT4 (2007) introduced several new features to better support high concurrency and firewall traversal. It allowed multiple UDT connections to bind to the same UDP port, and it also supported rendezvous connection setup for easier UDP hole punching.

A fifth version of the protocol is currently in the planning stage. Possible features include the ability to support multiple independent sessions over a single connection.

Moreover, because the absence of a security feature in UDT had been an issue for its initial deployment in commercial environments, Bernardo (2011) developed a security architecture for UDT as part of his PhD studies. This architecture is, however, undergoing enhancement to support UDT in various network environments (e.g., optical networks).

Protocol architecture

UDT is built on top of the User Datagram Protocol (UDP), adding congestion control and reliability control mechanisms. UDT is an application-level, connection-oriented, duplex protocol that supports both reliable data streaming and partially reliable messaging.
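
These two delivery modes are exposed through the library's C++ API: a SOCK_STREAM socket provides a reliable byte stream, while a SOCK_DGRAM socket carries messages whose delivery can be bounded by a time-to-live and left unordered. The sketch below is illustrative only; it follows the UDT4 header (udt.h), omits error handling, and takes the peer address as a placeholder parameter.

    #include <udt.h>
    #include <sys/socket.h>

    void udt_send_modes(const sockaddr* peer, int peer_len) {
        UDT::startup();                                   // initialize the UDT library

        // Reliable, ordered byte streaming (TCP-like semantics).
        UDTSOCKET stream = UDT::socket(AF_INET, SOCK_STREAM, 0);
        UDT::connect(stream, peer, peer_len);
        const char data[] = "bulk payload";
        UDT::send(stream, data, sizeof(data), 0);

        // Partially reliable messaging: drop the message if it cannot be
        // delivered within 100 ms, and do not enforce in-order delivery.
        UDTSOCKET msg = UDT::socket(AF_INET, SOCK_DGRAM, 0);
        UDT::connect(msg, peer, peer_len);
        UDT::sendmsg(msg, data, sizeof(data), 100 /* ttl ms */, false /* in order */);

        UDT::close(stream);
        UDT::close(msg);
        UDT::cleanup();                                   // release library resources
    }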

Acknowledging

UDT uses periodic acknowledgments (ACKs) to confirm packet delivery, while negative ACKs (loss reports) are used to report packet loss. Periodic ACKs help to reduce control traffic on the reverse path when the data transfer speed is high, because in these situations the number of ACKs is proportional to time rather than to the number of data packets.
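
A toy sketch of this scheme (not the UDT source code) is given below: loss reports go out as soon as a sequence gap is detected, while a cumulative ACK is emitted only when a fixed timer expires, so the volume of ACK traffic depends on elapsed time rather than on the number of packets received.

    #include <chrono>
    #include <cstdint>
    #include <set>

    class AckScheduler {
        using clock = std::chrono::steady_clock;
    public:
        explicit AckScheduler(std::chrono::milliseconds period)
            : period_(period), last_ack_(clock::now()) {}

        // Called for every received data packet; returns true if a NAK
        // (loss report) should be sent immediately.
        bool onData(uint32_t seq) {
            if (seq > next_) {
                for (uint32_t s = next_; s < seq; ++s) lost_.insert(s);  // record the gap
                next_ = seq + 1;
                return true;                       // report the loss right away
            }
            lost_.erase(seq);                      // a retransmission filled a hole
            if (seq == next_) ++next_;
            return false;
        }

        // Called from a periodic timer; yields the cumulative ACK number when due.
        bool ackDue(uint32_t& ack_no) {
            if (clock::now() - last_ack_ < period_) return false;
            last_ack_ = clock::now();
            ack_no = lost_.empty() ? next_ : *lost_.begin();
            return true;
        }

    private:
        std::chrono::milliseconds period_;
        clock::time_point last_ack_;
        uint32_t next_ = 0;                        // next expected sequence number
        std::set<uint32_t> lost_;                  // sequences reported as lost
    };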

AIMD with decreasing increase

UDT uses an AIMD (additive increase, multiplicative decrease) style congestion control algorithm. The size of the additive increase depends on the bandwidth estimated with the packet-pair technique: UDT probes for bandwidth rapidly while its sending rate is well below the estimated link capacity and increases more slowly, for better stability, as it approaches that capacity. The decrease factor is a random number between 1/8 and 1/2, which helps reduce the negative impact of loss synchronization.

In UDT, packet transmission is limited by both rate control and window control. The sending rate is updated by the AIMD algorithm described above. The congestion window, as a secondary control mechanism, is set according to the data arrival rate on the receiver side.
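
The interaction of these two mechanisms can be sketched as in the toy example below. The constants and the exact increase formula are illustrative assumptions, not UDT's actual parameters; only the overall shape (bandwidth-dependent additive increase, randomized multiplicative decrease, and a window sized from the receiver's arrival rate) follows the description above.

    #include <algorithm>
    #include <random>

    struct AimdRateControl {
        double rate_pps  = 100.0;    // current sending rate, packets per second
        double cwnd_pkts = 16.0;     // secondary congestion window, in packets
        std::mt19937 rng{12345};

        // Additive increase, evaluated once per fixed control interval.
        // The step grows with the gap between the packet-pair capacity
        // estimate and the current rate, so probing is fast far from the
        // estimated capacity and gentle close to it.
        void onControlInterval(double est_capacity_pps, double recv_rate_pps,
                               double rtt_sec, double interval_sec) {
            double headroom = std::max(est_capacity_pps - rate_pps, 0.0);
            rate_pps += std::max(headroom * 0.01, 1.0);          // illustrative step
            // Window control: sized from the receiver's measured arrival
            // rate over roughly one RTT plus one control interval.
            cwnd_pkts = recv_rate_pps * (rtt_sec + interval_sec) + 16.0;
        }

        // Multiplicative decrease on a loss report: back off by a random
        // factor between 1/8 and 1/2 to avoid loss synchronization.
        void onLossReport() {
            std::uniform_real_distribution<double> factor(0.125, 0.5);
            rate_pps *= 1.0 - factor(rng);
        }
    };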

Configurable congestion control

The UDT implementation exposes a set of variables related to congestion control in a C++ class and allows users to define a set of callback functions to manipulate these variables. Thus, users can redefine the control algorithm by overriding some or all of these callback functions. Most TCP control algorithms can be implemented using this feature with fewer than 100 lines of code.
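
A minimal sketch of such a user-defined algorithm is shown below. It assumes the CCC callback class and the UDT_CC socket option as they appear in the UDT4 distribution (ccc.h, udt.h); if a different version of the library is used, the names should be checked against its headers.

    #include <udt.h>
    #include <ccc.h>     // CCC base class and CCCFactory from the UDT4 source

    // A deliberately simple control algorithm: send at a fixed rate and
    // ignore feedback. Real algorithms would adjust m_dPktSndPeriod and
    // m_dCWndSize in the onACK/onLoss/onTimeout callbacks.
    class FixedRateCC : public CCC {
    public:
        virtual void init() {
            m_dPktSndPeriod = 10.0;    // microseconds between packets
            m_dCWndSize = 1000.0;      // large window so rate control dominates
        }
        virtual void onACK(int32_t) {}              // no rate adaptation on ACK
        virtual void onLoss(const int32_t*, int) {} // no back-off on loss reports
    };

    // Installing the algorithm on a socket before connecting:
    //   UDTSOCKET s = UDT::socket(AF_INET, SOCK_STREAM, 0);
    //   UDT::setsockopt(s, 0, UDT_CC, new CCCFactory<FixedRateCC>,
    //                   sizeof(CCCFactory<FixedRateCC>));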

Rendezvous connection setup

Besides the traditional client/server connection setup (also known as caller/listener, where a listener waits for connections and potentially accepts multiple connecting callers), UDT also supports a rendezvous connection setup mode. In this mode both sides listen on their port and connect to the peer simultaneously, that is, they both connect to one another. Both parties must therefore use the same port for the connection, and the two parties are role-equivalent (in contrast to the listener/caller roles of the traditional setup). Rendezvous is widely used for firewall traversal when both peers are behind firewalls.
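
In the UDT4 C++ API this mode is selected with the UDT_RENDEZVOUS socket option, as in the hedged sketch below; the port number and peer address are placeholders, and error handling is omitted.

    #include <udt.h>
    #include <arpa/inet.h>
    #include <cstdint>
    #include <cstring>

    // Each peer runs the same code with the other side's address: both bind
    // to the same port, enable rendezvous mode, and call connect() at
    // roughly the same time.
    UDTSOCKET rendezvous_connect(uint16_t port, const char* peer_ip) {
        UDT::startup();
        UDTSOCKET s = UDT::socket(AF_INET, SOCK_STREAM, 0);

        bool rdv = true;
        UDT::setsockopt(s, 0, UDT_RENDEZVOUS, &rdv, sizeof(bool));

        sockaddr_in local;                         // bind the shared port locally
        std::memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_port = htons(port);
        local.sin_addr.s_addr = INADDR_ANY;
        UDT::bind(s, (sockaddr*)&local, sizeof(local));

        sockaddr_in peer;                          // the other side, same port
        std::memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(port);
        inet_pton(AF_INET, peer_ip, &peer.sin_addr);

        if (UDT::connect(s, (sockaddr*)&peer, sizeof(peer)) == UDT::ERROR)
            return UDT::INVALID_SOCK;              // handshake did not complete
        return s;
    }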

Use scenarios

UDT is widely used in high-performance computing to support high-speed data transfer over optical networks. For example, GridFTP, a popular data transfer tool in grid computing, has UDT available as a data transfer protocol.

Over the commodity Internet, UDT has been used in many commercial products for fast file transfer over wide area networks.

Because UDT is purely based on UDP, it has also been used in many situations where TCP is at a disadvantage relative to UDP. These scenarios include peer-to-peer applications, video and audio communication, and many others.

Evaluation of feasible security mechanisms

UDT is considered a state-of-the-art protocol, addressing infrastructure requirements for transmitting data in high-speed networks. Its development, however, creates new vulnerabilities because, like many other protocols, it relies solely on the existing security mechanisms of current protocols such as the Transmission Control Protocol (TCP) and UDP.

Research conducted by Dr. Danilo Valeros Bernardo of the University of Technology Sydney, a member of the Australian Technology Network, focused on practical experiments with UDT using proposed security mechanisms and on exploring the use of existing TCP/UDP security mechanisms for UDT. The work attracted interest in various networking and security scientific communities.

To analyze the security mechanisms, the researchers carried out a formal proof of correctness, using protocol composition logic (PCL), to help determine their applicability. This approach is modular: it provides a separate proof for each protocol section and gives insight into the network environment in which each section can be reliably employed. Moreover, the proof holds for a variety of failure-recovery strategies and other implementation and configuration options. The technique is derived from the PCL analyses of TLS and Kerberos in the literature. The researchers are also developing and validating the security architecture using rewrite systems and automata.

The result of their work, the first of its kind in the literature, is a more robust theoretical and practical representation of a security architecture for UDT that is viable for use with other high-speed network protocols.

Derivative works

The UDT project served as the basis for the SRT project, which uses UDT's transmission reliability for live video streaming over the public internet.

Awards

The UDT team has won the prestigious Bandwidth Challenge three times during the annual ACM/IEEE Supercomputing Conference, the world's premier conference for high-performance computing, networking, storage, and analysis. [2] [3] [4]

At SC06 (Tampa, FL), the team transferred an astronomy dataset at 8 Gbit/s disk-to-disk from Chicago, IL to Tampa, FL using UDT. At SC08 (Austin, TX), the team demonstrated the use of UDT in a complex high-speed data transfer involving various distributed applications over a 120-node system, across four data centers in Baltimore, Chicago (2), and San Diego. At SC09 (Portland, OR), a collaborative team from NCDM, the Naval Research Laboratory, and iCAIR showcased UDT-powered wide-area data-intensive cloud computing applications.

References