Internet traffic engineering


Internet traffic engineering is defined as that aspect of Internet network engineering dealing with the issue of performance evaluation and performance optimization of operational IP networks. Traffic engineering encompasses the application of technology and scientific principles to the measurement, characterization, modeling, and control of Internet traffic [RFC-2702, AWD2].


The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing. Some publications no longer capitalize "internet".


Engineering is the application of knowledge in the form of science, mathematics, and empirical evidence, to the innovation, design, construction, operation and maintenance of structures, machines, materials, devices, systems, processes, and organizations. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering.

Enhancing the performance of an operational network, at both the traffic and resource levels, is a major objective of Internet traffic engineering. This is accomplished by addressing traffic-oriented performance requirements while utilizing network resources economically and reliably. Traffic-oriented performance measures include packet transfer delay, packet delay variation, packet loss, and throughput.


A computer network is a digital telecommunications network that allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections (data links) between nodes. These data links are established over wired media such as copper or fiber-optic cables, or over wireless media such as Wi-Fi.

In computer networking, packet delay variation (PDV) is the difference in end-to-end one-way delay between selected packets in a flow with any lost packets being ignored. The effect is sometimes referred to as packet jitter, although the definition is an imprecise fit.

Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is either caused by errors in data transmission, typically across wireless networks, or network congestion. Packet loss is measured as a percentage of packets lost with respect to packets sent.
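As a worked illustration of these two metrics, the following sketch (Python, with invented delay samples) computes packet delay variation relative to the minimum-delay packet, and the loss percentage; the delays_ms values and the choice of reference packet are assumptions made for the example, not prescribed by any standard.

    # One-way delays in milliseconds for ten packets of a flow;
    # None marks a packet that never arrived (lost).
    delays_ms = [20.1, 22.4, None, 19.8, 25.0, 21.2, None, 20.5, 23.9, 20.0]

    received = [d for d in delays_ms if d is not None]

    # Packet delay variation: difference in one-way delay between each
    # received packet and a selected reference packet (here, the packet
    # with the minimum delay), with lost packets ignored.
    reference = min(received)
    pdv = [d - reference for d in received]

    # Packet loss: percentage of packets lost with respect to packets sent.
    loss_pct = 100.0 * delays_ms.count(None) / len(delays_ms)

    print("PDV samples (ms):", [round(v, 1) for v in pdv])
    print(f"packet loss: {loss_pct:.0f}%")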

An important objective of Internet traffic engineering is to facilitate reliable network operations [RFC-2702]. This can be done by providing mechanisms that enhance network integrity and by embracing policies emphasizing network survivability. The result is a minimization of the network's vulnerability to service outages arising from errors, faults, and failures occurring within the infrastructure.

The Internet exists in order to transfer information from source nodes to destination nodes. Accordingly, one of the most crucial functions performed by the Internet is the routing of traffic from ingress nodes to egress nodes.

Ultimately, it is the performance of the network as seen by network services that is truly paramount. This observation should be considered throughout the development of traffic engineering mechanisms and policies. The characteristics visible to end users are the emergent properties of the network, which are characteristics of the network when viewed as a whole. A goal of the service provider, therefore, is to enhance the emergent properties of the network while taking economic considerations into account.

The importance of the above observation regarding the properties of networks is that special care must be taken when choosing network performance metrics to optimize. Optimizing the wrong metrics may achieve certain local objectives, but may have repercussions elsewhere.

Related Research Articles

Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.


A router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork until it reaches its destination node.

Routing is the process of selecting a path for traffic in a network or between or across multiple networks. Broadly, routing is performed in many types of networks, including circuit-switched networks, such as the public switched telephone network (PSTN), and computer networks, such as the Internet.
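At its core, routing amounts to computing a best path through a weighted graph. The sketch below uses Dijkstra's shortest-path algorithm in Python over an invented five-router topology; real routing protocols wrap such a computation in neighbor discovery, topology distribution, and policy.

    import heapq

    def shortest_path(graph, src, dst):
        """Dijkstra's algorithm over a dict of {node: {neighbor: cost}}."""
        dist = {src: 0}
        prev = {}
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue  # stale heap entry, already improved
            for nbr, cost in graph[node].items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    prev[nbr] = node
                    heapq.heappush(heap, (nd, nbr))
        # Walk the predecessor chain back from dst to recover the path.
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), dist[dst]

    # Toy topology: link costs between routers A-E (assumed values).
    topology = {
        "A": {"B": 1, "C": 4},
        "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1},
        "D": {"B": 5, "C": 1, "E": 3},
        "E": {"D": 3},
    }
    print(shortest_path(topology, "A", "E"))  # (['A', 'B', 'C', 'D', 'E'], 7)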

In computer networking, the User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network. Prior communications are not required in order to set up communication channels or data paths.
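UDP's lack of connection setup is directly visible in the socket API: a datagram can be sent with no prior handshake. A minimal Python sketch, runnable in a single process; the loopback address and port number are arbitrary example values.

    import socket

    # A UDP "server" is just a socket bound to a port; there is no
    # listen/accept step because there is no connection to accept.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # The sender transmits a datagram immediately; no channel or data
    # path is set up beforehand.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(1024)
    print(data, "from", addr)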

In computer networking a routing table, or routing information base (RIB), is a data table stored in a router or a networked computer that lists the routes to particular network destinations, and in some cases, metrics (distances) associated with those routes. The routing table contains information about the topology of the network immediately around it. The construction of routing tables is the primary goal of routing protocols. Static routes are entries made in a routing table by non-automatic means and which are fixed rather than being the result of some network topology "discovery" procedure.
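A routing table lookup can be sketched as a longest-prefix match over the stored routes: the most specific prefix containing the destination wins. The routes and next hops below are invented, and Python's ipaddress module stands in for a router's real forwarding structures.

    import ipaddress

    # Toy routing table as (destination prefix, next hop) pairs.
    routing_table = [
        (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1"),    # default route
        (ipaddress.ip_network("10.0.0.0/8"), "192.0.2.2"),
        (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.3"),
    ]

    def lookup(address):
        """Longest-prefix match: the most specific matching route wins."""
        addr = ipaddress.ip_address(address)
        matches = [(net, hop) for net, hop in routing_table if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)

    print(lookup("10.1.2.3"))     # matches 10.1.0.0/16 -> via 192.0.2.3
    print(lookup("10.9.9.9"))     # matches 10.0.0.0/8  -> via 192.0.2.2
    print(lookup("203.0.113.5"))  # falls through to the default route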

Differentiated services or DiffServ is a computer networking architecture that specifies a simple and scalable mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low-latency service to critical network traffic such as voice or streaming media, while providing simple best-effort service to non-critical services such as web traffic or file transfers.
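The classification DiffServ acts on is carried in the DSCP field of the IP header. On many Unix-like systems an application can request a marking through the IP_TOS socket option, as in the Python sketch below; the Expedited Forwarding code point (46) is standard, but whether the operating system and the network honor the marking is a matter of local policy, and the destination address and port are invented.

    import socket

    EF = 46  # Expedited Forwarding DSCP, commonly used for low-latency traffic

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The DSCP occupies the upper six bits of the former TOS byte,
    # so the code point is shifted left by two.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
    sock.sendto(b"voice sample", ("192.0.2.10", 5004))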

The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, application-specific features reside in the communicating end nodes of the network, rather than in intermediary nodes, such as gateways and routers, that exist to establish the network.

Traffic shaping is a bandwidth management technique used on computer networks which delays some or all datagrams to bring them into compliance with a desired traffic profile. Traffic shaping is used to optimize or guarantee performance, improve latency, or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing, the distinct but related practice of packet dropping and packet marking.
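A common shaping discipline is the token bucket: a packet is released only when enough tokens have accumulated, which caps the sustained rate while permitting bounded bursts. The single-threaded Python sketch below illustrates the idea; the class, its interface, and the example rates are all invented for illustration.

    import time

    class TokenBucketShaper:
        """Delay packets until the bucket holds enough tokens.

        rate is the sustained rate in bytes/second; burst is the bucket
        depth in bytes (the largest burst released without any delay).
        """

        def __init__(self, rate, burst):
            self.rate = rate
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def send(self, nbytes):
            # Refill tokens for the time elapsed since the last call.
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if nbytes > self.tokens:
                # Shaping rather than policing: delay the packet instead
                # of dropping or marking it.
                wait = (nbytes - self.tokens) / self.rate
                time.sleep(wait)
                self.tokens += wait * self.rate
                self.last = time.monotonic()
            self.tokens -= nbytes
            # ...the packet would actually be transmitted here...

    shaper = TokenBucketShaper(rate=125_000, burst=10_000)  # ~1 Mbit/s
    for _ in range(10):
        shaper.send(1500)  # a stream of 1500-byte packets, now paced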

Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either to only a small increase in network throughput, or even to a decrease.
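The saturation effect can be reproduced with a toy model: a single link serving one packet per time slot, fed through a finite buffer. In the Python sketch below (all parameters invented; retransmission dynamics, which cause true congestion collapse, are deliberately omitted), pushing the offered load past the service rate buys almost no extra throughput and instead shows up as loss.

    import random

    def offered_vs_delivered(offered_load, slots=100_000, buffer_size=20):
        """Slotted FIFO queue: ~offered_load arrivals/slot, 1 served/slot."""
        random.seed(1)
        queue = delivered = lost = 0
        base, frac = int(offered_load), offered_load - int(offered_load)
        for _ in range(slots):
            arrivals = base + (1 if random.random() < frac else 0)
            admitted = min(arrivals, buffer_size - queue)
            lost += arrivals - admitted   # buffer overflow -> packet loss
            queue += admitted
            if queue:                     # serve at most one packet per slot
                queue -= 1
                delivered += 1
        return delivered / slots, lost / slots

    for load in (0.5, 0.9, 1.1, 1.5):
        thr, loss = offered_vs_delivered(load)
        print(f"offered {load:.1f} -> throughput {thr:.2f}, lost {loss:.2f}")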

6to4 is an Internet transition mechanism for migrating from Internet Protocol version 4 (IPv4) to version 6 (IPv6), a system that allows IPv6 packets to be transmitted over an IPv4 network without the need to configure explicit tunnels. Special relay servers are also in place that allow 6to4 networks to communicate with native IPv6 networks.
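The mapping from IPv4 to 6to4 is purely mechanical: the site's 32 public IPv4 address bits are placed directly after the reserved 2002::/16 prefix, yielding a /48 for the site. A short Python sketch (the IPv4 address is an example value from the documentation range):

    import ipaddress

    def sixto4_prefix(ipv4):
        """Embed a public IPv4 address into the 2002::/16 space."""
        v4 = int(ipaddress.IPv4Address(ipv4))
        # 16 prefix bits + 32 IPv4 bits + 80 zero bits = a /48 site prefix.
        v6 = (0x2002 << 112) | (v4 << 80)
        return ipaddress.IPv6Network((v6, 48))

    print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48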

RFC 2638 from the IETF defines the entity of the Bandwidth Broker (BB) in the framework of differentiated services (DiffServ). According to RFC 2638, a Bandwidth Broker is an agent that has some knowledge of an organization's priorities and policies and allocates quality of service (QoS) resources with respect to those policies. In order to achieve an end-to-end allocation of resources across separate domains, the Bandwidth Broker managing a domain will have to communicate with its adjacent peers, which allows end-to-end services to be constructed out of purely bilateral agreements. Admission control is one of the main tasks that a Bandwidth Broker has to perform, in order to decide whether an incoming resource reservation request will be accepted or not. Most Bandwidth Brokers use simple admission control modules, although there are also proposals for more sophisticated admission control according to several metrics such as acceptance rate, network utilization, etc. The BB acts as a Policy Decision Point (PDP) in deciding whether to allow or reject a flow, whilst the edge routers act as Policy Enforcement Points (PEPs) to police traffic.
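At its simplest, the admission-control decision reduces to a capacity check on every link of the requested path. The Python sketch below is a deliberately naive rendering of such a "simple admission control module"; the link names, capacities, and the request interface are all invented for illustration.

    class BandwidthBroker:
        """Toy admission control: admit a reservation only if every link
        on the path still has enough unallocated capacity."""

        def __init__(self, link_capacity_mbps):
            self.capacity = dict(link_capacity_mbps)
            self.allocated = {link: 0.0 for link in self.capacity}

        def request(self, path, mbps):
            # Policy Decision Point role: admit or reject the whole request.
            if any(self.allocated[l] + mbps > self.capacity[l] for l in path):
                return False  # some link would be oversubscribed
            for l in path:
                self.allocated[l] += mbps
            return True

    bb = BandwidthBroker({"A-B": 100, "B-C": 50})
    print(bb.request(["A-B", "B-C"], 40))  # True: fits on both links
    print(bb.request(["A-B", "B-C"], 20))  # False: B-C would exceed 50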

Capacity management's primary goal is to ensure that information technology resources are right-sized to meet current and future business requirements in a cost-effective manner. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.

6LoWPAN is an acronym of IPv6 over Low-Power Wireless Personal Area Networks. 6LoWPAN is the name of a concluded working group in the Internet area of the IETF.

A routing protocol specifies how routers communicate with each other, distributing information that enables them to select routes between any two nodes on a computer network. Routers perform the "traffic directing" functions on the Internet; data packets are forwarded through the networks of the Internet from router to router until they reach their destination computer. Routing algorithms determine the specific choice of route. Each router has prior knowledge only of networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network. The ability of routing protocols to dynamically adjust to changing conditions, such as disabled data lines and computers, and to route data around obstructions is what gives the Internet its survivability and reliability.
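The neighbor-by-neighbor spread of reachability information is the essence of a distance-vector protocol. The sketch below iterates synchronous exchanges until the routing tables stop changing (Bellman-Ford relaxation); the topology and costs are invented, and real protocols such as RIP add timers, split horizon, and other safeguards.

    INF = float("inf")

    # Undirected toy topology as per-node neighbor costs (assumed values).
    neighbors = {
        "A": {"B": 1, "C": 5},
        "B": {"A": 1, "C": 2},
        "C": {"A": 5, "B": 2, "D": 1},
        "D": {"C": 1},
    }
    nodes = list(neighbors)

    # Initially each router knows only itself and its attached links.
    dist = {n: {m: (0 if m == n else neighbors[n].get(m, INF))
                for m in nodes}
            for n in nodes}

    # Each round, every router offers its distance vector to its
    # neighbors, which relax their own estimates accordingly.
    changed = True
    while changed:
        changed = False
        for n in nodes:
            for nbr, cost in neighbors[n].items():
                for dest in nodes:
                    via_nbr = cost + dist[nbr][dest]
                    if via_nbr < dist[n][dest]:
                        dist[n][dest] = via_nbr
                        changed = True

    print(dist["A"])  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}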

In multi-hop networks, adaptive quality-of-service routing protocols have become increasingly popular and have numerous applications. One application in which they may be useful is mobile ad hoc networking (MANET).

Multiprotocol Label Switching - Transport Profile (MPLS-TP) is a variant of the MPLS protocol that is used in packet switched data networks. MPLS-TP is the product of a joint Internet Engineering Task Force (IETF) / International Telecommunication Union Telecommunication Standardization Sector (ITU-T) effort to include an MPLS Transport Profile within the IETF MPLS and PWE3 architectures to support the capabilities and functionalities of a packet transport network.

ITU-T Y.156sam Ethernet Service Activation Test Methodology is a draft recommendation under study by the ITU-T describing a new testing methodology adapted to the multiservice reality of packet-based networks.

ITU-T Y.1564 is an Ethernet service activation test methodology, which is the new ITU-T standard for turning up, installing and troubleshooting Ethernet-based services. It is the only standard test methodology that allows for complete validation of Ethernet service-level agreements (SLAs) in a single test.

The Recursive InterNetwork Architecture (RINA) is a new computer network architecture proposed as an alternative to the currently mainstream TCP/IP model. The RINA's fundamental principles are that computer networking is just Inter-Process Communication or IPC, and that layering should be done based on scope/scale, with a single recurring set of protocols, rather than function, with specialized protocols. The protocol instances in one layer interface with the protocol instances on higher and lower layers via new concepts and entities that effectively reify networking functions currently specific to protocols like BGP, OSPF and ARP. In this way, the RINA proposes to support features like mobility, multi-homing and Quality of Service without the need for extra specialized protocols like RTP and UDP, as well as allow simplified network administration without the need for concepts like autonomous systems and NAT.

References

[RFC-2702] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. McManus, "Requirements for Traffic Engineering Over MPLS", RFC 2702, September 1999.

[AWD2] Awduche, D., "MPLS and Traffic Engineering in IP Networks", IEEE Communications Magazine, December 1999.