Optical burst switching

Optical burst switching (OBS) is an optical networking technique that allows dynamic sub-wavelength switching of data. OBS is viewed as a compromise between the as-yet unfeasible full optical packet switching (OPS) and the mostly static optical circuit switching (OCS). It differs from these paradigms in that OBS control information is sent separately, on a reserved optical channel and in advance of the data payload. These control signals can then be processed electronically to allow the timely setup of a light path to transport the soon-to-arrive payload. This is known as delayed reservation.

Purpose

The purpose of optical burst switching (OBS) is to provision sub-wavelength granularity dynamically by combining electronics and optics. OBS operates on sets of packets with similar properties, called bursts, so its granularity is finer than that of optical circuit switching (OCS). OBS provides more bandwidth flexibility than wavelength routing but requires faster switching and control technology. OBS can be used to realize dynamic end-to-end all-optical communications.

Method

In OBS, packets are aggregated into data bursts at the edge of the network to form the data payload. Various assembly schemes based on time and/or size exist (see burst switching). Edge router architectures have been proposed (see [1] [2] ). OBS features a separation between the control plane and the data plane: a control signal (also termed a burst header or control packet) is associated with each data burst. The control signal is transmitted in optical form on a separate wavelength, termed the control channel, signaled out of band and processed electronically at each OBS router, whereas the data burst is transmitted in all-optical form from one end of the network to the other. The data burst can cut through intermediate nodes, and data buffers such as fiber delay lines may be used. In OBS, data is transmitted with full transparency through the intermediate nodes in the network. After the burst has passed a router, the router can accept new reservation requests.
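
As an illustration of the assembly step described above, the following Python sketch shows a hybrid timer- and size-based burst assembler. It is not taken from any cited architecture; the thresholds, the class name and the send_control/send_burst callbacks are assumptions made purely for the example.

```python
import time

class BurstAssembler:
    """Hypothetical hybrid (timer- and size-based) burst assembler sketch.

    Packets headed for the same egress node are queued until either the
    accumulated size reaches MAX_BURST_BYTES or the oldest packet has waited
    MAX_ASSEMBLY_DELAY seconds; the queue is then flushed as one data burst,
    announced by a control packet on the separate control channel.
    """

    MAX_BURST_BYTES = 64_000      # size threshold (assumed value)
    MAX_ASSEMBLY_DELAY = 0.002    # time threshold in seconds (assumed value)

    def __init__(self, send_control, send_burst):
        self.send_control = send_control  # callback: control channel
        self.send_burst = send_burst      # callback: data channel
        self.queue = []
        self.queued_bytes = 0
        self.first_arrival = None

    def add_packet(self, packet: bytes):
        if self.first_arrival is None:
            self.first_arrival = time.monotonic()
        self.queue.append(packet)
        self.queued_bytes += len(packet)
        self._maybe_flush()

    def _maybe_flush(self):
        # Note: a real assembler would also fire on a timer; this sketch only
        # re-checks the time threshold when a new packet arrives.
        timed_out = (time.monotonic() - self.first_arrival) >= self.MAX_ASSEMBLY_DELAY
        if self.queued_bytes >= self.MAX_BURST_BYTES or timed_out:
            burst = b"".join(self.queue)
            # The control packet announces the burst length so that intermediate
            # routers can configure their switch fabric before the burst arrives.
            self.send_control({"burst_bytes": len(burst)})
            self.send_burst(burst)
            self.queue, self.queued_bytes, self.first_arrival = [], 0, None
```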

Advantages of OBS over OPS and OCS

Advantages over OCS

More efficient bandwidth utilization – In an OCS system, a lightpath must be set up from source to destination in the optical network. If the data transmission duration is short relative to the setup time, bandwidth may not be used efficiently. In comparison, OBS does not require end-to-end lightpath setup and may therefore offer more efficient bandwidth utilization than an OCS system. This is similar to the advantage offered by packet switching over circuit switching.
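
A back-of-the-envelope Python calculation, using assumed figures for setup and transmission times, illustrates why a short transfer wastes capacity in OCS:

```python
# Rough utilization comparison (assumed, illustrative numbers only).
setup_time = 0.05          # OCS lightpath setup time, seconds
transfer_time = 0.01       # time to transmit the data itself, seconds

ocs_utilization = transfer_time / (setup_time + transfer_time)
print(f"OCS channel utilization: {ocs_utilization:.0%}")   # ~17%

# OBS avoids the end-to-end setup, so the channel is held only for the
# burst itself (plus a small guardband, ignored here).
obs_utilization = transfer_time / transfer_time
print(f"OBS channel utilization (idealized): {obs_utilization:.0%}")  # 100%
```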

Advantages over OPS

Removal of the throughput limitation – Optical buffer technology has not matured enough to enable low-cost manufacturing and widespread use in optical networks. Core optical network nodes are therefore likely to be either unbuffered or only lightly buffered. [3] In such networks, delayed reservation schemes such as Just Enough Time (JET) [4] are combined with electronic buffering at edge routers to reserve bandwidth. Using JET can create a throughput limitation in an edge router in an OPS system. [5] This limitation can be overcome by using OBS. [6] [7]
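
The following Python sketch illustrates the delayed-reservation idea behind a JET-style scheme under simplifying assumptions (one output wavelength, no fiber delay lines, bursts dropped on contention); the class and method names are hypothetical.

```python
class Wavelength:
    """One output wavelength at a core node; keeps (start, end) reservations."""

    def __init__(self):
        self.reservations = []   # list of (start, end) tuples, non-overlapping

    def try_reserve_jet(self, control_arrival, offset, burst_duration):
        """JET-style delayed reservation: the channel is reserved only from the
        expected burst arrival (control_arrival + offset), not from the moment
        the control packet is processed."""
        start = control_arrival + offset
        end = start + burst_duration
        for (s, e) in self.reservations:
            if start < e and s < end:      # overlaps an existing burst
                return False               # burst will be dropped (no buffers)
        self.reservations.append((start, end))
        return True

# Usage: control packets carrying a 2 ms offset and a 1 ms burst length.
wl = Wavelength()
print(wl.try_reserve_jet(control_arrival=0.0000, offset=0.002, burst_duration=0.001))  # True
print(wl.try_reserve_jet(control_arrival=0.0005, offset=0.002, burst_duration=0.001))  # False (overlap)
```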

Furthermore, there must be a guardband in the data channel between packets or bursts, so that core optical router data planes have adequate time to switch packets or bursts. If the guardband is large relative to the average packet or burst size, then it can limit data channel throughput. Aggregating packets into bursts can reduce guardband impact on data channel throughput.
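
A simple calculation with assumed figures shows how aggregation dilutes the guardband overhead:

```python
# Assumed, illustrative figures.
line_rate = 10e9                 # 10 Gbit/s data channel
guardband = 1e-6                 # 1 microsecond switching guardband per unit
packet_bits = 1500 * 8           # a single 1500-byte packet
burst_bits = 100 * packet_bits   # a burst aggregating 100 such packets

def efficiency(payload_bits):
    """Fraction of channel time carrying payload rather than guardband."""
    payload_time = payload_bits / line_rate
    return payload_time / (payload_time + guardband)

print(f"Per-packet switching efficiency: {efficiency(packet_bits):.1%}")  # ~54.5%
print(f"Per-burst switching efficiency:  {efficiency(burst_bits):.1%}")   # ~99.2%
```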

Reduced processing requirements and core network energy consumption – A core optical router in an OBS network may face reduced control plane requirements compared with one in an OPS network: a core optical router in an OPS network has to perform processing operations for every arriving packet, whereas in an OBS network the router performs processing operations once per arriving burst, which contains several packets. Fewer processing operations per packet are therefore required in an OBS network core optical router than in an OPS network. Consequently, the energy consumption, and potentially the carbon footprint, of a core optical router in an OPS network is likely to be larger than that of an OBS network router for the same amount of data.
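
The per-packet processing argument can be made concrete with a rough count of header-processing operations, using assumed figures:

```python
# Assumed, illustrative figures only.
packets_per_second = 10_000_000     # offered load at the core router
packets_per_burst = 100             # average aggregation factor at the edge
ops_per_header = 50                 # control operations per processed header

ops_OPS = packets_per_second * ops_per_header
ops_OBS = (packets_per_second / packets_per_burst) * ops_per_header

print(f"OPS core router: {ops_OPS:.2e} control ops/s")
print(f"OBS core router: {ops_OBS:.2e} control ops/s")
# With 100 packets per burst, the OBS core router processes ~1% as many
# headers, which underlies the energy-consumption argument above.
```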

This advantage may be offset by the fact that an OBS network edge router is likely to be more complex than an OPS network edge router, owing to the possible need for a burst assembly/aggregation stage and a sorting stage. Consequently, energy consumption at the edge of an OBS network may be higher than in an OPS network.

See also

References

Additional reading