Cut-through switching

In computer networking, cut-through switching, also called cut-through forwarding, [1] is a method for packet switching systems wherein the switch starts forwarding a frame (or packet) before the whole frame has been received, normally as soon as the destination address and the outgoing interface have been determined. Compared to store-and-forward switching, this technique reduces latency through the switch and relies on the destination devices for error handling. Pure cut-through switching is only possible when the speed of the outgoing interface is equal to or higher than that of the incoming interface.

Adaptive switching dynamically selects between cut-through and store and forward behaviors based on current network conditions.
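The difference between the modes can be illustrated with a brief sketch (Python, with hypothetical names such as forward and egress_tx; this is an illustration, not the logic of any real switch): a store-and-forward port buffers the whole frame and checks its frame check sequence before transmitting, a cut-through port begins transmitting as soon as the destination lookup has produced an egress interface, and an adaptive port falls back to store-and-forward when the observed error rate is too high.

    import zlib

    ERROR_THRESHOLD = 0.01   # adaptive mode reverts to store-and-forward above this rate

    def fcs_ok(frame: bytes) -> bool:
        # Ethernet frames carry a CRC-32 frame check sequence in the last 4 bytes.
        return zlib.crc32(frame[:-4]).to_bytes(4, "little") == frame[-4:]

    def forward(header: bytes, rest, egress_tx, mode: str, error_rate: float = 0.0):
        # header: leading bytes containing the destination address
        # rest: iterator yielding the remainder of the frame as it arrives
        # egress_tx: callable that transmits bytes on the outgoing interface
        if mode == "adaptive":
            mode = "store-and-forward" if error_rate > ERROR_THRESHOLD else "cut-through"

        if mode == "cut-through":
            egress_tx(header)              # start transmitting before the frame is complete
            for chunk in rest:
                egress_tx(chunk)           # a bad FCS is only detected by the receiver
        else:
            frame = header + b"".join(rest)    # buffer the whole frame first
            if fcs_ok(frame):                  # corrupted frames are dropped at the switch
                egress_tx(frame)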

Cut-through switching is closely associated with wormhole switching. [2] [3]

Use in Ethernet

When cut-through switching is used in Ethernet, the switch is not able to verify the integrity of an incoming frame before forwarding it.

The technology was developed by Kalpana, the company that introduced the first Ethernet switch. [4]

The primary advantage of cut-through Ethernet switches, compared to store-and-forward Ethernet switches, is lower latency. [1] Cut-through Ethernet switches can support an end-to-end network latency of about ten microseconds. End-to-end application latencies below 3 microseconds require specialized hardware such as InfiniBand. [1]
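The gap comes largely from serialization delay: a store-and-forward hop must receive the entire frame before it can begin transmitting, while a cut-through hop only needs the leading bytes that carry the destination address. A rough back-of-the-envelope calculation with illustrative figures:

    LINK_RATE    = 10e9     # 10 Gbit/s link
    FRAME_BYTES  = 1518     # full-sized Ethernet frame
    HEADER_BYTES = 14       # destination MAC lies within the first 14 bytes

    store_and_forward = FRAME_BYTES * 8 / LINK_RATE    # ~1.2 microseconds per hop
    cut_through       = HEADER_BYTES * 8 / LINK_RATE   # ~11 nanoseconds per hop

    print(f"store-and-forward: {store_and_forward * 1e6:.2f} us")
    print(f"cut-through:       {cut_through * 1e9:.1f} ns")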

A cut-through switch will forward corrupted frames, whereas a store-and-forward switch will drop them. [5] Fragment-free switching is a variation on cut-through switching that partially addresses this problem by ensuring that collision fragments are not forwarded. The switch holds the frame until the first 64 bytes have been read from the source, so that a collision can be detected before forwarding begins. This is only useful if there is a chance of a collision on the source port. [6]

The rationale is that frames damaged by collisions are usually shorter than the minimum valid Ethernet frame size of 64 bytes. With a fragment-free buffer, the first 64 bytes of each frame are enough to update the source MAC address table entry if necessary, read the destination MAC address, and begin forwarding the frame. If the frame is shorter than 64 bytes, it is discarded. Frames smaller than 64 bytes are called runts, which is why fragment-free switching is sometimes called "runt-less" switching. Because the switch only ever buffers 64 bytes of each frame, fragment-free mode is faster than store-and-forward, but there is still a risk of forwarding bad frames. [7]
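As a sketch of the fragment-free rule (hypothetical names; not vendor code), the switch buffers only the first 64 bytes, discards anything shorter as a runt, and otherwise begins forwarding:

    MIN_FRAME = 64   # minimum valid Ethernet frame size in bytes

    def fragment_free_forward(ingress, egress_tx):
        # ingress: iterator yielding frame bytes as they arrive on the source port
        # egress_tx: callable that transmits bytes on the outgoing interface
        ingress = iter(ingress)
        buffer = bytearray()
        for chunk in ingress:
            buffer.extend(chunk)
            if len(buffer) >= MIN_FRAME:
                break
        if len(buffer) < MIN_FRAME:
            return                       # runt (likely a collision fragment): discard
        egress_tx(bytes(buffer))         # the first 64 bytes arrived intact: begin forwarding
        for chunk in ingress:
            egress_tx(chunk)             # the rest of the frame is cut through as it arrives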

There are certain scenarios that force a cut-through Ethernet switch to buffer the entire frame, acting like a store-and-forward Ethernet switch for that frame, for example when the outgoing port is busy transmitting another frame or when the outgoing link is slower than the incoming link.

Use in Fibre Channel

Cut-through switching is the dominant switching architecture in Fibre Channel due to the low-latency performance required for SCSI traffic. Brocade has implemented cut-through switching in its Fibre Channel ASICs since the 1990s, and it has been deployed in tens of millions of ports in production SANs worldwide. CRC errors are detected by a cut-through switch and indicated by marking the corrupted frame's EOF field as "invalid". The destination device (host or storage) sees the invalid EOF and discards the frame before passing it to the application or LUN. Discarding corrupted frames at the destination device is a 100% reliable method for error handling and is mandated by the Fibre Channel standards developed by Technical Committee T11. Discarding corrupted frames at the destination device also minimizes the time needed to recover from a bad frame: as soon as the destination device receives the EOF marked "invalid", recovery of the corrupted frame can begin. With store and forward, the corrupted frame is discarded at the switch, forcing a SCSI timeout and a SCSI retry for recovery, which can result in delays of tens of seconds.[citation needed]
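The error-signalling path described above can be sketched roughly as follows (a simplified illustration with placeholder delimiter encodings; the real EOF delimiters are ordered sets defined by the T11 standards, and the actual logic lives in the switch ASIC):

    EOF_NORMAL  = b"EOFn"    # placeholder encodings; real delimiters are ordered
    EOF_INVALID = b"EOFni"   # sets defined by the Fibre Channel standards

    def relay_frame(body_chunks, crc_ok, egress_tx):
        # body_chunks: iterator yielding the frame body as it arrives
        # crc_ok: callable evaluated once the whole body has passed the CRC checker
        # egress_tx: callable that transmits bytes on the outgoing interface
        for chunk in body_chunks:
            egress_tx(chunk)             # the body is already committed downstream
        egress_tx(EOF_NORMAL if crc_ok() else EOF_INVALID)
        # on an invalid EOF the destination device discards the frame and recovery begins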

Use in ATM

Cut-through switching was one of the important features of IP networks built over ATM networks, since the edge routers of the ATM network could use cell switching through the core of the network with low latency at all points. With higher-speed links, this has become less of a problem, since packet latency has become much smaller.[citation needed]

Use in InfiniBand

Cut-through switching is very popular in InfiniBand networks, since these are often deployed in environments where latency is a prime concern, such as supercomputer clusters.[ citation needed ]

Use in SMTP

A closely allied concept is offered [8] by the Exim mail transfer agent. When operating as a forwarder, it can open the onward connection to the destination while the source connection is still open. This permits a data-time rejection (due, for example, to content scanning) by the target MTA to be reported to the source MTA within the SMTP connection, rather than via the traditional bounce message that the more usual store-and-forward operation necessitates.[citation needed]
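In recent Exim versions this behaviour is requested per recipient from the RCPT ACL. The fragment below is a minimal, unverified sketch of such a configuration; the exact option names and caveats should be checked against the Exim specification [8]:

    # RCPT-time ACL fragment: verify the recipient, then request cut-through
    # delivery so the onward connection is opened while the source is still connected.
    acl_smtp_rcpt = acl_check_rcpt

    begin acl

    acl_check_rcpt:
      accept  verify  = recipient
              control = cutthrough_delivery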

Use in Bitcoin

Cut-through switching has been applied to reduce the latency of block relay in Bitcoin. [9] Low latency is critical for Bitcoin miners to reduce the rate at which their blocks are orphaned.

References

  1. Cisco. "Cut-Through and Store-and-Forward Ethernet Switching for Low-Latency Environments". https://www.cisco.com/c/en/us/products/collateral/switches/nexus-5020-switch/white_paper_c11-465436.html
  2. Stefan Haas. "The IEEE 1355 Standard: Developments, Performance and Application in High Energy Physics". 1998. p. 59.
  3. Patrick Geoffray; Torsten Hoefler. "Adaptive Routing Strategies for Modern High Performance Networks". 2008. p. 2. ISBN 978-0-7695-3380-3.
  4. "Cisco to Acquire Kalpana, Leading Ethernet Switching Company". Cisco Systems, Inc. Archived from the original on 2010-02-07.
  5. "Cut-Through and Store-and-Forward Ethernet Switching for Low-Latency Environments". Cisco. Retrieved 2011-11-10.
  6. "Switches - What Are Forwarding Modes and How Do They Work?". Archived from the original on 2014-04-19. Retrieved 2011-08-13.
  7. "Switching – Store and forward, Cut-through and Fragment free". Archived from the original on 2013-11-11. Retrieved 2013-11-11.
  8. "Specification of the Exim Mail Transfer Agent" . Retrieved 2015-01-24.
  9. "Falcon Network" . Retrieved 2016-06-27.