Software-defined networking (SDN) is an approach to network management that uses abstraction to enable dynamic and programmatically efficient network configuration, creating grouping and segmentation while improving network performance and monitoring in a manner more akin to cloud computing than to traditional network management. [1] SDN is meant to improve on the static architecture of traditional networks and may be employed to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane). [2] The control plane consists of one or more controllers, which act as the brain of the SDN network and in which the network intelligence is concentrated. However, centralization has certain drawbacks related to security, [1] scalability and elasticity. [1] [3]
Since OpenFlow's emergence in 2011, SDN was commonly associated with the OpenFlow protocol, which enables remote communication with network-plane elements to determine the path of network packets across network switches. Since 2012, however, proprietary systems have also used the term. [4] [5] These include Cisco Systems' Open Network Environment and Nicira's network virtualization platform.
SD-WAN applies similar technology to a wide area network (WAN). [6]
The history of SDN principles can be traced back to the separation of the control and data plane first used in public switched telephone networks.[ citation needed ] This provided a manner of simplifying provisioning and management years before the architecture was used in data networks.
The Internet Engineering Task Force (IETF) began considering various ways to decouple the control and data forwarding functions in a proposed interface standard published in 2004 named Forwarding and Control Element Separation (ForCES). [7] The ForCES Working Group also proposed a companion SoftRouter architecture. [8] Additional early standards from the IETF that pursued separating control from data include the Linux Netlink as an IP services protocol [9] and a path computation element (PCE)-based architecture. [10]
These early attempts failed to gain traction. One reason is that many in the Internet community viewed separating control from data to be risky, especially given the potential for failure in the control plane. Another reason is that vendors were concerned that creating standard application programming interfaces (APIs) between the control and data planes would result in increased competition.
The use of open-source software in these separated architectures traces its roots to the Ethane project at Stanford's computer science department. Ethane's simple switch design led to the creation of OpenFlow, [11] and an API for OpenFlow was first created in 2008. [12] In that same year, NOX, an operating system for networks, was created. [13]
Several patent applications were filed by independent researchers in 2007 describing practical applications for SDN, [14] operating system for networks, [15] network infrastructure compute units as a multi-core CPU [16] and a method for virtual-network segmentation based on functionality. [17] These applications became public in 2009 and have since been abandoned.
SDN research included emulators such as vSDNEmul, [18] EstiNet, [19] and Mininet. [20]
Work on OpenFlow continued at Stanford, including with the creation of testbeds to evaluate the use of the protocol in a single campus network, as well as across the WAN as a backbone for connecting multiple campuses. [21] In academic settings, there were several research and production networks based on OpenFlow switches from NEC and Hewlett-Packard, as well as those based on Quanta Computer whiteboxes starting in about 2009. [22] [ failed verification ]
Beyond academia, the first deployments were by Nicira in 2010 to control OVS from Onix, codeveloped with NTT and Google. A notable deployment was Google's B4 in 2012. [23] [24] Later, Google announced the first OpenFlow/Onix deployments in its datacenters. [25] Another large deployment exists at China Mobile. [26]
The Open Networking Foundation was founded in 2011 to promote SDN and OpenFlow.
At the 2014 Interop and Tech Field Day, software-defined networking was demonstrated by Avaya using shortest-path bridging (IEEE 802.1aq) and OpenStack as an automated campus, extending automation from the data center to the end device and removing manual provisioning from service delivery. [27] [28]
SDN architectures decouple network control (control plane) and forwarding (data plane) functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services. [29]
The OpenFlow protocol can be used in SDN technologies. The SDN architecture is typically described as directly programmable, agile, centrally managed, programmatically configured, and based on open standards while remaining vendor-neutral.
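The match/action abstraction underlying OpenFlow-style programmability can be illustrated with a short sketch. This is a minimal, self-contained model, not the OpenFlow protocol or any particular controller's API; the class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """An OpenFlow-style flow entry: a priority, a match, and a list of actions."""
    priority: int
    match: dict          # e.g. {"eth_type": 0x0800, "ipv4_dst": "10.0.0.2"}
    actions: list        # e.g. ["output:2"] or ["drop"]

class DataPlaneSwitch:
    """Minimal model of a switch data plane that matches packets against a flow table."""
    def __init__(self):
        self.flow_table: list[FlowEntry] = []

    def install(self, entry: FlowEntry) -> None:
        # The control plane (controller) would call this over the southbound interface.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: e.priority, reverse=True)

    def forward(self, packet: dict) -> list:
        # Highest-priority entry whose match fields all appear in the packet wins.
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]   # table miss

# A hypothetical controller decision pushed down to the switch:
sw = DataPlaneSwitch()
sw.install(FlowEntry(priority=100,
                     match={"eth_type": 0x0800, "ipv4_dst": "10.0.0.2"},
                     actions=["output:2"]))
print(sw.forward({"eth_type": 0x0800, "ipv4_src": "10.0.0.1", "ipv4_dst": "10.0.0.2"}))
```

The point of the sketch is the separation of concerns: the switch only matches and forwards, while all decision logic lives in whatever entity calls install().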
The explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to re-examine traditional network architectures. [31] Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture may be ill-suited to the dynamic computing and storage needs of today's enterprise data centers, campuses, and carrier environments. [32] Key computing trends driving the need for a new network paradigm include changing traffic patterns, the consumerization of IT, the rise of cloud services, and the bandwidth demands of big data.
The architectural components are typically described as follows: [36] SDN applications, which communicate their network requirements to the controller via northbound interfaces (NBIs); the SDN controller, a logically centralized entity that translates application requirements down to the SDN datapaths and provides applications with an abstract view of the network; SDN datapaths, logical network devices that expose visibility into and control over their forwarding and data-processing capabilities; the SDN control-to-data-plane interface (CDPI), defined between the controller and the datapath, which provides programmatic control of forwarding operations, capability advertisement, statistics reporting, and event notification; and the SDN northbound interfaces (NBIs), which sit between SDN applications and controllers.
The implementation of the SDN control plane can follow a centralized, hierarchical, or decentralized design. Initial SDN control plane proposals focused on a centralized solution, where a single control entity has a global view of the network. While this simplifies the implementation of the control logic, it has scalability limitations as the size and dynamics of the network increase. To overcome these limitations, several approaches have been proposed in the literature that fall into two categories, hierarchical and fully distributed approaches. In hierarchical solutions, [37] [38] distributed controllers operate on a partitioned network view, while decisions that require network-wide knowledge are taken by a logically centralized root controller. In distributed approaches, [39] [40] controllers operate on their local view or they may exchange synchronization messages to enhance their knowledge. Distributed solutions are more suitable for supporting adaptive SDN applications.
A key issue when designing a distributed SDN control plane is to decide on the number and placement of control entities. An important parameter to consider while doing so is the propagation delay between the controllers and the network devices, [41] especially in the context of large networks. Other objectives that have been considered involve control path reliability, [42] fault tolerance, [43] and application requirements. [44]
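One way to frame the controller placement decision is as a facility-location problem: choose a number of controller sites so that the average (or worst-case) switch-to-controller propagation delay is minimized. The sketch below, with made-up node names and delays, only illustrates that formulation under these assumptions; it does not reproduce any published placement algorithm.

```python
import itertools

# Hypothetical pairwise propagation delays between switch sites, in milliseconds.
delay = {
    ("A", "A"): 0, ("A", "B"): 5, ("A", "C"): 9, ("A", "D"): 14,
    ("B", "B"): 0, ("B", "C"): 4, ("B", "D"): 9,
    ("C", "C"): 0, ("C", "D"): 5,
    ("D", "D"): 0,
}

def d(u, v):
    # Delays are symmetric; look the pair up in either order.
    return delay.get((u, v), delay.get((v, u)))

nodes = ["A", "B", "C", "D"]

def avg_latency(controllers):
    # Each switch is served by its nearest controller.
    return sum(min(d(n, c) for c in controllers) for n in nodes) / len(nodes)

def best_placement(k):
    # Exhaustive search is fine for a toy topology; real networks need heuristics.
    return min(itertools.combinations(nodes, k), key=avg_latency)

placement = best_placement(2)
print(placement, avg_latency(placement))
```

Other objectives from the literature (control path reliability, fault tolerance, application requirements) would change the cost function being minimized, not the overall structure of the search.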
In SDN, the data plane is responsible for processing data-carrying packets using a set of rules specified by the control plane. The data plane may be implemented in physical hardware switches or in software implementations, such as Open vSwitch. The memory capacity of hardware switches may limit the number of rules that can be stored, whereas software implementations may have higher capacity. [45]
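The capacity limit matters in practice because something has to happen when a hardware table fills up. The sketch below is a toy model, with an arbitrary capacity figure, of a bounded flow table that evicts its least recently used rule when a TCAM-like limit is reached; it is not how any particular switch manages its tables.

```python
from collections import OrderedDict

class BoundedFlowTable:
    """Toy model of a hardware flow table with limited rule capacity (LRU eviction)."""
    def __init__(self, capacity=2000):      # the capacity is illustrative, not a real device limit
        self.capacity = capacity
        self.rules = OrderedDict()           # match key -> actions

    def install(self, match: tuple, actions: list) -> None:
        if match in self.rules:
            self.rules.move_to_end(match)    # refresh recency for an existing rule
        elif len(self.rules) >= self.capacity:
            self.rules.popitem(last=False)   # evict the least recently used rule
        self.rules[match] = actions

    def lookup(self, match: tuple):
        actions = self.rules.get(match)
        if actions is not None:
            self.rules.move_to_end(match)    # hits keep a rule "warm"
        return actions

table = BoundedFlowTable(capacity=2)
table.install(("10.0.0.1", "10.0.0.2"), ["output:1"])
table.install(("10.0.0.1", "10.0.0.3"), ["output:2"])
table.install(("10.0.0.1", "10.0.0.4"), ["output:3"])   # evicts the first rule
print(table.lookup(("10.0.0.1", "10.0.0.2")))            # None: rule was evicted
```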
The location of the SDN data plane and agent can be used to classify SDN implementations:
Flow table entries may be populated in a proactive, reactive, or hybrid fashion. [49] [50] In the proactive mode, the controller populates flow table entries for all possible traffic matches for the switch in advance. This mode can be compared with typical routing table entries today, where all static entries are installed ahead of time; no request is sent to the controller, since all incoming flows find a matching entry. A major advantage of the proactive mode is that all packets are forwarded at line rate (assuming all flow table entries fit in TCAM) and no delay is added. In the reactive mode, entries are populated on demand: if a packet arrives without a corresponding match rule in the flow table, the SDN agent sends a request to the controller for further instruction. The controller examines the request and provides instructions, installing a rule in the flow table for the corresponding packet if necessary. The hybrid mode uses the low-latency proactive forwarding mode for a portion of traffic while relying on the flexibility of reactive processing for the remaining traffic.
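A toy simulation of the two basic population modes is sketched below. The controller policy, names, and addresses are invented for illustration and do not reflect any specific controller implementation.

```python
class Controller:
    """Decides which output port a destination should use (toy policy)."""
    def __init__(self, routes: dict):
        self.routes = routes                 # destination IP -> output port

    def proactive_install(self, switch) -> None:
        # Proactive mode: push every known rule before any traffic arrives.
        for dst, port in self.routes.items():
            switch.flow_table[dst] = port

    def handle_miss(self, switch, dst):
        # Reactive mode: resolve a table miss on demand and install the rule.
        port = self.routes.get(dst, "drop")
        switch.flow_table[dst] = port
        return port

class Switch:
    def __init__(self, controller):
        self.flow_table = {}                 # destination IP -> output port
        self.controller = controller

    def forward(self, dst):
        if dst in self.flow_table:           # hit: forwarded at line rate, no controller delay
            return self.flow_table[dst]
        return self.controller.handle_miss(self, dst)   # miss: ask the controller

ctrl = Controller({"10.0.0.2": 2, "10.0.0.3": 3})
sw = Switch(ctrl)
ctrl.proactive_install(sw)                   # proactive: all entries pre-installed
print(sw.forward("10.0.0.2"))                # table hit
print(sw.forward("10.0.0.9"))                # table miss handled reactively
```

A hybrid deployment would simply pre-install rules for the traffic classes where latency matters most and leave the rest to the miss path.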
Software-defined mobile networking (SDMN) [51] [52] is an approach to the design of mobile networks in which all protocol-specific features are implemented in software, maximizing the use of generic and commodity hardware and software in both the core network and the radio access network. [53] It is proposed as an extension of the SDN paradigm to incorporate mobile-network-specific functionality. [54] Since 3GPP Release 14, control and user plane separation has been part of the mobile core network architecture, using the PFCP protocol.
An SD-WAN is a WAN managed using the principles of software-defined networking. [55] The main driver of SD-WAN is to lower WAN costs by using more affordable and commercially available leased lines as an alternative to, or partial replacement of, more expensive MPLS lines. Control and management are administered separately from the hardware, with central controllers allowing for easier configuration and administration. [56]
An SD-LAN is a local area network (LAN) built around the principles of software-defined networking, though there are key differences in topology, network security, application visibility and control, management, and quality of service. [57] SD-LAN decouples the control, management, and data planes to enable a policy-driven architecture for wired and wireless LANs. SD-LANs are characterized by their use of a cloud management system and wireless connectivity without the presence of a physical controller. [58]
SDN architecture may enable, facilitate or enhance network-related security applications due to the controller's central view of the network and its capacity to reprogram the data plane at any time. While the security of the SDN architecture itself remains an open question that has been studied repeatedly in the research community, [59] [60] [61] [62] the following paragraphs focus only on security applications made possible or revisited using SDN.
Several research works on SDN have investigated security applications built upon the SDN controller, with different aims in mind. Distributed denial of service (DDoS) detection and mitigation, [63] [64] as well as botnet [65] and worm propagation detection, [66] are some concrete use cases of such applications: the idea is to periodically collect network statistics from the forwarding plane in a standardized manner (e.g. using OpenFlow), and then apply classification algorithms to those statistics in order to detect network anomalies. If an anomaly is detected, the application instructs the controller how to reprogram the data plane in order to mitigate it.
Another kind of security application leverages the SDN controller to implement moving target defense (MTD) algorithms. MTD algorithms are typically used to make an attack on a given system or network more difficult by periodically hiding or changing key properties of that system or network. In traditional networks, implementing MTD algorithms is not trivial, since it is difficult to build a central authority capable of determining, for each part of the system to be protected, which key properties should be hidden or changed. In an SDN network, such tasks become more straightforward thanks to the centrality of the controller. One application can, for example, periodically assign virtual IPs to hosts within the network, with the mapping between virtual and real IPs then performed by the controller. [67] Another application can simulate fake open, closed, or filtered ports on random hosts in the network in order to add significant noise during the reconnaissance phase (e.g. scanning) performed by an attacker. [68]
Additional security value in SDN-enabled networks can be gained using FlowVisor [69] and FlowChecker. [70] FlowVisor allows multiple separated logical networks to share a single hardware forwarding plane. Following this approach, the same hardware resources can be used for production and development purposes, as well as for separating monitoring, configuration, and Internet traffic; each scenario can have its own logical topology, called a slice. In conjunction with this approach, FlowChecker [70] validates new OpenFlow rules that users deploy within their own slices.
SDN controller applications are mostly deployed in large-scale scenarios, which requires comprehensive checks for possible programming errors. A system for doing this, called NICE, was described in 2012. [71] Introducing an overarching security architecture requires a comprehensive and protracted approach to SDN. Since its introduction, designers have been looking at ways to secure SDN that do not compromise scalability. One proposed architecture is SN-SECA (SDN+NFV Security Architecture). [72]
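A minimal sketch of that collect-classify-mitigate loop is given below, assuming flow statistics have already been gathered as per-source packet counters. The threshold, the counters, and the generated drop rules are illustrative assumptions rather than the behavior of any cited system, which would typically use far more sophisticated classifiers.

```python
def detect_and_mitigate(flow_stats: dict, interval_s: float,
                        pps_threshold: float = 10_000.0):
    """flow_stats maps source IP -> packets observed in the last polling interval."""
    drop_rules = []
    for src, pkt_count in flow_stats.items():
        rate = pkt_count / interval_s
        if rate > pps_threshold:             # crude anomaly test; real systems apply classifiers
            drop_rules.append({"match": {"ipv4_src": src},
                               "actions": ["drop"],
                               "priority": 200})
    return drop_rules                        # the controller would push these to the data plane

# Example statistics for one polling interval (values are made up).
stats = {"192.0.2.7": 1_200_000, "198.51.100.4": 340}
for rule in detect_and_mitigate(stats, interval_s=10.0):
    print("install", rule)
```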
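The virtual-IP idea can be sketched as follows, with purely illustrative address pools: the controller periodically reassigns each host a fresh virtual IP drawn from a pool, and the data plane rewrites addresses between the virtual and real spaces so that external peers never learn the stable real addresses.

```python
import random
import ipaddress

class VirtualIPMapper:
    """Toy moving target defense: rotate host-visible virtual IPs over an address pool."""
    def __init__(self, real_hosts, virtual_pool_cidr="203.0.113.0/24"):
        self.real_hosts = list(real_hosts)
        self.pool = [str(ip) for ip in ipaddress.ip_network(virtual_pool_cidr).hosts()]
        self.v2r = {}                        # virtual IP -> real IP
        self.rotate()

    def rotate(self):
        # Called periodically by the controller; old virtual IPs stop resolving.
        chosen = random.sample(self.pool, len(self.real_hosts))
        self.v2r = dict(zip(chosen, self.real_hosts))

    def translate_inbound(self, virtual_ip):
        # The data plane would rewrite the destination address accordingly.
        return self.v2r.get(virtual_ip)      # None -> unknown or expired mapping

mapper = VirtualIPMapper(["10.0.0.2", "10.0.0.3"])
print(mapper.v2r)
mapper.rotate()
print(mapper.v2r)                            # new mapping after rotation
```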
Distributed applications that run across datacenters usually replicate data for the purpose of synchronization, fault resiliency, load balancing and getting data closer to users (which reduces latency to users and increases their perceived throughput). Also, many applications, such as Hadoop, replicate data within a datacenter across multiple racks to increase fault tolerance and make data recovery easier. All of these operations require data delivery from one machine or datacenter to multiple machines or datacenters. The process of reliably delivering data from one machine to multiple machines is referred to as Reliable Group Data Delivery (RGDD).
SDN switches can be used for RGDD via installation of rules that allow forwarding to multiple outgoing ports. For example, OpenFlow has supported group tables since version 1.1, [73] which makes this possible. Using SDN, a central controller can carefully and intelligently set up forwarding trees for RGDD. Such trees can be built while paying attention to network congestion and load status to improve performance. For example, MCTCP [74] is a scheme for delivery to many nodes inside datacenters that relies on the regular and structured topologies of datacenter networks, while DCCast [75] and QuickCast [76] are approaches for fast and efficient data and content replication across datacenters over private WANs.
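The essence of such a forwarding tree is computing, per switch, the set of output ports needed to reach all receivers, and installing one group-table-like entry per switch. The sketch below uses a hypothetical topology and a plain shortest-path (BFS) tree; it is only an illustration of the idea, not of MCTCP, DCCast, or QuickCast.

```python
from collections import deque, defaultdict

# Hypothetical topology: node -> {neighbor: output port toward that neighbor}.
topo = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "h2": 2},
    "s3": {"s1": 1, "h3": 2},
}

def shortest_path_tree(src):
    """BFS tree over the topology rooted at the sender's switch."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in topo.get(u, {}):
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def group_entries(src, receivers):
    """Per-switch set of output ports needed to reach all receivers (group-table style)."""
    parent = shortest_path_tree(src)
    ports = defaultdict(set)
    for r in receivers:
        node = r
        while parent.get(node) is not None:
            up = parent[node]
            if up in topo:                   # only switches get group entries
                ports[up].add(topo[up][node])
            node = up
    return dict(ports)

print(group_entries("s1", ["h2", "h3"]))     # e.g. {'s2': {2}, 's1': {1, 2}, 's3': {2}}
```

A congestion-aware scheme would replace the BFS tree with one chosen from current load measurements, which is exactly where the controller's global view helps.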
Network function virtualization (NFV) is a concept that complements SDN; NFV is not dependent on SDN or SDN concepts. NFV separates software from hardware to enable flexible network deployment and dynamic operation. NFV deployments typically use commodity servers to run software versions of network services that previously were hardware-based. These software-based services running in an NFV environment are called virtual network functions (VNFs). [77] Hybrid SDN-NFV approaches have been proposed to provide highly efficient, elastic, and scalable capabilities, aiming to accelerate service innovation and provisioning using standard IT virtualization technologies. [77] [78] SDN provides the agility of controlling generic forwarding devices, such as routers and switches, by using SDN controllers, while NFV provides agility for network applications by using virtualized servers. It is entirely possible to implement a virtualized network function (VNF) as a standalone entity using existing networking and orchestration paradigms. However, there are inherent benefits in leveraging SDN concepts to implement and manage an NFV infrastructure, particularly regarding the management and orchestration of VNFs, which is why multivendor platforms are being defined that incorporate SDN and NFV in concerted ecosystems. [79]
Deep packet inspection (DPI) provides the network with application awareness, while SDN provides applications with network awareness. [80] Although SDN may radically change generic network architectures, it should be able to work alongside traditional network architectures to offer high interoperability. A new SDN-based network architecture should take into account all the capabilities currently provided by separate devices or software other than the main forwarding devices (routers and switches), such as DPI and security appliances. [81]
When using an SDN-based model for transmitting multimedia traffic, an important aspect to take into account is QoE estimation. To estimate QoE, the traffic must first be classified, and it is then recommended that the system be able to resolve critical problems on its own by analyzing the traffic. [82] [83]
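As a toy illustration of the first step (traffic classification), the sketch below tags flows by transport port and maps the resulting class to a handling hint a controller might apply. The port-to-class mapping and the per-class thresholds are assumptions made only for the example; real deployments would rely on DPI or statistical and machine-learning classifiers.

```python
# Hypothetical port-to-application-class mapping, used only for illustration.
PORT_CLASSES = {
    443: "web",
    1935: "streaming",      # RTMP
    5060: "voip",           # SIP signalling
}

def classify_flow(flow: dict) -> str:
    """Very rough classification by destination port; DPI or ML would refine this."""
    return PORT_CLASSES.get(flow.get("dst_port"), "best-effort")

def qoe_hint(flow_class: str) -> dict:
    # Per-class handling the controller might apply (thresholds are illustrative).
    policy = {
        "voip":      {"max_delay_ms": 150, "priority": "high"},
        "streaming": {"min_bandwidth_mbps": 5, "priority": "medium"},
    }
    return policy.get(flow_class, {"priority": "low"})

flow = {"src": "10.0.0.5", "dst": "203.0.113.9", "dst_port": 5060}
cls = classify_flow(flow)
print(cls, qoe_hint(cls))
```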
A router is a computer and networking device that forwards data packets between computer networks, including internetworks such as the global Internet.
A telecommunications network is a group of nodes interconnected by telecommunications links that are used to exchange messages between the nodes. The links may use a variety of technologies based on the methodologies of circuit switching, message switching, or packet switching, to pass messages and signals.
A content delivery network or content distribution network (CDN) is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance ("speed") by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the Internet content today, including web objects, downloadable objects, applications, live streaming media, on-demand streaming media, and social media sites.
A network processor is an integrated circuit which has a feature set specifically targeted at the networking application domain.
WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated at $1 billion and was expected to grow to $4.4 billion by 2014, according to Gartner, a technology research firm. In 2015, Gartner estimated the WAN optimization market to be a $1.1 billion market.
In network routing, the control plane is the part of the router architecture that is concerned with establishing the network topology, or the information in a routing table that defines what to do with incoming packets. Control plane functions, such as participating in routing protocols, run in the architectural control element. In most cases, the routing table contains a list of destination addresses and the outgoing interface(s) associated with each. Control plane logic also can identify certain packets to be discarded, as well as preferential treatment of certain packets for which a high quality of service is defined by such mechanisms as differentiated services.
In routing, the data plane, sometimes called the forwarding plane or user plane, defines the part of the router architecture that decides what to do with packets arriving on an inbound interface. Most commonly, it refers to a table in which the router looks up the destination address of the incoming packet and retrieves the information necessary to determine the path from the receiving element, through the internal forwarding fabric of the router, and to the proper outgoing interface(s).
In computing, network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization.
A reliable multicast is any computer networking protocol that provides a reliable sequence of packets to multiple recipients simultaneously, making it suitable for applications such as multi-receiver file transfer.
OpenFlow is a communications protocol that gives access to the forwarding plane of a network switch or router over the network.
Hewlett Packard Enterprise Networking is the networking products division of Hewlett Packard Enterprise (HPE). HPE Networking and its predecessor entities have developed and sold networking products since 1979. Currently, it offers networking and switching products for small and medium-sized businesses through its wholly owned subsidiary Aruba Networks. Prior to 2015, the entity within HP which offered networking products was called HP Networking.
Network functions virtualization (NFV) is a network architecture concept that leverages IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create and deliver communication services.
A network virtualization platform decouples the hardware plane from the software plane such that the host hardware plane can be administratively programmed to assign its resources to the software plane. This allows for the virtualization of CPU, memory, disk and most importantly network IO. Upon such virtualization of hardware resources, the platform can accommodate multiple virtual network applications such as firewalls, routers, Web filters, and intrusion prevention systems, all functioning much like standalone hardware appliances, but contained within a single hardware appliance. The key benefit to such technology is doing all of this while maintaining the network performance typically seen with that of standalone network appliances as well as enabling the ability to administratively or dynamically program resources at will.
Distributed Overlay Virtual Ethernet (DOVE) is a tunneling and virtualization technology for computer networks, created and backed by IBM. DOVE allows creation of network virtualization layers for deploying, controlling, and managing multiple independent and isolated network applications over a shared physical network infrastructure.
Open vSwitch (OVS) is an open-source implementation of a distributed virtual multilayer switch. The main purpose of Open vSwitch is to provide a switching stack for hardware virtualization environments, while supporting multiple protocols and standards used in computer networks.
Time-Sensitive Networking (TSN) is a set of standards under development by the Time-Sensitive Networking task group of the IEEE 802.1 working group. The TSN task group was formed in November 2012 by renaming the existing Audio Video Bridging Task Group and continuing its work. The name changed as a result of the extension of the working area of the standardization group. The standards define mechanisms for the time-sensitive transmission of data over deterministic Ethernet networks.
Albert Greenberg is an American software engineer and computer scientist who is notable for his contributions to the design of operating carrier and datacenter networks as well as to advances in computer networking and cloud computing. He currently serves as Vice President of Platform Engineering at Uber.
A Software-Defined Wide Area Network (SD-WAN) is a wide area network that uses software-defined networking technology, such as communicating over the Internet using overlay tunnels which are encrypted when destined for internal organization locations.
5G network slicing is a network architecture that enables the multiplexing of virtualized and independent logical networks on the same physical network infrastructure. Each network slice is an isolated end-to-end network tailored to fulfill diverse requirements requested by a particular application.