I/O virtualization

In virtualization, input/output virtualization (I/O virtualization) is a methodology to simplify management, lower costs, and improve the performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper-layer protocols from the physical connections. [1]

The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs). [2] Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards.
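
As an illustrative sketch of the same idea in commodity hardware (not drawn from the cited sources), Linux exposes SR-IOV through sysfs: a physical adapter, the physical function, can be asked to present additional virtual functions, each of which appears to the operating system as a separate NIC. The interface name eth0 and the requested VF count below are assumptions.

    # Sketch: enable SR-IOV virtual functions on a Linux physical adapter.
    # Assumes an SR-IOV-capable NIC named "eth0"; requires root privileges,
    # and if VFs are already enabled, 0 must be written first.
    from pathlib import Path

    PF = "eth0"  # hypothetical physical function (the physical adapter)
    dev = Path(f"/sys/class/net/{PF}/device")

    total_vfs = int((dev / "sriov_totalvfs").read_text())   # VFs the hardware can expose
    print(f"{PF} supports up to {total_vfs} virtual functions")

    wanted = min(4, total_vfs)                               # ask for up to four vNICs
    (dev / "sriov_numvfs").write_text(str(wanted))           # kernel creates the VFs

    # Each VF now shows up as its own PCI device / network interface,
    # and can be assigned or passed through to a virtual machine.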

In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. That cable (or commonly two cables for redundancy) connects to an external device, which then provides connections to the data center networks. [2]

Background

Server I/O is a critical component to successful and effective server deployments, particularly with virtualized servers. To accommodate multiple applications, virtualized servers demand more network bandwidth and connections to more networks and storage. According to a survey, 75% of virtualized servers require 7 or more I/O connections per device, and are likely to require more frequent I/O reconfigurations. [3]

In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or fewer. But it was found that a server could safely run seven or more applications, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilization of non-virtualized servers.

However, increased utilization created by virtualization placed a significant strain on the server’s I/O capacity. Network traffic, storage traffic, and inter-server communications combine to impose increased loads that may overwhelm the server's channels, leading to backlogs and idle CPUs as they wait for data. [4]

Virtual I/O addresses performance bottlenecks by consolidating I/O to a single connection whose bandwidth ideally exceeds the I/O capacity of the server itself, thereby ensuring that the I/O link itself is not a bottleneck. That bandwidth is then dynamically allocated in real time across multiple virtual connections to both storage and network resources. In I/O intensive applications, this approach can help increase both VM performance and the potential number of VMs per server. [2]
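
As a minimal sketch of that allocation idea (the link capacity, connection names, and weights are invented for illustration), the shared link's bandwidth can be divided among the virtual connections in proportion to configured weights, and the shares recomputed whenever the configuration changes:

    # Sketch: proportional (weighted) sharing of one consolidated I/O link.
    # All names and numbers are illustrative, not taken from a real product.

    def allocate_bandwidth(link_gbps: float, weights: dict[str, float]) -> dict[str, float]:
        """Split the link capacity across virtual connections by weight."""
        total = sum(weights.values())
        return {name: link_gbps * w / total for name, w in weights.items()}

    # One 40 Gb/s uplink shared by two vNICs and one vHBA:
    shares = allocate_bandwidth(40.0, {"vnic-lan": 2.0, "vnic-vmotion": 1.0, "vhba-san": 1.0})
    print(shares)   # {'vnic-lan': 20.0, 'vnic-vmotion': 10.0, 'vhba-san': 10.0}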

Virtual I/O systems that include quality of service (QoS) controls can also regulate I/O bandwidth to specific virtual machines, thus ensuring predictable performance for critical applications. QoS thus increases the applicability of server virtualization for both production server and end-user applications. [4]
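
The rate limiting behind such QoS controls is commonly modeled as a token bucket; the sketch below is a generic illustration of that mechanism (the rate and burst figures are invented), not the implementation of any particular product.

    # Sketch: token-bucket rate limiter, the usual model for per-VM I/O bandwidth caps.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
            self.rate = rate_bytes_per_s      # sustained rate allowed for this VM
            self.capacity = burst_bytes       # short bursts above the rate are tolerated
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, nbytes: int) -> bool:
            """Return True if an I/O of nbytes may be sent now."""
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False   # caller should queue or delay the I/O

    # Cap a hypothetical VM at roughly 1 Gb/s (125 MB/s) with a 1 MB burst allowance:
    vm_limit = TokenBucket(rate_bytes_per_s=125_000_000, burst_bytes=1_000_000)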

Benefits

Blade server chassis enhance density by packaging many servers (and hence many I/O connections) in a small physical space. Virtual I/O consolidates all storage and network connections to a single physical interconnect, which eliminates any physical restrictions on port counts. Virtual I/O also enables software-based configuration management, which simplifies control of the I/O devices. The combination allows more I/O ports to be deployed in a given space, and facilitates the practical management of the resulting environment. [9]
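
As a hypothetical illustration of such software-based configuration management, a server's I/O can be described declaratively and applied without touching cabling; the profile format below is invented for the example and does not reflect any vendor's interface.

    # Sketch: a declarative I/O profile for one server (all fields are hypothetical).
    from dataclasses import dataclass

    @dataclass
    class VirtualAdapter:
        name: str        # e.g. "vnic0" or "vhba0"
        kind: str        # "nic" (LAN) or "hba" (SAN)
        network: str     # LAN segment or SAN fabric to attach to
        min_gbps: float  # guaranteed bandwidth
        max_gbps: float  # cap enforced by QoS

    server_profile = [
        VirtualAdapter("vnic0", "nic", "prod-lan", min_gbps=1.0, max_gbps=10.0),
        VirtualAdapter("vnic1", "nic", "backup-lan", min_gbps=0.5, max_gbps=5.0),
        VirtualAdapter("vhba0", "hba", "san-fabric-a", min_gbps=2.0, max_gbps=8.0),
    ]
    # Applying such a profile re-provisions the server's I/O entirely in software.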

Related Research Articles

Internet Small Computer Systems Interface or iSCSI is an Internet Protocol-based storage networking standard for linking data storage facilities. iSCSI provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network. It facilitates data transfers over intranets and allows storage to be managed over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet, and can enable location-independent data storage and retrieval.
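
On Linux, iSCSI sessions are typically established with the open-iscsi tools; the sketch below simply wraps the standard iscsiadm discovery and login commands, with the portal address and target IQN as placeholders.

    # Sketch: discover and log in to an iSCSI target using open-iscsi's iscsiadm.
    # The portal IP and target IQN are placeholders; requires open-iscsi and root.
    import subprocess

    PORTAL = "192.0.2.10"                     # documentation/example address
    TARGET = "iqn.2004-01.example.com:disk1"  # hypothetical target IQN

    # Discover targets exported by the portal.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)

    # Log in; the remote LUN then appears as a local block device (e.g. /dev/sdX).
    subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"], check=True)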

A virtual local area network (VLAN) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer. In this context, virtual refers to a physical object recreated and altered by additional logic, within the local area network. VLANs work by applying tags to network frames and handling these tags in networking systems – creating the appearance and functionality of network traffic that is physically on a single network but acts as if it is split between separate networks. In this way, VLANs can keep network applications separate despite being connected to the same physical network, and without requiring multiple sets of cabling and networking devices to be deployed.
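
The tag mentioned above is the 4-byte IEEE 802.1Q header inserted into an Ethernet frame; as a small illustration, the sketch below assembles one from a VLAN ID and priority.

    # Sketch: build the 4-byte IEEE 802.1Q VLAN tag that is inserted into an Ethernet frame.
    import struct

    def vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
        """Return TPID (0x8100) plus TCI for the given VLAN ID (1-4094)."""
        assert 0 < vlan_id < 4095 and 0 <= priority <= 7
        tci = (priority << 13) | (dei << 12) | vlan_id   # PCP | DEI | VID
        return struct.pack("!HH", 0x8100, tci)

    # A frame on VLAN 100 with default priority carries the tag 81 00 00 64:
    print(vlan_tag(100).hex())   # "81000064"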

Fibre Channel (FC) is a high-speed data transfer protocol providing in-order, lossless delivery of raw block data. Fibre Channel is primarily used to connect computer data storage to servers in storage area networks (SAN) in commercial data centers.

A network interface controller is a computer hardware component that connects a computer to a computer network.

In computer hardware, a host controller, host adapter, or host bus adapter (HBA) connects a computer system, which acts as the host, to other network and storage devices. The terms are primarily used to refer to devices for connecting SCSI, SAS, NVMe, Fibre Channel, and SATA devices. Devices for connecting to FireWire, USB, and other buses may also be called host controllers or host adapters.

TCP offload engine (TOE) is a technology used in some network interface cards (NIC) to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as gigabit Ethernet and 10 Gigabit Ethernet, where processing overhead of the network stack becomes significant. TOEs are often used as a way to reduce the overhead associated with Internet Protocol (IP) storage protocols such as iSCSI and Network File System (NFS).
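
Full TOE is vendor-specific and not part of the mainline Linux kernel, but the related stateless offloads a NIC advertises can be inspected with ethtool; the sketch below merely lists them (the interface name is an assumption).

    # Sketch: list offload features reported by a NIC via ethtool (stateless offloads
    # such as TCP segmentation offload - not a full TOE, which is vendor firmware).
    import subprocess

    IFACE = "eth0"  # assumed interface name
    out = subprocess.run(["ethtool", "-k", IFACE], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if "offload" in line:
            print(line.strip())   # e.g. "tcp-segmentation-offload: on"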

A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space and minimize power consumption, while still having all the functional components needed to be considered a computer. Unlike a rack-mount server, a blade server fits inside a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system, which may itself be rack-mounted. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole.

In computer networking, link aggregation is the combining of multiple network connections in parallel by any of several methods. Link aggregation increases total throughput beyond what a single connection could sustain, and provides redundancy where all but one of the physical links may fail without losing connectivity. A link aggregation group (LAG) is the combined collection of physical ports.
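
Within a LAG, frames are typically steered to a member link by hashing the flow's addresses, so that any one flow stays on one link and arrives in order; the sketch below shows a generic layer-2 hash policy of that kind (the MAC addresses are placeholders).

    # Sketch: pick a LAG member link by hashing a flow's addresses, so that
    # packets of one flow stay on one link and arrive in order.
    import zlib

    def pick_member(src_mac: str, dst_mac: str, n_links: int) -> int:
        """Generic layer-2 hash policy: same flow -> same member link."""
        key = (src_mac + dst_mac).encode()
        return zlib.crc32(key) % n_links

    # Two-port LAG: every frame between these MACs uses the same member port.
    print(pick_member("02:00:00:00:00:01", "02:00:00:00:00:02", n_links=2))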

A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.

In computer science, storage virtualization is "the process of presenting a logical view of the physical storage resources to" a host computer system, "treating all storage media in the enterprise as a single pool of storage."

In computing, Microsoft's Windows Vista and Windows Server 2008, released in 2007 and 2008 respectively, introduced a new networking stack named the Next Generation TCP/IP stack, which improves on the previous stack in several ways. The stack includes a native implementation of IPv6, as well as a complete overhaul of IPv4. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after a change in settings. The new stack, implemented as a dual-stack model, depends on a strong host model and features an infrastructure that enables more modular components to be dynamically inserted and removed.

In computing, network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization.

A storage area network (SAN) or storage network is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to access data storage devices, such as disk arrays and tape libraries from servers so that the devices appear to the operating system as direct-attached storage. A SAN typically is a dedicated network of storage devices not accessible through the local area network (LAN).

In computer science, memory virtualization decouples volatile random access memory (RAM) resources from individual systems in the data centre, and then aggregates those resources into a virtualized memory pool available to any computer in the cluster. The memory pool is accessed by the operating system or applications running on top of the operating system. The distributed memory pool can then be utilized as a high-speed cache, a messaging layer, or a large, shared memory resource for a CPU or a GPU application.

Xsigo Systems was an information technology hardware company based in San Jose, California, US, specializing in data center network and I/O virtualization software and hardware for enterprises. The company was acquired by Oracle in 2012.

Adaptable Modular Storage 2000 is the brand name of Hitachi Data Systems mid-range storage platforms.

A converged network adapter (CNA), also called a converged network interface controller (C-NIC), is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). In other words, it "converges" access to, respectively, a storage area network and a general-purpose computer network.

HP Virtual Connect is a virtualization technology created by Hewlett-Packard (HP) that decouples fixed blade server adapter network addresses from the associated external networks, so that changes in the blade server infrastructure and the LAN and SAN environments do not require coordination among server, LAN, and SAN teams for every task. It brings virtualization to the blade server edge and extends virtual machine technology, which moves workloads across virtual machines on a single server. Moving virtual machines from one physical machine to another, or between data center locations, is normally a challenge because changes to the LAN and SAN environments require manual intervention by network and storage administrators. By pooling and sharing multiple network connections across multiple servers and virtual machines, Virtual Connect allows the physical setup and movement of virtual machine workloads between servers transparently to the LAN and SAN infrastructure.

In an enterprise server, a caching SAN adapter is a host bus adapter (HBA) for storage area network (SAN) connectivity that accelerates performance by transparently storing duplicate data, so that future requests for that data can be serviced faster than retrieving it from the source. A caching SAN adapter is used to accelerate the performance of applications across multiple clustered or virtualized servers, and uses DRAM, NAND flash, or other memory technologies as the cache. The key requirement is that the cache memory be faster than the media storing the original copy of the data, so that performance acceleration is actually achieved.
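
The behavior described above is, in essence, a read cache placed in front of slower SAN storage; the sketch below is a generic read-through LRU cache, not the design of any particular adapter (the backing read function is a placeholder).

    # Sketch: read-through LRU cache in front of slower SAN reads (generic illustration).
    from collections import OrderedDict

    class ReadCache:
        def __init__(self, capacity_blocks: int, backend_read):
            self.capacity = capacity_blocks
            self.backend_read = backend_read   # placeholder for the slow SAN read path
            self.cache = OrderedDict()         # block number -> data, in LRU order

        def read(self, block: int) -> bytes:
            if block in self.cache:            # cache hit: served from fast local memory
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.backend_read(block)    # cache miss: fetch from the SAN
            self.cache[block] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False) # evict the least recently used block
            return data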

Disaggregated storage is a type of data storage within computer data centers. It allows compute resources within a computer server to be separated from storage resources without modifying any physical connections.

References

  1. Scott Lowe (2008-04-21). "Virtualization strategies > Benefiting from I/O virtualization". TechTarget. Retrieved 2009-11-04.
  2. Scott Hanson. "Strategies to Optimize Virtual Machine Connectivity" (PDF). Dell. Retrieved 2009-11-04.
  3. Keith Ward (March 31, 2008). "New Things to Virtualize". Virtualization Review. Retrieved 2009-11-04.
  4. Charles Babcock (May 16, 2008). "Virtualization's Promise And Problems". InformationWeek. Retrieved 2009-11-04.
  5. Travis, Paul (June 8, 2009). "Tech Road Map: Keep An Eye On Virtual I/O". Network Computing. Retrieved 2009-11-04.
  6. Marshall, David (July 20, 2009). "PrimaCloud offers new cloud computing service built on Xsigo's Virtual I/O". InfoWorld. Retrieved 2009-11-04.
  7. Damouny; Neugebauer, Rolf (June 1, 2009). "I/O Virtualization (IOV) & its uses in the network infrastructure: Part 1". Embedded.com. Archived from the original on January 22, 2013. Retrieved 2009-11-04.
  8. Lippis, Nick (May 2009). "Unified Fabric Options Are Finally Here". Lippis Report 126. Retrieved 2009-11-04.
  9. Chernicoff, David. "I/O Virtualization for Blade Servers". Windows IT Pro. Retrieved 2009-11-04.