Virtualization

In computing, virtualization (v12n) is a series of technologies that allows physical computing resources to be divided into virtual machines, operating systems, processes or containers. [1]

[Image: screenshot of one virtualization environment (QEMU 6.2)]

Virtualization began in the 1960s with IBM CP/CMS. [1] The control program CP provided each user with a simulated stand-alone System/360 computer.

In hardware virtualization, the host machine is the physical machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine monitor. [2] Hardware virtualization is not the same as hardware emulation. Hardware-assisted virtualization facilitates building a virtual machine monitor and allows guest OSes to be run in isolation.

Desktop virtualization is the concept of separating the logical desktop from the physical machine.

Operating-system-level virtualization, also known as containerization, refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances.

The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization.

History

A form of virtualization was first demonstrated with IBM's CP-40 research system in 1967, then distributed via open source in CP/CMS in 1967–1972, and re-implemented in IBM's VM family from 1972 to the present. Each CP/CMS user was provided a simulated, stand-alone computer. Each such virtual machine had the complete capabilities of the underlying machine, and (for its user) the virtual machine was indistinguishable from a private system. This simulation was comprehensive, and was based on the Principles of Operation manual for the hardware. It thus included such elements as an instruction set, main memory, interrupts, exceptions, and device access. The result was a single machine that could be multiplexed among many users.

Hardware-assisted virtualization first appeared on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. The virtual memory hardware that IBM added to the System/370 series in 1972 is not the same mechanism as the rings of Intel VT-x, which provide a privilege level above supervisor and program (user) modes so that a hypervisor can properly control virtual machines that require full access to those modes.

In the late 1970s, virtualization of mainframes lost some attention as demand increased for high-definition computer graphics (e.g. CAD) and as the emerging minicomputers fostered resource allocation through distributed computing, a trend that encompassed the subsequent commoditization of microcomputers.

The increase in compute capacity per x86 server (and in particular the substantial increase in modern networks' bandwidth) rekindled interest in data-center-based computing, which is built on virtualization techniques. The primary driver was the potential for server consolidation: virtualization allowed a single server to cost-efficiently consolidate the compute power of multiple underutilized dedicated servers. The most visible hallmark of this return to the roots of computing is cloud computing, a synonym for data-center-based computing (or mainframe-like computing) through high-bandwidth networks. It is closely connected to virtualization.

The initial implementation of the x86 architecture did not meet the Popek and Goldberg virtualization requirements needed to achieve "classical virtualization".

This made it difficult to implement a virtual machine monitor for this type of processor. Specific limitations included the inability to trap on some sensitive instructions, which do not fault when executed outside privileged mode. [3] Therefore, to compensate for these architectural limitations, designers accomplished virtualization of the x86 architecture through two methods: full virtualization or paravirtualization. [4] Both create the illusion of physical hardware to achieve the goal of operating system independence from the hardware, but present some trade-offs in performance and complexity.

Full virtualization was not fully available on the x86 platform prior to 2005. Many platform hypervisors for the x86 platform came very close and claimed full virtualization (such as Adeos, Mac-on-Linux, Parallels Desktop for Mac, Parallels Workstation, VMware Workstation, VMware Server (formerly GSX Server), VirtualBox, Win4BSD, and Win4Lin Pro).

In 2005 and 2006, Intel and AMD (working independently) created new processor extensions to the x86 architecture called Intel VT-x and AMD-V, respectively. On the Itanium architecture, hardware-assisted virtualization is known as VT-i. The first generation of x86 processors to support these extensions was released in late 2005 and early 2006.

Hardware virtualization

Hardware virtualization (or platform virtualization) pools computing resources across one or more virtual machines. A virtual machine implements functionality of a (physical) computer with an operating system. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine monitor. [2]

Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Arch Linux may host a virtual machine that looks like a computer with the Microsoft Windows operating system; Windows-based software can be run on the virtual machine. [5] [6]
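
As a concrete illustration (ours, not the source's), the sketch below uses the libvirt Python bindings to connect to a local QEMU/KVM hypervisor and list its guest machines; it assumes a Linux host with libvirt and the libvirt-python package installed.

    # A minimal sketch, assuming a Linux host running a QEMU/KVM
    # hypervisor with libvirt and the libvirt-python bindings installed;
    # connecting to qemu:///system typically requires privileges.
    import libvirt

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():
            # info() returns [state, maxMem(KiB), usedMem(KiB), vCPUs, cpuTime]
            state, maxmem, mem, ncpu, _cputime = dom.info()
            print(f"{dom.name()}: {ncpu} vCPU(s), {mem // 1024}/{maxmem // 1024} MiB")
    finally:
        conn.close()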

Different types of hardware virtualization include:

Full virtualization

[Image: logical diagram of full virtualization]

Full virtualization employs techniques that pool physical computer resources into one or more instances, each running a virtual environment in which any software or operating system capable of executing on the raw hardware can run. Two full virtualization techniques are common: (a) binary translation and (b) hardware-assisted full virtualization. [1] Binary translation automatically modifies the software on the fly, replacing instructions that "pierce the virtual machine" with a different, virtual-machine-safe sequence of instructions. [7] Hardware-assisted virtualization allows guest operating systems to be run in isolation with virtually no modification to the (guest) operating system.

Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines – including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine, and that is intended to run in a virtual machine.

This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.

Binary translation

In binary translation, instructions are translated on the fly to match the emulated hardware architecture, [1] so that, in effect, one piece of hardware imitates another. In hardware-assisted virtualization, by contrast, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. A hypervisor is nevertheless not the same as an emulator; both are computer programs that imitate hardware, but the two terms are used in different domains. [8]
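
The toy Python sketch below makes the idea concrete. It is purely illustrative: real binary translators such as VMware's operate on raw machine code rather than mnemonic strings, and the "vmcall" substitutes here are hypothetical. Sensitive instructions in a block of guest code are rewritten into safe sequences, while ordinary instructions pass through unchanged.

    # A toy model of binary translation (illustrative only). Instructions
    # that would "pierce the virtual machine" are replaced with
    # hypothetical virtual-machine-safe substitute sequences.
    SENSITIVE = {
        "cli": ["vmcall disable_interrupts"],  # mask interrupts via the VMM
        "sti": ["vmcall enable_interrupts"],
        "popf": ["vmcall write_flags"],        # POPF silently drops IF in user mode
    }

    def translate_block(block):
        """Rewrite a basic block; safe instructions pass through unchanged."""
        out = []
        for insn in block:
            out.extend(SENSITIVE.get(insn, [insn]))
        return out

    guest_block = ["mov eax, 1", "cli", "add eax, 2", "popf"]
    print(translate_block(guest_block))
    # ['mov eax, 1', 'vmcall disable_interrupts', 'add eax, 2', 'vmcall write_flags']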

Hardware assisted

Hardware-assisted virtualization (or accelerated virtualization; Xen calls it hardware virtual machine (HVM), and Virtual Iron calls it native virtualization) is a way of improving the overall efficiency of hardware virtualization using help from the host processor. Here, full virtualization is used to emulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system (using the same instruction set as the host machine) executes in complete isolation.

Hardware-assisted virtualization was first introduced on the IBM 308X processors in 1980, with the Start Interpretive Execution (SIE) instruction. [9] It was added to x86 processors (Intel VT-x, AMD-V or VIA VT) in 2005, 2006 and 2010 [10] respectively.
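
On a Linux host, whether the CPU advertises these x86 extensions can be read from /proc/cpuinfo, which lists the flag vmx for Intel VT-x and svm for AMD-V. The short sketch below (an illustrative addition, assuming a Linux system) performs that check.

    # A minimal sketch, assuming a Linux host exposing /proc/cpuinfo.
    def detect_virt_extension(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags:
                        return "Intel VT-x"
                    if "svm" in flags:
                        return "AMD-V"
        return None

    print(detect_virt_extension() or "no hardware virtualization extensions found")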

IBM offers hardware virtualization for its IBM Power Systems hardware for AIX, Linux and IBM i, and for its IBM Z mainframes. IBM refers to its specific form of hardware virtualization as "logical partition", or more commonly as LPAR.

Hardware-assisted virtualization reduces the maintenance overhead of paravirtualization because it reduces (ideally, eliminates) the changes needed in the guest operating system. It also makes it considerably easier to obtain good performance.

Paravirtualization

Paravirtualization is a virtualization technique that presents a software interface to the virtual machines which is similar, yet not identical, to the underlying hardware–software interface. Paravirtualization improves performance and efficiency, compared to full virtualization, by having the guest operating system communicate with the hypervisor. By allowing the guest operating system to indicate its intent to the hypervisor, each can cooperate to obtain better performance when running in a virtual machine.

The intent of the modified interface is to reduce the portion of the guest's execution time spent performing operations that are substantially more difficult to run in a virtual environment than in a non-virtualized environment. Paravirtualization provides specially defined 'hooks' that allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). A successful paravirtualized platform may allow the virtual machine monitor (VMM) to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain), and/or reduce the overall performance degradation of machine execution inside the virtual guest.
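
The toy Python model below (our sketch; the class and hypercall names are hypothetical and match no real hypervisor's ABI, Xen's actual interface being a table of C hypercalls) shows the shape of such hooks: rather than executing privileged page-table writes that would each trap, the paravirtualized guest announces its intent to the hypervisor through one explicit call.

    # A toy model of paravirtualization. The guest announces intent via
    # an explicit hypercall instead of executing privileged instructions
    # that the hypervisor would have to trap and emulate one by one.
    class Hypervisor:
        def __init__(self):
            self.page_tables = {}

        def hypercall_update_page_table(self, guest_id, vaddr, paddr):
            # Validate and apply the mapping on the guest's behalf; one
            # explicit call replaces a series of trapping MMU writes.
            self.page_tables.setdefault(guest_id, {})[vaddr] = paddr

    class ParavirtGuest:
        """A guest kernel ported to the para-API instead of raw hardware."""
        def __init__(self, guest_id, hypervisor):
            self.guest_id = guest_id
            self.hv = hypervisor

        def map_page(self, vaddr, paddr):
            self.hv.hypercall_update_page_table(self.guest_id, vaddr, paddr)

    hv = Hypervisor()
    guest = ParavirtGuest("vm0", hv)
    guest.map_page(0x1000, 0x8000)
    print(hv.page_tables)  # {'vm0': {4096: 32768}}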

Paravirtualization requires the guest operating system to be explicitly ported for the para-API; a conventional OS distribution that is not paravirtualization-aware cannot be run on top of a paravirtualizing VMM. However, even in cases where the operating system cannot be modified, components may be available that enable many of the significant performance advantages of paravirtualization. [12] For example, the Xen Windows GPLPV project provides a kit of paravirtualization-aware device drivers intended to be installed into a Microsoft Windows virtual guest running on the Xen hypervisor. [11]

History

The term "paravirtualization" was first used in the research literature in association with the Denali Virtual Machine Manager. [13] The term is also used to describe the Xen, L4, TRANGO, VMware, Wind River and XtratuM hypervisors. All these projects use or can use paravirtualization techniques to support high performance virtual machines on x86 hardware by implementing a virtual machine that does not implement the hard-to-virtualize parts of the actual x86 instruction set. [14]

In 2005, VMware proposed a paravirtualization interface, the Virtual Machine Interface (VMI), as a communication mechanism between the guest operating system and the hypervisor. This interface enabled transparent paravirtualization in which a single binary version of the operating system can run either on native hardware or on a hypervisor in paravirtualized mode.

The first appearance of paravirtualization support in Linux occurred with the merge of the ppc64 port in 2002, [15] which supported running Linux as a paravirtualized guest on IBM pSeries (RS/6000) and iSeries (AS/400) hardware.

At the USENIX conference in 2006 in Boston, Massachusetts, a number of Linux development vendors (including IBM, VMware, Xen, and Red Hat) collaborated on an alternative form of paravirtualization, initially developed by the Xen group, called "paravirt-ops". [16] The paravirt-ops code (often shortened to pv-ops) was included in the mainline Linux kernel as of the 2.6.23 version, and provides a hypervisor-agnostic interface between the hypervisor and guest kernels. Distribution support for pv-ops guest kernels appeared starting with Ubuntu 7.04 and RedHat 9. Xen hypervisors based on any 2.6.24 or later kernel support pv-ops guests, as does VMware's Workstation product beginning with version 6. [17]

Hybrid virtualization

Hybrid virtualization combines full virtualization techniques with paravirtualized drivers to overcome limitations with hardware-assisted full virtualization. [18]

Hardware-assisted full virtualization uses an unmodified guest operating system, which entails many VM traps and thus high CPU overhead, limiting scalability and the efficiency of server consolidation. [19] The hybrid virtualization approach overcomes this problem by servicing the most trap-intensive operations through paravirtualized drivers instead.

Desktop virtualization

Desktop virtualization separates the logical desktop from the physical machine.

One form of desktop virtualization, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualization. Rather than interacting with a host computer directly via a keyboard, mouse, and monitor, the user interacts with the host computer using another desktop computer or a mobile device by means of a network connection, such as a LAN, Wireless LAN or even the Internet. In addition, the host computer in this scenario becomes a server computer capable of hosting multiple virtual machines at the same time for multiple users. [20]

As organizations continue to virtualize and converge their data center environments, client architectures continue to evolve to take advantage of the predictability, continuity, and quality of service delivered by converged infrastructure. For example, companies such as HP and IBM provide a hybrid VDI model with a range of virtualization software and delivery models to improve upon the limitations of distributed client computing. [21] Selected client environments move workloads from PCs and other devices to data center servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data center. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralized, users moving between work locations can still access the same client environment with their applications and data. [21] For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of the user and business. [22]

Another form, session virtualization, allows multiple users to connect and log into a shared but powerful computer over the network and use it simultaneously. Each user is given a desktop and a personal folder in which to store files. [20] With multiseat configuration, session virtualization can be accomplished using a single PC with multiple monitors, keyboards, and mice connected.

Thin clients, which are seen in desktop virtualization, are simple and/or cheap computers that are primarily designed to connect to the network. They may lack significant hard disk storage space, RAM or even processing power, but many organizations are beginning to look at the cost benefits of eliminating "thick client" desktops that are packed with software (and require software licensing fees) and making more strategic investments. [23]

Desktop virtualization simplifies software versioning and patch management, where the new image is simply updated on the server, and the desktop gets the updated version when it reboots. It also enables centralized control over what applications the user is allowed to have access to on the workstation.

Moving virtualized desktops into the cloud creates hosted virtual desktops (HVDs), in which the desktop images are centrally managed and maintained by a specialist hosting firm. Benefits include scalability and the reduction of capital expenditure, which is replaced by a monthly operational cost. [24]

Containerization

Operating-system-level virtualization, also known as containerization, refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called containers, [25] partitions, virtual environments (VEs) or jails (FreeBSD jail or chroot jail), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside a container can only see the container's contents and devices assigned to the container.

This provides many of the benefits of virtual machines, such as standardization and scalability, while using fewer resources because the kernel is shared between containers. [26]
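
On Linux, the kernel facility underlying this isolation is the namespace. The sketch below (an illustration added here; it requires Linux and root or CAP_SYS_ADMIN) uses the unshare(2) system call via ctypes to move the running process into new mount and UTS namespaces, after which its hostname changes are invisible to the rest of the system. Production container runtimes combine several such namespaces with control groups and layered filesystems.

    # A minimal sketch of the kernel facility behind Linux containers:
    # unshare(2) moves this process into new namespaces. Requires Linux
    # and root (or CAP_SYS_ADMIN). Constants are from <linux/sched.h>.
    import ctypes
    import os

    CLONE_NEWNS = 0x00020000    # new mount namespace
    CLONE_NEWUTS = 0x04000000   # new hostname (UTS) namespace

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.unshare(CLONE_NEWNS | CLONE_NEWUTS) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    # Inside the new UTS namespace, hostname changes are invisible to
    # the rest of the system: the process sees its own isolated instance.
    name = b"container0"
    if libc.sethostname(name, ctypes.c_size_t(len(name))) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    print(os.uname().nodename)  # prints "container0" only in this namespace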

Containerization started gaining prominence in 2014, with the introduction of Docker. [27] [28]

Miscellaneous types

Software virtualization
Memory virtualization
Storage virtualization
Data virtualization
Network virtualization

Benefits and disadvantages

Virtualization, in particular full virtualization, has proven beneficial in a number of areas.

A common goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. With virtualization, several operating systems can be run in parallel on a single central processing unit (CPU). This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS. Using virtualization, an enterprise can better manage updates and rapid changes to the operating system and applications without disrupting the user.

"Ultimately, virtualization dramatically improves the efficiency and availability of resources and applications in an organization. Instead of relying on the old model of 'one server, one application' that leads to underutilized resources, virtual resources are dynamically applied to meet business needs without any excess fat." [30]

Virtual machines running proprietary operating systems require licensing, regardless of the host machine's operating system. For example, installing Microsoft Windows into a VM guest requires its licensing requirements to be satisfied. [31] [32] [33]

References

  1. Rodríguez-Haro, Fernando; Freitag, Felix; Navarro, Leandro; Hernánchez-Sánchez, Efraín; Farías-Mendoza, Nicandro; Guerrero-Ibáñez, Juan Antonio; González-Potes, Apolinar (2012-01-01). "A summary of virtualization techniques". Procedia Technology. The 2012 Iberoamerican Conference on Electronics Engineering and Computer Science. 3: 267–272. doi:10.1016/j.protcy.2012.03.029. ISSN 2212-0173.
  2. Turban, E.; King, D.; Lee, J.; Viehland, D. (2008). "Chapter 19". Electronic Commerce: A Managerial Perspective (PDF) (5th ed.). Prentice-Hall. p. 27.
  3. Adams, Keith. "A Comparison of Software and Hardware Techniques for x86 Virtualization" (PDF). Retrieved 20 January 2013.
  4. Chris Barclay, New approach to virtualizing x86s, Network World, 20 October 2006
  5. Turban, E; King, D; Lee, J; Viehland, D (2008). "Chapter 19: Building E-Commerce Applications and Infrastructure". Electronic Commerce A Managerial Perspective. Prentice-Hall. p. 27.
  6. "Virtualization in education" (PDF). IBM. October 2007. Retrieved 6 July 2010. A virtual computer is a logical representation of a computer in software. By decoupling the physical hardware from the operating system, virtualization provides more operational flexibility and increases the utilization rate of the underlying physical hardware.
  7. VMware (11 Sep 2007). "Understanding Full Virtualization, Paravirtualization, and Hardware Assist" (PDF). VMware. Archived (PDF) from the original on 2008-05-11. Retrieved 2021-05-20.
  8. Creasy, R.J. (1981). "The Origin of the VM/370 Time-sharing System" (PDF). IBM. Retrieved 26 February 2013.
  9. IBM System/370 Extended Architecture Interpretive Execution (PDF) (First ed.). IBM. January 1984. SA22-7095-0. Retrieved October 27, 2022.
  10. "VIA Introduces New VIA Nano 3000 Series Processors". www.via.com.tw (Press release). Archived from the original on 22 January 2013. Retrieved 10 October 2022.
  11. "Installing signed GPLPV drivers in Windows Xen instances". Univention Wiki. Retrieved 2013-04-10. The GPLPV driver is a driver for Microsoft Windows, which enables Windows DomU systems virtualised in Xen to access the network and block drivers of the Xen Dom0. This provides a significant performance and reliability gain over the standard devices emulated by Xen/Qemu/Kvm.
  12. Armstrong, D (2011). "Performance issues in clouds: An evaluation of virtual image propagation and I/O paravirtualization". The Computer Journal. 54 (6): 836–849. doi:10.1093/comjnl/bxr011.
  13. A. Whitaker; M. Shaw; S. D. Gribble (2002). "Denali: Lightweight Virtual Machines for Distributed and Networked Applications". University of Washington Technical Report. Archived from the original on 2008-01-14. Retrieved 2006-12-09.
  14. Strobl, Marius (2013). Virtualization for Reliable Embedded Systems. Munich: GRIN Publishing GmbH. pp. 54, 63. ISBN 978-3-656-49071-5.
  15. Anton Blanchard. "Add ppc64 support". kernel.org. Retrieved 2024-04-24.
  16. "XenParavirtOps – Xen". Wiki.xenproject.org. Retrieved 2017-03-03.
  17. "VMware Introduces Support for Cross-Platform Paravirtualization – VMware". VMware. 16 May 2008. Archived from the original on 13 April 2011.
  18. Jun Nakajima and Asit K. Mallick, "Hybrid-Virtualization—Enhanced Virtualization for Linux" Archived 2009-01-07 at the Wayback Machine, in Proceedings of the Linux Symposium, Ottawa, June 2007.
  19. See "Hybrid Virtualization: The Next Generation of XenLinux". Archived March 20, 2009, at the Wayback Machine
  20. "Strategies for Embracing Consumerization" (PDF). Microsoft Corporation. April 2011. p. 9. Archived from the original (PDF) on 15 August 2011. Retrieved 22 July 2011.
  21. Chernicoff, David, "HP VDI Moves to Center Stage", ZDNet, August 19, 2011.
  22. Baburajan, Rajani, "The Rising Cloud Storage Market Opportunity Strengthens Vendors", infoTECH, August 24, 2011. It.tmcnet.com. 2011-08-24.
  23. "Desktop Virtualization Tries to Find Its Place in the Enterprise". Dell.com. Retrieved 2012-06-19.
  24. "HVD: the cloud's silver lining" (PDF). Intrinsic Technology. Archived from the original (PDF) on 2 October 2012. Retrieved 30 August 2012.
  25. Hogg, Scott (2014-05-26). "Software Containers: Used More Frequently than Most Realize". Network World. Network World, Inc. Retrieved 2015-07-09.
  26. Gandhi, Rajeev (2019-02-06). "The Benefits of Containerization and What It Means for You". IBM Blog. Retrieved 2024-03-15.
  27. Vaughan-Nichols, Steven J. (21 March 2018). "What is Docker and why is it so darn popular?". ZDNet. CBS Interactive.
  28. Butler, Brandon (10 June 2014). "Docker 101: What it is and why it's important". Network World. IDG.
  29. "Enterprise Systems Group White paper, Page 5" (PDF). Enterprise Strategy Group White Paper written and published on August 20, 2011 by Mark Peters. Archived from the original (PDF) on March 30, 2012. Retrieved July 18, 2013.
  30. "Virtualization in education" (PDF). IBM. October 2007. Retrieved 6 July 2010.
  31. Foley, Mary Jo (5 July 2012). "Microsoft goes public with Windows Server 2012 versions, licensing". ZDNet. CBS Interactive. Finn explained that Standard covers 2 CPUs in a host, and goes from one VOSE (virtual operating system environment - 1 free Std install in a VM on that host) to two, and 'now has all the features and scalability of Datacenter.' He noted there will be a small price increase, but said he thought that wouldn't matter, as it 'should be virtualized anyway and the VOSE rights doubling will compensate.' Windows Server Datacenter was a minimum of two 1-CPU licenses with unlimited VOSEs. 'Now it is a simpler SKU that covers two CPUs in a host with unlimited VOSEs,' Finn said.
  32. "Windows Server 2012 Licensing and Pricing FAQ" (PDF). Microsoft. Retrieved 5 July 2012.
  33. "Licensing Windows desktop operating system for use with virtual machines" (PDF). microsoft.com. Microsoft . Retrieved 22 December 2018.