GPU virtualization refers to technologies that allow the use of a GPU to accelerate graphics or GPGPU applications running on a virtual machine. GPU virtualization is used in various applications such as desktop virtualization, [1] cloud gaming [2] and computational science (e.g. hydrodynamics simulations). [3]
GPU virtualization implementations generally involve one or more of the following techniques: device emulation, API remoting, fixed pass-through and mediated pass-through. Each technique presents different trade-offs regarding virtual machine to GPU consolidation ratio, graphics acceleration, rendering fidelity and feature support, portability to different hardware, isolation between virtual machines, and support for suspending/resuming and live migration. [1] [4] [5] [6]
In API remoting or API forwarding, calls to graphical APIs from guest applications are forwarded to the host by remote procedure call, and the host then executes graphical commands from multiple guests using the host's GPU as a single user. [1] It may be considered a form of paravirtualization when combined with device emulation. [7] This technique allows sharing GPU resources between multiple guests and the host when the GPU does not support hardware-assisted virtualization. It is conceptually simple to implement, but it has several disadvantages. [1]
Hypervisors usually use shared memory between guest and host to maximize performance and minimize latency. Using a network interface instead (a common approach in distributed rendering), third-party software can add support for specific APIs (e.g. rCUDA [8] for CUDA) or add support for typical APIs (e.g. VMGL [9] for OpenGL) when it is not supported by the hypervisor's software package, although network delay and serialization overhead may outweigh the benefits.
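The mechanism can be illustrated with a minimal sketch. The wire format, function names and the use of a local TCP socket (standing in for a shared-memory or virtio transport) are illustrative assumptions, not the interface of any real hypervisor: a guest-side shim marshals each API call with its arguments and forwards it, and a host-side stub unmarshals the call and would execute it against the host GPU.

```python
# Minimal API-remoting sketch (hypothetical protocol, not any hypervisor's real interface).
import json
import socket
import struct
import threading

HOST, PORT = "127.0.0.1", 7001

def recv_exact(conn, n):
    """Read exactly n bytes from the connection, or b"" if it was closed."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def host_stub(server):
    """Host side: unmarshal forwarded calls and execute them (stubbed with a print)."""
    conn, _ = server.accept()
    with conn:
        while True:
            header = recv_exact(conn, 4)
            if not header:
                break
            call = json.loads(recv_exact(conn, struct.unpack("!I", header)[0]))
            # A real implementation would dispatch to the host's own OpenGL/Direct3D/Vulkan driver.
            print("host executes:", call["func"], call["args"])

def send_call(sock, func, *args):
    """Guest side: marshal one graphics API call and forward it to the host."""
    payload = json.dumps({"func": func, "args": args}).encode()
    sock.sendall(struct.pack("!I", len(payload)) + payload)

server = socket.socket()
server.bind((HOST, PORT))
server.listen(1)
worker = threading.Thread(target=host_stub, args=(server,))
worker.start()

guest = socket.create_connection((HOST, PORT))
send_call(guest, "glClearColor", 0.0, 0.0, 0.0, 1.0)  # illustrative forwarded calls
send_call(guest, "glClear", 0x4000)                    # 0x4000 = GL_COLOR_BUFFER_BIT
guest.close()
worker.join()
```

The table below lists graphics API support for several API remoting implementations.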
| Technology | Direct3D | OpenGL | Vulkan | OpenCL | DXVA |
|---|---|---|---|---|---|
| VMware Virtual Shared Graphics Acceleration (vSGA) | 11 | 4.3 [12] | Yes | No | No |
| Parallels Desktop for Mac 3D acceleration | 11 [upper-alpha 1] | 3.3 | No | No | No |
| Hyper-V RemoteFX vGPU | 12 | 4.4 | No | 1.1 | No |
| VirtualBox Guest Additions 3D driver | 8/9 | 2.1 | No | No | No |
| Thincast Workstation - Virtual 3D | 12.1 | No | Yes | No | No |
| QEMU/KVM with Virgil 3D | No | 4.3 | Planned | No | No |
In fixed pass-through or GPU pass-through (a special case of PCI pass-through), a GPU is accessed directly by a single virtual machine exclusively and permanently. This technique achieves 96–100% of native performance [3] and high fidelity, [1] but the acceleration provided by the GPU cannot be shared between multiple virtual machines. As such, it has the lowest consolidation ratio and the highest cost, as each graphics-accelerated virtual machine requires an additional physical GPU. [1]
The following software technologies implement fixed pass-through:
VirtualBox removed support for PCI pass-through in version 6.1.0. [34]
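As an illustration, on a Linux host using KVM and VFIO, fixed pass-through is typically prepared by unbinding the GPU from its host driver, binding it to vfio-pci, and handing it to the guest. The sketch below uses placeholder PCI and vendor/device IDs and a placeholder disk image, and assumes root privileges and an IOMMU enabled on the host (e.g. intel_iommu=on or amd_iommu=on).

```python
# Sketch of fixed (VFIO) GPU pass-through setup on a Linux/KVM host.
# GPU_ADDR and GPU_ID are placeholders for an actual GPU's PCI address and
# vendor/device ID; the disk image path is also a placeholder.
import subprocess
from pathlib import Path

GPU_ADDR = "0000:01:00.0"      # placeholder PCI address of the GPU
GPU_ID = "10de 1b80"           # placeholder "vendor device" ID pair

def bind_to_vfio(addr: str, ids: str) -> None:
    """Detach the GPU from its host driver and bind it to vfio-pci."""
    dev = Path("/sys/bus/pci/devices") / addr
    unbind = dev / "driver" / "unbind"
    if unbind.exists():
        unbind.write_text(addr)                                   # release the device from the host driver
    Path("/sys/bus/pci/drivers/vfio-pci/new_id").write_text(ids)  # let vfio-pci claim it

def launch_vm() -> None:
    """Start a QEMU/KVM guest that owns the GPU exclusively."""
    subprocess.run([
        "qemu-system-x86_64",
        "-enable-kvm",
        "-machine", "q35",
        "-cpu", "host",
        "-m", "8G",
        "-device", f"vfio-pci,host={GPU_ADDR[5:]}",   # hand the physical GPU to the guest
        "-drive", "file=guest.qcow2,if=virtio",       # placeholder guest disk image
    ], check=True)

if __name__ == "__main__":
    bind_to_vfio(GPU_ADDR, GPU_ID)
    launch_vm()
```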
For certain GPU models, Nvidia and AMD video card drivers attempt to detect whether the GPU is being accessed by a virtual machine and, if so, disable some or all GPU features. [35] Nvidia later changed its virtualization policy for consumer GPUs by disabling this check in GeForce Game Ready driver 465.xx and later. [36]
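With QEMU/KVM, this check was commonly worked around by hiding the hypervisor from the guest. The sketch below uses standard QEMU CPU options; the vendor-id string is an arbitrary placeholder, and the flags would be appended to a launch command such as the one above.

```python
# QEMU CPU flags commonly combined with pass-through so that the guest driver
# does not detect the hypervisor (a sketch; the vendor-id value is an arbitrary
# placeholder of up to 12 characters).
HIDE_HYPERVISOR_ARGS = [
    "-cpu", "host,kvm=off,hv_vendor_id=whatever",
    # kvm=off masks the KVM CPUID signature; hv_vendor_id overrides the Hyper-V vendor ID
]
```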
For Nvidia, desktop and laptop consumer GPUs of various architectures can be passed through in different ways. For desktop graphics cards, pass-through can be done with KVM using either a legacy BIOS or a UEFI firmware configuration, provided by SeaBIOS and OVMF respectively.
For desktops, most graphics cards can be passed through, although for cards of the Pascal architecture or older, the card's VBIOS must also be passed to the virtual machine if the GPU is used to boot the host. [37]
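One common approach, sketched below with placeholder paths, is to dump the card's VBIOS from sysfs and supply it to the guest through the vfio-pci romfile property.

```python
# Sketch of supplying a VBIOS image to the guest. The PCI address and output
# path are placeholders; the dump may need to be taken while the card is not
# the active boot GPU, or the image obtained by other means.
from pathlib import Path

def dump_vbios(addr: str = "0000:01:00.0", out: str = "gpu_vbios.rom") -> None:
    rom = Path("/sys/bus/pci/devices") / addr / "rom"
    rom.write_text("1")                       # enable ROM read access
    Path(out).write_bytes(rom.read_bytes())   # copy the VBIOS image
    rom.write_text("0")                       # disable it again

# The image is then passed to the guest, for example with:
#   -device vfio-pci,host=01:00.0,romfile=gpu_vbios.rom
```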
For laptops, the Nvidia driver checks for the presence of a battery via ACPI and returns an error if none is found. To avoid this, a custom ACPI table, commonly distributed as Base64-encoded text, is required to spoof a battery and bypass the check. [37]
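A minimal sketch of that workaround follows, assuming the Base64 text of a battery SSDT has been obtained separately (the string below is intentionally left as an empty placeholder).

```python
# Decode a Base64-encoded SSDT that declares a fake ACPI battery device and
# write it to a file that QEMU can load as an additional ACPI table.
import base64
import sys
from pathlib import Path

SSDT_B64 = ""   # placeholder: paste the Base64-encoded battery SSDT here

if not SSDT_B64:
    sys.exit("supply the Base64-encoded battery SSDT first")

Path("fake_battery_ssdt.aml").write_bytes(base64.b64decode(SSDT_B64))
# The decoded table is then loaded into the guest's ACPI tables with:
#   -acpitable file=fake_battery_ssdt.aml
```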
For Pascal and older laptop graphics cards, pass-through depends heavily on how the card is configured. On laptops without Nvidia Optimus, such as those with MXM modules, pass-through can be achieved through traditional methods. On laptops with Nvidia Optimus where the GPU renders through the framebuffer of the CPU's integrated graphics rather than its own, pass-through is more complicated: it requires a remote rendering display or service, the use of Intel GVT-g, and integrating the VBIOS into the boot configuration, because the VBIOS is stored in the laptop's system BIOS rather than on the GPU itself. On laptops with Nvidia Optimus and a dedicated framebuffer, the configuration varies: if Optimus can be switched off, pass-through is possible through traditional means; if Optimus is the only configuration, the VBIOS is most likely stored in the system BIOS, requiring the same steps as rendering only through the integrated graphics framebuffer, although an external monitor can also be used. [38]
In mediated device pass-through or full GPU virtualization, the GPU hardware provides contexts with virtual memory ranges for each guest through an IOMMU, and the hypervisor sends graphical commands from guests directly to the GPU. This technique is a form of hardware-assisted virtualization and achieves near-native [lower-alpha 2] performance and high fidelity. If the hardware exposes contexts as full logical devices, then guests can use any API. Otherwise, APIs and drivers must manage the additional complexity of GPU contexts. As a disadvantage, there may be little isolation between virtual machines when accessing GPU resources. [1]
The following software and hardware technologies implement mediated pass-through:
While API remoting is generally available for current and older GPUs, mediated pass-through requires hardware support available only on specific devices.
| Vendor | Technology | Dedicated graphics cards (server) | Dedicated graphics cards (professional) | Dedicated graphics cards (consumer) | Integrated GPU families |
|---|---|---|---|---|---|
| Nvidia | vGPU [47] | GRID, Tesla | Quadro | No | — |
| AMD | MxGPU [43] [48] | FirePro Server, Radeon Instinct | Radeon Pro | No | No |
| Intel | GVT-g | — | — | — | Broadwell and newer |
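As an example of how a mediated device is provisioned, with Intel GVT-g (listed above) a virtual GPU instance is created by writing a UUID to the integrated GPU's mdev interface and is then assigned to a guest. The sketch below assumes the GVT-g kernel modules are loaded and uses a placeholder mdev type; the types actually available are listed under mdev_supported_types.

```python
# Sketch of creating a mediated (mdev) virtual GPU with Intel GVT-g.
# The PCI address and mdev type name are placeholders that depend on the
# hardware and driver; root privileges are required.
import uuid
from pathlib import Path

IGD_ADDR = "0000:00:02.0"          # typical address of the Intel integrated GPU
MDEV_TYPE = "i915-GVTg_V5_4"       # placeholder mdev type

vgpu_uuid = str(uuid.uuid4())
create = Path("/sys/bus/pci/devices") / IGD_ADDR / "mdev_supported_types" / MDEV_TYPE / "create"
create.write_text(vgpu_uuid)       # the i915 driver instantiates a virtual GPU

# The new virtual GPU can then be given to a guest with, for example:
#   -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>,display=on
print("created vGPU", vgpu_uuid)
```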
GPU architectures are very complex and change quickly, and their internal details are often kept secret. It is generally not feasible to fully emulate new generations of GPUs, only older and simpler generations. For example, PCem, a specialized emulator of the IBM PC architecture, can emulate an S3 ViRGE/DX graphics device, which supports Direct3D 3, and a 3dfx Voodoo2, which supports Glide, among others. [49]
When using a VGA or an SVGA virtual display adapter, [50] [51] [52] the guest may not have 3D graphics acceleration, with the emulated adapter providing only minimal functionality to allow access to the machine via a graphics terminal. The emulated device may expose only basic 2D graphics modes to guests. The virtual machine manager may also provide common API implementations using software rendering to enable 3D graphics applications on the guest, albeit at speeds that may be as low as 3% of hardware-accelerated native performance. [1] The following software technologies implement graphics APIs using software rendering:
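As an illustration that applies regardless of the specific implementation, and assuming a Linux guest with Mesa and the glxinfo utility installed, the renderer actually in use can be inspected, and Mesa's llvmpipe software rasterizer can be forced with the LIBGL_ALWAYS_SOFTWARE environment variable.

```python
# Sketch: check whether a guest is using a software renderer (assumes a Linux
# guest with Mesa and the "glxinfo" tool from mesa-utils installed).
import os
import subprocess

def renderer(force_software: bool = False) -> str:
    env = dict(os.environ)
    if force_software:
        env["LIBGL_ALWAYS_SOFTWARE"] = "1"   # ask Mesa to use its llvmpipe software rasterizer
    out = subprocess.run(["glxinfo", "-B"], env=env,
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "OpenGL renderer string" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

print("default renderer :", renderer())
print("forced software  :", renderer(force_software=True))   # typically reports llvmpipe
```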
OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
A graphics processing unit (GPU) is a specialized electronic circuit initially designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles. After their initial design, GPUs were found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. Other non-graphical uses include the training of neural networks and cryptocurrency mining.
Xen is a free and open-source type-1 hypervisor, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently. It was originally developed by the University of Cambridge Computer Laboratory and is now being developed by the Linux Foundation with support from Intel, Citrix, Arm Ltd, Huawei, AWS, Alibaba Cloud, AMD, Bitdefender and epam.
x86 virtualization is the use of hardware-assisted virtualization capabilities on an x86/x86-64 CPU.
Mesa, also called Mesa3D and The Mesa 3D Graphics Library, is an open source implementation of OpenGL, Vulkan, and other graphics API specifications. Mesa translates these specifications to vendor-specific graphics hardware drivers.
Platform virtualization software, specifically emulators and hypervisors, consists of software packages that emulate a whole physical computer, often providing multiple virtual machines on one physical platform.
A free and open-source graphics device driver is a software stack which controls computer-graphics hardware and supports graphics-rendering application programming interfaces (APIs) and is released under a free and open-source software license. Graphics device drivers are written for specific hardware to work within a specific operating system kernel and to support a range of APIs used by applications to access the graphics hardware. They may also control output to the display if the display driver is part of the graphics hardware. Most free and open-source graphics device drivers are developed by the Mesa project. The driver is made up of a compiler, a rendering API, and software which manages access to the graphics hardware.
In computing, CUDA is a proprietary parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism and GPU-device-specific operations. CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels. In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.
In computing, virtualization (v12n) is a set of technologies that allows dividing physical computing resources into a series of virtual machines, operating systems, processes or containers.
X-Video Bitstream Acceleration (XvBA), designed by AMD Graphics for its Radeon GPU and APU, is an arbitrary extension of the X video extension (Xv) for the X Window System on Linux operating systems. The XvBA API allows video programs to offload portions of the video decoding process to the GPU video hardware. The portions designed to be offloaded by XvBA onto the GPU are motion compensation (MC), inverse discrete cosine transform (IDCT), and variable-length decoding (VLD) for MPEG-2, MPEG-4 ASP, MPEG-4 AVC (H.264), WMV3, and VC-1 encoded video.
Video Decode and Presentation API for Unix (VDPAU) is a royalty-free application programming interface (API), as well as its implementation as a free and open-source library distributed under the MIT License. VDPAU is also supported by Nvidia.
Intel Graphics Technology (GT) is the collective name for a series of integrated graphics processors (IGPs) produced by Intel that are manufactured on the same package or die as the central processing unit (CPU). It was first introduced in 2010 as Intel HD Graphics and renamed in 2017 as Intel UHD Graphics.
Nvidia Optimus is a computer GPU switching technology created by Nvidia which, depending on the resource load generated by client software applications, will seamlessly switch between two graphics adapters within a computer system in order to provide either maximum performance or minimum power draw from the system's graphics rendering hardware.
XenClient is a discontinued desktop virtualization product developed by Citrix. It runs virtual desktops on endpoint devices. The product reached end-of-life in December 2016. XenClient runs both the operating system and applications locally on the end user's device, without the need for a connection to a data center, which is why it is used in environments with limited connectivity, disconnected operation on laptops, and other scenarios where local execution is desired while keeping management centralized.
In computer security, virtual machine (VM) escape is the process of a program breaking out of the virtual machine on which it is running and interacting with the host operating system. In theory, a virtual machine is a "completely isolated guest operating system installation within a normal host operating system". In 2008, a vulnerability in VMware discovered by Core Security Technologies made VM escape possible on VMware Workstation 6.0.2 and 5.5.4. A fully working exploit labeled Cloudburst was developed by Immunity Inc. for Immunity CANVAS. Cloudburst was presented in Black Hat USA 2009.
bhyve is a type-2 (hosted) hypervisor initially written for FreeBSD. It can also be used on a number of illumos based distributions including SmartOS, OpenIndiana, and OmniOS. A port of bhyve to macOS called xhyve is also available.
Metal is a low-level, low-overhead hardware-accelerated 3D graphics and compute shader API created by Apple, debuting in iOS 8. Metal combines functions similar to OpenGL and OpenCL in one API. It is intended to improve performance by offering low-level access to the GPU hardware for apps on iOS, iPadOS, macOS, and tvOS. It can be compared to low-level APIs on other platforms such as Vulkan and DirectX 12.
Vulkan is a low-level, low-overhead cross-platform API and open standard for 3D graphics and computing. It was intended to address the shortcomings of OpenGL, and allow developers more control over the GPU. It is designed to support a wide variety of GPUs, CPUs and operating systems, and it is also designed to work with modern multi-core CPUs.
ROCm is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains: general-purpose computing on graphics processing units (GPGPU), high performance computing (HPC), heterogeneous computing. It offers several programming models: HIP, OpenMP, and OpenCL.
Harvester is open-source, cloud-native hyper-converged infrastructure (HCI) software. It was announced in 2020 by SUSE.