Cross-platform virtualization

Cross-platform virtualization is a form of computer virtualization that allows software compiled for a specific instruction set and operating system to run unmodified on computers with different CPUs and/or operating systems, through a combination of dynamic binary translation and operating system call mapping.
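A minimal sketch of these two mechanisms in Python, assuming an invented toy guest instruction set and an invented guest system-call numbering; real translators operate on actual machine code and cache translated blocks far more aggressively:

    import sys

    # Toy guest program: (opcode, operand) pairs. The ISA is invented.
    guest_code = [
        ("PUSH", "hello from the guest\n"),
        ("SYSCALL", 1),   # guest syscall 1: write the value on top of the stack
        ("PUSH", 0),
        ("SYSCALL", 2),   # guest syscall 2: exit with the value on top of the stack
    ]

    def host_write(arg):
        sys.stdout.write(arg)    # map the guest's "write" onto host stdout

    def host_exit(arg):
        raise SystemExit(arg)    # map the guest's "exit" onto a host mechanism

    # Operating system call mapping: guest syscall numbers -> host implementations.
    SYSCALL_MAP = {1: host_write, 2: host_exit}

    def translate(block):
        """Dynamic binary translation, reduced to its essence: turn a guest
        code block into a host-executable function once, then reuse it."""
        def run(stack):
            for op, arg in block:
                if op == "PUSH":
                    stack.append(arg)
                elif op == "SYSCALL":
                    SYSCALL_MAP[arg](stack.pop())   # call mapping at the boundary
        return run

    translated = translate(guest_code)   # translate once...
    translated([])                       # ...then execute on the host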

Because the software runs on a virtualized equivalent of the original computer, it does not require recompilation or porting, saving time and development resources. However, the processing overhead of binary translation and call mapping imposes a performance penalty compared to natively compiled software. For this reason, cross-platform virtualization may be used as a temporary solution until resources are available to port the software. Alternatively, it may be used to keep legacy code in service: running on a newer and faster machine, the code can maintain adequate performance even with virtualization overhead.

By creating an abstraction layer capable of running software compiled for a different computer system, cross-platform virtualization builds on the concept of virtualization formalized by Gerald J. Popek and Robert P. Goldberg in their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures".[1] Cross-platform virtualization is distinct from simple emulation and binary translation, which involve the direct translation of one instruction set to another, since the inclusion of operating system call mapping provides a more complete virtualized environment. It is also complementary to server virtualization and desktop virtualization solutions, since these are typically constrained to a single instruction set, such as x86 or Power ISA. Modern variants of cross-platform virtualization may employ hardware acceleration techniques[2] to offset some of the cost incurred in guest-to-host system translation.

Related Research Articles

In computing, a compiler is a computer program that translates computer code written in one programming language into another language. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language to create an executable program.
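As a sketch of that translation step, the toy compiler below lowers a Python arithmetic expression into three-address code for an invented register machine; the instruction names and temporaries are illustrative only:

    import ast
    import itertools

    def compile_to_tac(source):
        """Translate an arithmetic expression into invented three-address code."""
        names = {ast.Add: "add", ast.Sub: "sub", ast.Mult: "mul", ast.Div: "div"}
        temps = (f"t{i}" for i in itertools.count())
        lines = []

        def walk(node):
            if isinstance(node, ast.Constant):
                return str(node.value)
            # Post-order: compile both operands before emitting the operator.
            left, right = walk(node.left), walk(node.right)
            dst = next(temps)
            lines.append(f"{dst} = {names[type(node.op)]} {left}, {right}")
            return dst

        walk(ast.parse(source, mode="eval").body)
        return "\n".join(lines)

    print(compile_to_tac("1 + 2 * 3"))
    # t0 = mul 2, 3
    # t1 = add 1, t0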

In computing, a virtual machine (VM) is the virtualization or emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination of the two. Virtual machines differ widely and are organized by their function.

<span class="mw-page-title-main">Interpreter (computing)</span> Program that executes source code without a separate compilation step

In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution (the second strategy is sketched after this list):

  1. Parse the source code and perform its behavior directly;
  2. Translate source code into some efficient intermediate representation or object code and immediately execute that;
  3. Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter's virtual machine.
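A minimal sketch of the second strategy, assuming an invented (opcode, operand) bytecode for arithmetic expressions:

    import ast
    import operator

    OPS = {"ADD": operator.add, "SUB": operator.sub,
           "MUL": operator.mul, "DIV": operator.truediv}

    def to_bytecode(source):
        """Translate source into an intermediate representation."""
        names = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}
        def walk(node):
            if isinstance(node, ast.Constant):
                return [("PUSH", node.value)]
            return walk(node.left) + walk(node.right) + [(names[type(node.op)], None)]
        return walk(ast.parse(source, mode="eval").body)

    def execute(bytecode):
        """Immediately execute the intermediate representation on a value stack."""
        stack = []
        for op, arg in bytecode:
            if op == "PUSH":
                stack.append(arg)
            else:
                right, left = stack.pop(), stack.pop()
                stack.append(OPS[op](left, right))
        return stack.pop()

    print(execute(to_bytecode("(4 - 1) * 7")))   # 21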

In computing, cross-platform software is computer software that is designed to work on several computing platforms. Some cross-platform software requires a separate build for each platform, but some can be run directly on any platform without special preparation, being written in an interpreted language or compiled to portable bytecode for which the interpreters or run-time packages are common or standard components of all supported platforms.

L4 is a family of second-generation microkernels, used to implement a variety of types of operating systems (OS), though mostly for Unix-like, Portable Operating System Interface (POSIX) compliant types.

In software engineering, porting is the process of adapting software for the purpose of achieving some form of execution in a computing environment that is different from the one that a given program was originally designed for. The term is also used when software or hardware is changed to make it usable in different environments.

In computing, binary translation is a form of binary recompilation where sequences of instructions are translated from a source instruction set to the target instruction set. In some cases such as instruction set simulation, the target instruction set may be the same as the source instruction set, providing testing and debugging features such as instruction trace, conditional breakpoints and hot spot detection.
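A sketch of the idea, assuming two invented instruction sets; note that one source instruction may expand into several target instructions:

    # Per-instruction translation rules: source mnemonic -> target sequence.
    RULES = {
        "LOADI": lambda r, imm: [f"movi {r}, {imm}"],
        "ADD":   lambda d, s:   [f"add {d}, {d}, {s}"],
        # The source ISA has a fused decrement-and-branch; the target does not,
        # so this one instruction becomes two.
        "DBNZ":  lambda r, lbl: [f"subi {r}, {r}, 1", f"bnez {r}, {lbl}"],
    }

    def translate(source_program):
        target = []
        for mnemonic, *operands in source_program:
            target.extend(RULES[mnemonic](*operands))
        return target

    source = [("LOADI", "r1", 3), ("ADD", "r2", "r1"), ("DBNZ", "r1", "loop")]
    print("\n".join(translate(source)))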

In computing, an opcode is the portion of a machine language instruction that specifies the operation to be performed. Besides the opcode itself, most instructions also specify the data they will process, in the form of operands. In addition to opcodes used in the instruction set architectures of various CPUs, which are hardware devices, they can also be used in abstract computing machines as part of their bytecode specifications.
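A sketch of how an opcode and its operands might be extracted from an instruction word, assuming an invented 16-bit encoding (4-bit opcode, two 4-bit register fields, 4-bit immediate):

    MNEMONICS = {0x1: "load", 0x2: "store", 0x3: "add"}   # invented opcode table

    def decode(word):
        opcode = (word >> 12) & 0xF    # top four bits select the operation
        rd     = (word >> 8)  & 0xF    # destination register field
        rs     = (word >> 4)  & 0xF    # source register field
        imm    = word         & 0xF    # small immediate operand
        return MNEMONICS[opcode], rd, rs, imm

    print(decode(0x3215))   # ('add', 2, 1, 5)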

The Popek and Goldberg virtualization requirements are a set of conditions sufficient for a computer architecture to support system virtualization efficiently. They were introduced by Gerald J. Popek and Robert P. Goldberg in their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures". Even though the requirements are derived under simplifying assumptions, they still represent a convenient way of determining whether a computer architecture supports efficient virtualization and provide guidelines for the design of virtualized computer architectures.
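The core of their result can be paraphrased as a subset check: an architecture supports efficient trap-and-emulate virtualization if its sensitive instructions are a subset of its privileged instructions. A sketch over an invented instruction table (the "popf_like" entry is modeled on x86's POPF):

    INSTRUCTIONS = {
        # name:      (sensitive, privileged)
        "add":       (False, False),
        "load_cr":   (True,  True),    # touches control state and traps, as required
        "set_mode":  (True,  True),
        "popf_like": (True,  False),   # sensitive but unprivileged: breaks the condition
    }

    def virtualizable(isa):
        """Popek-Goldberg condition: every sensitive instruction is privileged."""
        return all(priv for sens, priv in isa.values() if sens)

    print(virtualizable(INSTRUCTIONS))   # False, as with classic (pre-VT-x) x86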

x86 virtualization is the use of hardware-assisted virtualization capabilities on an x86/x86-64 CPU.

In computing, hardware-assisted virtualization is a platform virtualization approach that enables efficient full virtualization using help from hardware capabilities, primarily from the host processors. Full virtualization is used to emulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system effectively executes in complete isolation. Hardware-assisted virtualization was added to x86 processors in 2005 (Intel VT-x), 2006 (AMD-V), and 2010 (VIA VT).

Gerald John "Jerry" Popek was an American computer scientist, known for his research on operating systems and virtualization.

<span class="mw-page-title-main">Full virtualization</span> Computing technique involving instances of an environment

In computer science, full virtualization is a virtualization technique developed in the late 1990s. It is different from simulation and emulation: virtualization creates instances of a virtual environment, whereas simulation models the environment and emulation replicates the target environment. Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines, including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine and is intended to run in a virtual machine. In such an environment, any software capable of execution on the raw hardware can be run in the virtual machine, including, in particular, any operating system. The obvious test of full virtualization is whether an operating system intended for stand-alone use can successfully run inside a virtual machine.

Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.

Binary-code compatibility is a property of a computer system, meaning that it can run the same executable code, typically machine code for a general-purpose central processing unit (CPU), that another computer system can run. Source-code compatibility, on the other hand, means that recompilation or interpretation is necessary before the program can be run on the compatible system.

<span class="mw-page-title-main">Emulator</span> System allowing a device to imitate another

In computing, an emulator is hardware or software that enables one computer system to behave like another computer system. An emulator typically enables the host system to run software or use peripheral devices designed for the guest system. Emulation refers to the ability of a computer program in an electronic device to emulate another program or device.
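At its core, a software emulator is a fetch-decode-execute loop over the guest's instructions. A sketch for an invented accumulator machine (real emulators also model memory maps, devices, and timing):

    def emulate(program):
        acc, pc = 0, 0                    # accumulator and program counter
        while pc < len(program):
            op, arg = program[pc]         # fetch and decode
            pc += 1
            if op == "LDI":               # load immediate
                acc = arg
            elif op == "ADDI":            # add immediate
                acc += arg
            elif op == "JNZ" and acc != 0:  # jump if accumulator is non-zero
                pc = arg
            elif op == "HALT":
                break
        return acc

    # Count down from 3: the decrement at address 1 runs three times.
    print(emulate([("LDI", 3), ("ADDI", -1), ("JNZ", 1), ("HALT", None)]))  # 0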

<span class="mw-page-title-main">Kernel (operating system)</span> Core of a computer operating system

The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. The kernel is also responsible for preventing and mitigating conflicts between different processes. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources such as CPU and cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup. It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit.

Robert P. Goldberg was an American computer scientist, known for his research on operating systems and virtualization.

A high-level language computer architecture (HLLCA) is a computer architecture designed to be targeted by a specific high-level programming language (HLL), rather than the architecture being dictated by hardware considerations. It is accordingly also termed language-directed computer design, a term coined in McKeeman (1967). HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s, following the dramatic failure of the Intel 432 (1981), the emergence of optimizing compilers, reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, and the later development of just-in-time compilation (JIT) for HLLs. A detailed survey and critique can be found in Ditzel & Patterson (1980).

In computing, a system virtual machine is a virtual machine (VM) that provides a complete system platform and supports the execution of a complete operating system (OS). These usually emulate an existing architecture and are built with the purpose of providing a platform to run programs where the real hardware is not available for use, of running multiple instances of virtual machines for more efficient use of computing resources (in terms of both energy consumption and cost effectiveness), or both. A VM was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine".

References

  1. Popek, Gerald J.; Goldberg, Robert P. (1974). "Formal Requirements for Virtualizable Third Generation Architectures". Communications of the ACM. 17 (7): 412–421. doi:10.1145/361011.361073. S2CID 53302633.
  2. Spink, Tom; Wagstaff, Harry; Franke, Björn (2016-12-28). "Hardware-Accelerated Cross-Architecture Full-System Virtualization". ACM Transactions on Architecture and Code Optimization. 13 (4): 36. doi:10.1145/2996798. ISSN 1544-3566.