An embedded hypervisor is a hypervisor that supports the requirements of embedded systems.
The requirements for an embedded hypervisor are distinct from hypervisors targeting server and desktop applications. An embedded hypervisor is designed into the embedded device from the outset, rather than loaded subsequent to device deployment. While desktop and enterprise environments use hypervisors to consolidate hardware and isolate computing environments from one another, in an embedded system, the various components typically function collectively to provide the device's functionality. Mobile virtualization overlaps with embedded system virtualization, and shares some use cases.
Typical attributes of embedded virtualization include efficiency, security, communication, isolation and real-time capabilities. [1]
Software virtualization has been a major topic in the enterprise space since the late 1960s, but only since the early 2000s has its use appeared in embedded systems. The use of virtualization and its implementation in the form of a hypervisor in embedded systems are very different from enterprise applications. An effective implementation of an embedded hypervisor must deal with a number of issues specific to such applications. These issues include the highly integrated nature of embedded systems, the requirement for isolated functional blocks within the system to communicate rapidly, the need for real-time/deterministic performance, the resource-constrained target environment and the wide range of security and reliability requirements.
A hypervisor provides one or more software virtualization environments in which other software, including operating systems, can run with the appearance of full access to the underlying system hardware, where in fact such access is under the complete control of the hypervisor. These virtual environments are called virtual machines (VMs), and a hypervisor will typically support multiple VMs managed simultaneously.
Hypervisors are generally classed as either type 1 or type 2, depending on whether the hypervisor runs exclusively in supervisor mode or privileged mode (type 1) or is itself hosted by an operating system as a regular application (type 2).
Type 1 hypervisors manage key system resources required to maintain control over the virtual machines, and facilitate a minimal trusted computing base (TCB). Type 2 hypervisors typically run as an application within a more general-purpose operating system, relying on services of the OS to manage system resources; kernel extensions are now often loaded so that the hypervisor can take advantage of processors with hardware virtualization support.
An embedded hypervisor is most often a type 1 hypervisor which supports the requirements of embedded systems development. See references [2] and [3] for a more detailed discussion.
These requirements are summarized below.
An embedded hypervisor typically provides multiple VMs, each of which emulates a hardware platform on which the virtualised software executes. The VM may emulate the underlying native hardware, in which case embedded code that runs on the real machine will run on the virtual machine and vice versa. An emulation of the native hardware is not always possible or desired, and a virtual platform may be defined instead.
When a VM provides a virtual platform, guest software has to be ported to run in this environment; however, since a virtual platform can be defined without reliance on the native hardware, guest software supporting a virtual platform can run unmodified across the various distinct hardware platforms supported by the hypervisor.
Embedded hypervisors employ either paravirtualization or the virtualization features of the underlying CPU. Paravirtualization is required in cases where the hardware does not assist, and involves often extensive modifications to the core architecture-support code of the guest kernels. Emulation of the hardware at the register level is rarely seen in embedded hypervisors, as it is very complex and slow. The custom nature of embedded systems means that the need to support unmodified, binary-only guest software which requires these techniques is rare.
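The contrast between the two techniques can be illustrated with a minimal sketch of paravirtualization, in which privileged operations in the guest kernel are replaced by explicit calls into the hypervisor. The hypercall numbers, names and trap mechanism below are hypothetical and only stand in for whatever a real hypervisor defines; on real hardware the guest-side stub would trap into the hypervisor rather than call it directly.

```c
/* Minimal sketch of a paravirtualized interface: the guest kernel is
 * modified to call the hypervisor explicitly instead of executing
 * privileged instructions.  All names and numbers are hypothetical. */
#include <stdint.h>
#include <stdio.h>

enum hypercall_id {
    HC_SET_PAGE_TABLE = 0,   /* guest asks hypervisor to switch page tables */
    HC_MASK_IRQ       = 1,   /* guest asks hypervisor to mask an interrupt  */
    HC_YIELD          = 2    /* guest gives up the rest of its time slice   */
};

/* Hypervisor side: validate and carry out the request on behalf of the VM. */
static long hypervisor_dispatch(enum hypercall_id id, uintptr_t arg)
{
    switch (id) {
    case HC_SET_PAGE_TABLE:
        printf("hypervisor: installing guest page table at 0x%lx\n",
               (unsigned long)arg);
        return 0;
    case HC_MASK_IRQ:
        printf("hypervisor: masking IRQ %lu for this VM\n", (unsigned long)arg);
        return 0;
    case HC_YIELD:
        printf("hypervisor: scheduling another VM\n");
        return 0;
    default:
        return -1;               /* unknown request: refuse it */
    }
}

/* Guest side: on real hardware this stub would trap into the hypervisor
 * (e.g. via a software interrupt); here it simply calls the dispatcher. */
static long hypercall(enum hypercall_id id, uintptr_t arg)
{
    return hypervisor_dispatch(id, arg);
}

int main(void)
{
    /* A paravirtualized guest kernel replaces a privileged "load page
     * table base register" instruction with this explicit request. */
    hypercall(HC_SET_PAGE_TABLE, 0x80000000u);
    hypercall(HC_MASK_IRQ, 7);
    hypercall(HC_YIELD, 0);
    return 0;
}
```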
The size and efficiency of the implementation is also an issue for an embedded hypervisor, as embedded systems are often much more resource-constrained than desktop and server platforms. It is also desirable for the hypervisor to maintain, as closely as possible, the native speed, real-time responsiveness, determinism and power efficiency of the underlying hardware platform.
Implementations for embedded systems applications have most commonly been based on small microkernel and separation kernel designs, with virtualization built-in as an integral capability. This was introduced with PikeOS in 2005. [4] Examples of these approaches have been produced by companies such as Open Kernel Labs (microkernel followed by a separation kernel) and LynuxWorks (separation kernel). VirtualLogix appears to take the position that an approach based on a dedicated Virtual Machine Monitor (VMM) would be even smaller and more efficient. This issue is the subject of some ongoing debate. [5] [6] [7] However, the main point at issue is the same on all sides of the discussion – the speed and size of the implementation (for a given level of functionality) are of major importance. For example: " ... hypervisors for embedded use must be real-time capable, as well as resource-miserly."
Embedded systems are typically highly resource constrained due to cost and technical limitations of the hardware. It is therefore important for an embedded hypervisor to be as efficient as possible. The microkernel and separation kernel based designs allow for small and efficient hypervisors. Thus embedded hypervisors usually have a memory footprint from several tens to several hundred kilobytes, depending on the efficiency of the implementation and the level of functionality provided. An implementation requiring several megabytes of memory (or more) is generally not acceptable.
With the small TCB of a type 1 embedded hypervisor, the system can be made highly secure and reliable. [8] Standard software-engineering techniques, such as code inspections and systematic testing, can be used to reduce the number of bugs in such a small code base to a tiny fraction of the defects that must be expected for a hypervisor and guest OS combination that may be 100,000–300,000 lines in total. [9]
One of the most important functions required in an embedded hypervisor is a secure message-passing mechanism, which is needed to support real-time communication between processes. In the embedded environment, a system will typically have a number of closely coupled tasks, some of which may require secure isolation from each other. In a virtualized environment, the embedded hypervisor will support and enforce this isolation between multiple VMs. These VMs will therefore require access to a mechanism that provides low-latency communication between the tasks.
An inter-process communication (IPC) mechanism can be used to provide these functions, as well as to invoke all system services, and can be implemented in a manner that ensures the desired level of VM isolation is maintained. Also, due to its significant impact on system performance, such an IPC mechanism should be highly optimised for minimal latency. [10]
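One common realisation of such a mechanism is a bounded message queue placed in a region of memory that the hypervisor maps into exactly the two VMs permitted to communicate. The single-producer/single-consumer ring below is a minimal sketch of this idea; the structure, sizes and names are illustrative assumptions rather than any particular product's interface, and a real port would add memory barriers and hypervisor-enforced access checks.

```c
/* Sketch of a bounded message ring such as an embedded hypervisor might
 * place in memory shared by exactly two VMs.  Names and sizes are
 * illustrative; a real implementation adds memory barriers and per-VM
 * access-control checks enforced by the hypervisor. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define SLOTS    8          /* power of two keeps index wrap-around cheap */
#define MSG_SIZE 64

struct msg_ring {
    volatile uint32_t head;            /* written only by the sender VM   */
    volatile uint32_t tail;            /* written only by the receiver VM */
    uint8_t slot[SLOTS][MSG_SIZE];
};

/* Sender side: returns 0 on success, -1 if the ring is full. */
static int ring_send(struct msg_ring *r, const void *data, size_t len)
{
    if (len > MSG_SIZE || r->head - r->tail == SLOTS)
        return -1;
    memcpy(r->slot[r->head % SLOTS], data, len);
    r->head++;            /* publish; a real port inserts a write barrier */
    return 0;
}

/* Receiver side: returns number of bytes copied, or -1 if empty. */
static int ring_recv(struct msg_ring *r, void *buf, size_t len)
{
    if (r->head == r->tail)
        return -1;
    size_t n = len < MSG_SIZE ? len : MSG_SIZE;
    memcpy(buf, r->slot[r->tail % SLOTS], n);
    r->tail++;
    return (int)n;
}

int main(void)
{
    static struct msg_ring ring;   /* stands in for hypervisor-shared memory */
    char out[MSG_SIZE];

    ring_send(&ring, "sensor reading: 42", 19);
    if (ring_recv(&ring, out, sizeof out) > 0)
        printf("receiver VM got: %s\n", out);
    return 0;
}
```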
An embedded hypervisor needs to be in complete control of system resources, including memory accesses, to ensure that software cannot break out of the VM. A hypervisor therefore requires the target CPU to provide memory management support (typically using an MMU). Many embedded processors, such as ARM, MIPS and PowerPC, have followed desktop and server chip vendors in adding hardware support for virtualization. However, a large proportion of embedded processors still do not provide such support, and for these a hypervisor supporting paravirtualization is required.
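The hypervisor's control over memory can be pictured as a second stage of address translation: the guest manages its own page tables in terms of guest-physical addresses, while only the hypervisor's table decides which machine pages, if any, those addresses may reach. The following simplified sketch uses invented structure names and a single flat table; on real hardware (for example ARM stage-2 translation or Intel EPT) the MMU performs this lookup.

```c
/* Simplified sketch of second-stage address translation: the guest sees
 * "guest-physical" pages, and only the hypervisor's table says which
 * machine pages (if any) they map to.  Structure names are invented. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT  12
#define GUEST_PAGES 16                 /* tiny guest for illustration */

struct stage2_entry {
    uint64_t machine_page;             /* machine frame number            */
    bool     valid;                    /* is the guest allowed this page? */
    bool     writable;
};

struct vm {
    struct stage2_entry s2[GUEST_PAGES];
};

/* Translate a guest-physical address, enforcing the hypervisor's policy.
 * Returns true and fills *machine_addr on success, false on a fault that
 * the hypervisor would handle (e.g. by injecting an exception). */
static bool stage2_translate(const struct vm *vm, uint64_t guest_addr,
                             bool write, uint64_t *machine_addr)
{
    uint64_t gfn = guest_addr >> PAGE_SHIFT;
    if (gfn >= GUEST_PAGES || !vm->s2[gfn].valid)
        return false;                  /* page not assigned to this VM */
    if (write && !vm->s2[gfn].writable)
        return false;                  /* read-only mapping */
    *machine_addr = (vm->s2[gfn].machine_page << PAGE_SHIFT)
                  | (guest_addr & ((1u << PAGE_SHIFT) - 1));
    return true;
}

int main(void)
{
    struct vm vm = { 0 };
    vm.s2[3] = (struct stage2_entry){ .machine_page = 0x400,
                                      .valid = true, .writable = false };
    uint64_t ma;
    if (stage2_translate(&vm, 0x3010, false, &ma))
        printf("guest 0x3010 -> machine 0x%llx\n", (unsigned long long)ma);
    if (!stage2_translate(&vm, 0x3010, true, &ma))
        printf("write to read-only guest page denied\n");
    return 0;
}
```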
ARM processors are notable in that most of their application-class processor designs support a technology called ARM TrustZone, which in essence provides hardware support for one privileged and one unprivileged VM. Normally, a minimal Trusted Execution Environment (TEE) OS runs in the Secure World and a native kernel runs in the Non-secure World.
Some of the most common use cases for an embedded hypervisor are: [11] [12]
1. OS independence
Designers of embedded systems may have many hardware drivers and system services which are specific to a target platform. If support for more than one OS is required on the platform, either concurrently or consecutively using a common hardware design, an embedded hypervisor can greatly simplify the task. Such drivers and system services can be implemented just once for the virtualized environment; these services are then available to any hosted OS. This level of abstraction also allows the embedded developer to implement or change a driver or service in either hardware or software at any point, without this being apparent to the hosted OS.
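One way to realise this "implement once, use from any guest" approach is for the hypervisor to expose a small virtual-device interface that each guest's front-end driver targets, while a single back-end drives the real hardware. The operation table below is a hypothetical sketch of such an interface, not the API of any particular hypervisor.

```c
/* Sketch of a virtual device interface: each guest OS ships a small
 * front-end driver written against these operations, while one back-end
 * (here, a fake UART) talks to the real hardware.  All names are
 * hypothetical. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct vdev_ops {
    int (*write)(void *hw, const char *buf, size_t len);
    int (*read) (void *hw, char *buf, size_t len);
};

/* Back-end implemented once, alongside the hypervisor. */
static int uart_write(void *hw, const char *buf, size_t len)
{
    (void)hw;
    return (int)fwrite(buf, 1, len, stdout);   /* stands in for MMIO writes */
}

static int uart_read(void *hw, char *buf, size_t len)
{
    (void)hw; (void)buf; (void)len;
    return 0;                                  /* nothing pending */
}

static const struct vdev_ops uart_ops = { uart_write, uart_read };

/* Guest front-end: the same call works whichever back-end is plugged in,
 * so the guest OS never needs to know about the real hardware. */
static void guest_console_print(const struct vdev_ops *dev, const char *msg)
{
    dev->write(NULL, msg, strlen(msg));
}

int main(void)
{
    guest_console_print(&uart_ops, "hello from a guest VM\n");
    return 0;
}
```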
2. Support for multiple operating systems on a single processor
Typically this is used to run a real-time operating system (RTOS) for low-level real-time functionality (such as the communication stack) while at the same time running a general-purpose OS (GPOS), like Linux or Windows, to support user applications such as a web browser or calendar. The objective might be to upgrade an existing design without the added complexity of a second processor, or simply to minimize the bill of materials (BoM).
3. System security
An embedded hypervisor is able to provide secure encapsulation for any subsystem defined by the developer, so that a compromised subsystem cannot interfere with other subsystems. For example, an encryption subsystem needs to be strongly shielded from attack to prevent leaking the information the encryption is supposed to protect. As the embedded hypervisor can encapsulate a subsystem in a VM, it can then enforce the required security policies for communication to and from that subsystem.
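Such a policy can be expressed as an explicit rule table that the hypervisor consults before delivering any inter-VM message, as in the following minimal sketch; the VM identifiers and rules are illustrative.

```c
/* Sketch of a hypervisor-enforced communication policy: a message from
 * one VM is delivered to another only if an explicit rule allows it.
 * VM identifiers and rules are illustrative. */
#include <stdbool.h>
#include <stdio.h>

enum vm_id { VM_CRYPTO, VM_NETWORK, VM_UI, VM_COUNT };

/* allowed[src][dst] == true means src may send messages to dst. */
static const bool allowed[VM_COUNT][VM_COUNT] = {
    [VM_UI][VM_CRYPTO]  = true,   /* UI may request signatures          */
    [VM_CRYPTO][VM_UI]  = true,   /* and receive the results            */
    [VM_NETWORK][VM_UI] = true,   /* network data flows to the UI       */
    /* no rule lets VM_CRYPTO talk to VM_NETWORK, so keys cannot leak
     * that way even if the crypto subsystem is compromised */
};

static bool deliver(enum vm_id src, enum vm_id dst, const char *msg)
{
    if (!allowed[src][dst]) {
        printf("hypervisor: blocked message %d -> %d\n", src, dst);
        return false;
    }
    printf("hypervisor: delivered to VM %d: %s\n", dst, msg);
    return true;
}

int main(void)
{
    deliver(VM_UI, VM_CRYPTO, "sign this blob");      /* allowed */
    deliver(VM_CRYPTO, VM_NETWORK, "raw key bits");   /* blocked */
    return 0;
}
```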
4. System reliability
The encapsulation of subsystem components into a VM ensures that the failure of any subsystem cannot impact other subsystems. This encapsulation keeps faults from propagating from a subsystem in one VM to a subsystem in another VM, improving reliability. It may also allow a subsystem to be automatically shut down and restarted on fault detection. This can be particularly important for embedded device drivers, as these are where the highest density of fault conditions is seen to occur and are thus the most common cause of OS failure and system instability. It also allows the encapsulation of operating systems that were not necessarily built to the reliability standards demanded of the new system design.
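A minimal sketch of this fault-containment-and-restart behaviour is given below, using an invented state model; a real hypervisor would detect the fault through an exception or watchdog rather than a flag set in code.

```c
/* Sketch of fault containment with automatic restart: the hypervisor
 * notices that a subsystem VM has faulted and restarts only that VM,
 * leaving the others untouched.  The state model is illustrative. */
#include <stdio.h>

enum vm_state { VM_RUNNING, VM_FAULTED };

struct subsystem_vm {
    const char   *name;
    enum vm_state state;
    int           restarts;
};

static void monitor_tick(struct subsystem_vm *vms, int count)
{
    for (int i = 0; i < count; i++) {
        if (vms[i].state == VM_FAULTED) {
            vms[i].state = VM_RUNNING;          /* reload and restart it */
            vms[i].restarts++;
            printf("restarted %s (restart #%d); other VMs unaffected\n",
                   vms[i].name, vms[i].restarts);
        }
    }
}

int main(void)
{
    struct subsystem_vm vms[] = {
        { "driver-vm", VM_RUNNING, 0 },
        { "ui-vm",     VM_RUNNING, 0 },
    };

    vms[0].state = VM_FAULTED;      /* e.g. a device driver has crashed */
    monitor_tick(vms, 2);
    return 0;
}
```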
5. Dynamic update of system software
Subsystem software or applications can be securely updated and tested for integrity by downloading to a secure VM before “going live” in an executing system. Even if this process then fails, the system can revert to its former state by restarting the original software subsystem/application, without halting system operation.
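The update flow can be reduced to a small decision: stage the new image in an isolated VM, verify its integrity, and switch over only if the check passes, otherwise keep the original image running. The sketch below uses a simple checksum as a stand-in for the cryptographic signature check a real system would perform; all names are illustrative.

```c
/* Sketch of a staged software update: the new image is verified inside a
 * quarantine VM, and the system switches over only if the check passes,
 * otherwise it keeps running the original image.  The checksum is a
 * stand-in for a real cryptographic signature check. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct image {
    const uint8_t *data;
    size_t         len;
    uint32_t       expected_sum;
};

static uint32_t checksum(const uint8_t *p, size_t n)
{
    uint32_t s = 0;
    while (n--)
        s = s * 31 + *p++;
    return s;
}

/* Returns a pointer to the image that should be live after the attempt. */
static const struct image *try_update(const struct image *current,
                                      const struct image *staged)
{
    if (checksum(staged->data, staged->len) != staged->expected_sum) {
        printf("update rejected: integrity check failed, keeping old image\n");
        return current;                 /* roll back without halting */
    }
    printf("update verified: switching the subsystem to the new image\n");
    return staged;
}

int main(void)
{
    static const uint8_t old_code[] = { 1, 2, 3 };
    static const uint8_t new_code[] = { 4, 5, 6 };

    struct image current = { old_code, sizeof old_code,
                             checksum(old_code, sizeof old_code) };
    struct image staged  = { new_code, sizeof new_code, 0 /* deliberately wrong */ };

    const struct image *live = try_update(&current, &staged);
    printf("live image is the %s one\n", live == &current ? "original" : "new");
    return 0;
}
```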
6. Legacy code re-use
Virtualization allows legacy embedded code to be used with the OS environment it was developed and validated with, while freeing the developer to use a different OS environment, in a separate VM, for new services and applications. Legacy embedded code, written for a particular system configuration, may assume exclusive control of all system resources of memory, I/O and processor. This code base can be re-used unchanged on alternative system configurations of I/O and memory through the use of a VM that presents a resource map and functionality consistent with the original system configuration, effectively de-coupling the legacy code from the specifics of a new or modified hardware design.
Where access to the operating system source code is available, paravirtualization is commonly used to virtualize the OSs on processors without hardware virtualization support, so that the applications supported by the OS can also run unmodified and without re-compilation on new hardware platform designs.
Even without source access, legacy binary code can be executed in systems running on processors with hardware virtualization support, such as the AMD-V and Intel VT technologies and the latest ARM processors with virtualization support. [13] The legacy binary code could run completely unmodified in a VM, with all resource mapping handled by the embedded hypervisor, assuming the system hardware provides equivalent functionality.
7. IP protection
Valuable proprietary IP may need protection from theft or misuse when an embedded platform is being shipped for further development work by (for example) an OEM customer. An embedded hypervisor makes it possible to restrict access by other system software components to a specific part of the system containing IP that needs to be protected.
8. Software license segregation
Software IP operating under one licensing scheme can be separated from other software IP operating under a different scheme. For example, the embedded hypervisor can provide an isolated execution environment for proprietary software sharing the processor with open source software subject to the GPL. [14]
9. Migration of applications from uni-core to multi-core systems
As new processors utilise multi-core architectures to increase performance, the embedded hypervisor can manage the underlying architecture and present a uni-processor environment to legacy applications and operating systems while efficiently using the new multiprocessor system design. In this way a change in hardware environment does not require a change to the existing software.