| Original author(s) | Sun Microsystems |
|---|---|
| Developer(s) | illumos and Oracle |
| Initial release | January 2005 |
| Written in | C |
| Operating system | Oracle Solaris |
| Platform | SPARC, x86 |
| Available in | English |
| Type | OS-level virtualization |
| License | CDDL, Proprietary |
| Website | oracle |
Solaris Containers (including Solaris Zones) is an implementation of operating-system-level virtualization technology for x86 and SPARC systems, first released publicly in February 2004 in build 51 beta of Solaris 10, and subsequently in the first full release of Solaris 10 in 2005. It is present in illumos (formerly OpenSolaris) distributions, such as OpenIndiana, SmartOS, Tribblix and OmniOS, and in the official Oracle Solaris 11 release.
A Solaris Container is the combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance. By consolidating multiple sets of application services onto one system and placing each into an isolated virtual server container, system administrators can reduce costs while providing most of the protections of separate machines on a single machine. [1]
The name of this technology changed during development and the pre-launch public events. Before the launch of Solaris Zones in 2005, a Solaris Container was any type of workload constrained by Solaris resource management features; resource management had earlier shipped as a separate software package. By 2007 the term Solaris Containers came to mean a Solaris Zone combined with resource management controls.
Later, usage gradually shifted so that Solaris Containers referred specifically to non-global zones, with or without additional resource management. Zones hosted by a global zone are known as "non-global zones" but are sometimes just called "zones". The term "local zone" is specifically discouraged, since in this usage "local" is not an antonym of "global". The global zone has visibility of all resources on the system, whether these are associated with the global zone or a non-global zone. Unless otherwise noted, "zone" will refer to non-global zones in this article.
To simplify terminology, Oracle dropped the use of the term Container in Solaris 11, and reverted to the use of the term Solaris Zone irrespective of the use of resource management controls.
Each zone has its own node name, access to virtual or physical network interfaces, [2] and storage assigned to it; there is no requirement for a zone to have any minimum amount of dedicated hardware other than the disk storage necessary for its unique configuration. Specifically, it does not require a dedicated CPU, memory, physical network interface or HBA, although any of these can be allocated specifically to one zone. [3]
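As an illustrative sketch (the zone name, zonepath, IP address and NIC name e1000g0 are assumptions, not values from this article), a zone with its own network address is typically configured and installed from the global zone like this:

```
global# zonecfg -z webzone
zonecfg:webzone> create
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> add net
zonecfg:webzone:net> set physical=e1000g0
zonecfg:webzone:net> set address=192.168.1.10/24
zonecfg:webzone:net> end
zonecfg:webzone> verify
zonecfg:webzone> commit
zonecfg:webzone> exit
global# zoneadm -z webzone install
global# zoneadm -z webzone boot
global# zlogin webzone
```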
Each zone has a security boundary surrounding it, preventing a process associated with one zone from interacting with or observing processes in other zones. Each zone can be configured with its own separate user list. The system automatically manages user ID conflicts; that is, two zones on a system could have a user ID 10000 defined, and each would be mapped to its own unique global identifier. [4]
A zone can be in one of the following states:

- Configured: the zone's configuration is complete and committed to stable storage
- Incomplete: installation or uninstallation is in progress
- Installed: the zone's packages are installed on the system, but no virtual platform is associated with it
- Ready: the virtual platform is established, but no processes are running in the zone
- Running: user processes are executing in the zone
- Shutting down and Down: transitional states while the zone is being halted
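The zoneadm utility reports these states; a sketch of its output, with illustrative zone names and paths:

```
global# zoneadm list -cv
  ID NAME      STATUS     PATH             BRAND    IP
   0 global    running    /                native   shared
   1 webzone   running    /zones/webzone   native   shared
   - dbzone    installed  /zones/dbzone    native   shared
```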
Some programs cannot be executed from within a non-global zone; typically this is because the application requires privileges that cannot be granted within a container. Because a zone does not have its own separate kernel (in contrast to a hardware virtual machine), applications that require direct manipulation of kernel features, such as the ability to directly read or alter kernel memory space, may not work inside a container.
Zones impose very low CPU and memory overhead. Most types of zones share the global zone's virtual address space. A zone can be assigned to a resource pool (a processor set plus a scheduling class) to guarantee certain usage, can be capped at a fixed compute capacity ("capped CPU"), or can be given shares via fair-share scheduling. [5]
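A hedged sketch of these controls in zonecfg, assuming a zone named webzone and illustrative values (the capped-cpu, cpu-shares and scheduling-class properties require Solaris 10 8/07 or later):

```
# Cap the zone at 1.5 CPUs' worth of compute capacity:
zonecfg:webzone> add capped-cpu
zonecfg:webzone:capped-cpu> set ncpus=1.5
zonecfg:webzone:capped-cpu> end

# Alternatively, grant 20 shares under the fair-share scheduler:
zonecfg:webzone> set scheduling-class=FSS
zonecfg:webzone> set cpu-shares=20
```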
Currently a maximum of 8,191 non-global zones can be created within a single operating system instance. "Sparse Zones", in which most filesystem content is shared with the global zone, can take as little as 50 MB of disk space. "Whole Root Zones", in which each zone has its own copy of its operating system files, may occupy anywhere from several hundred megabytes to several gigabytes, depending on installed software. The 8,191 limit arises from the limit of 8,192 loopback connections per Solaris instance: each zone needs a loopback connection, and the global zone takes one, leaving 8,191 for the non-global zones.
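On Solaris 10, the choice between the two models is made when the zone is configured; a minimal sketch, with illustrative zone names:

```
# Sparse root (the Solaris 10 default): /lib, /platform, /sbin and /usr
# are inherited read-only from the global zone via inherit-pkg-dir.
zonecfg:sparsezone> create

# Whole root: "create -b" starts from a blank template with no
# inherit-pkg-dir resources, so the zone receives its own copies.
zonecfg:wholezone> create -b
```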
Even with Whole Root Zones, disk space requirements can be negligible if the zone's OS file system is a ZFS clone of the global zone image, since only blocks that differ from the snapshot image need to be stored on disk; this method also makes it possible to create new zones in a few seconds.
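On Solaris 10 this is exposed through zoneadm clone, which automatically takes a ZFS snapshot and clone when the zonepath lives on a ZFS dataset; the zone names below are illustrative:

```
# Export the source zone's configuration, adjust it for the new zone
# (zonepath, IP address, ...), then import it and clone:
global# zonecfg -z webzone export -f /tmp/newzone.cfg
global# vi /tmp/newzone.cfg
global# zonecfg -z newzone -f /tmp/newzone.cfg
global# zoneadm -z newzone clone webzone
global# zoneadm -z newzone boot
```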
Although all zones on the system share a common kernel, an additional feature set has been added called branded zones (BrandZ for short). This allows individual zones to behave in a manner other than the default brand of the global zone. The existing brands (October 2009) can be grouped into two categories:

- brands that provide the native Solaris operating environment, and
- brands that emulate another operating environment, such as lx (Linux), solaris8 and solaris9.
The brand for a zone is set at the time the zone is created. The second category is implemented with interposition points within the OS kernel that can be used to change the behavior of syscalls, process loading, thread creation, and other elements.
For the 'lx' brand, libraries from Red Hat Enterprise Linux 3 or an equivalent distribution such as CentOS are required to complete the emulated environment.
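A sketch of creating an lx branded zone on a release that ships the lx brand; the zone name and the path to the Linux image archive are assumptions:

```
global# zonecfg -z linuxzone
zonecfg:linuxzone> create -t SUNWlx
zonecfg:linuxzone> set zonepath=/zones/linuxzone
zonecfg:linuxzone> commit
zonecfg:linuxzone> exit
# Install from an archive of a supported Linux distribution:
global# zoneadm -z linuxzone install -d /path/to/centos-image.tar
global# zoneadm -z linuxzone boot
```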
The Solaris operating system provides man pages for Solaris Containers by default; more detailed documentation can be found at various on-line technical resources.
The first published document and hands-on reference for Solaris Zones was written in February 2004 by Dennis Clarke at Blastwave, providing the essentials of getting started. This document was greatly expanded upon by Brendan Gregg in July 2005. [8] Solaris 8 and Solaris 9 Containers were documented in detail by Dennis Clarke at Blastwave in April 2008. The Blastwave document appeared very early in the release cycle of the Solaris Containers technology, and Blastwave's implementation prompted a follow-up from Sun Microsystems marketing. The book Oracle Solaris 10 System Virtualization Essentials by Jeff Victor et al. offers feature details and best practices. More extensive documentation may be found at the Oracle documentation site. [9]
As of Solaris 10 10/08, Branded Zones are supported on the sun4us architecture (Fujitsu PRIMEPOWER servers) through packages FJSVs8brandr and FJSVs9brandr. [10]
Solaris is a proprietary Unix operating system originally developed by Sun Microsystems. After the Sun acquisition by Oracle in 2010, it was renamed Oracle Solaris.
These tables provide a comparison of operating systems for computer devices, listing general and technical information for a number of widely used and currently available PC or handheld operating systems. The article "Usage share of operating systems" provides a broader, and more general, comparison of operating systems that includes servers, mainframes and supercomputers.
OpenSolaris is a discontinued open-source computer operating system based on Solaris and created by Sun Microsystems. It was also, perhaps confusingly, the name of a project initiated by Sun to build a developer and user community around the eponymous operating system software.
A hypervisor, also known as a virtual machine monitor (VMM) or virtualizer, is a type of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Unlike an emulator, the guest executes most instructions on the native hardware. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system–level virtualization, where all instances must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.
OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, called containers, zones, virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels, or jails. Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources of that computer. However, programs running inside of a container can only see the container's contents and devices assigned to the container.
Nexenta OS, officially known as the Nexenta Core Platform, is a discontinued computer operating system based on OpenSolaris and Ubuntu that runs on IA-32- and x86-64-based systems. It emerged in fall 2005, after Sun Microsystems started the OpenSolaris project in June of that year. Nexenta Systems, Inc. initiated the project and sponsored its development. Nexenta OS version 1.0 was released in February 2008.
OpenVZ is an operating-system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.
The following is a timeline of virtualization development. In computing, virtualization is the use of a computer to simulate another computer. Through virtualization, a host simulates a guest by exposing virtual hardware devices, which may be done through software or by allowing access to a physical device connected to the machine.
Oracle Linux is a Linux distribution packaged and freely distributed by Oracle, available partially under the GNU General Public License since late 2006. It is compiled from Red Hat Enterprise Linux (RHEL) source code, replacing Red Hat branding with Oracle's. It is also used by Oracle Cloud and Oracle Engineered Systems such as Oracle Exadata and others.
Oracle VM VirtualBox is a hosted hypervisor for x86 virtualization developed by Oracle Corporation. VirtualBox was originally created by InnoTek Systemberatung GmbH, which was acquired by Sun Microsystems in 2008, which was in turn acquired by Oracle in 2010.
Logical Domains is the server virtualization and partitioning technology for SPARC V9 processors. It was first released by Sun Microsystems in April 2007. After the Oracle acquisition of Sun in January 2010, the product was rebranded as Oracle VM Server for SPARC from version 2.0 onwards.
Solaris network virtualization and resource control is a set of features originally developed by Sun Microsystems as the OpenSolaris Crossbow umbrella project, providing an internal network virtualization and quality of service framework within the Solaris Operating System.
Sun xVM was a product line from Sun Microsystems that addressed virtualization technology on x86 platforms. One component was discontinued before the Oracle acquisition of Sun; the remaining two continue under Oracle branding.
Oracle VM Server for x86 is the server virtualization offering from Oracle Corporation. Oracle VM Server for x86 incorporates the free and open-source Xen hypervisor technology, supports Windows, Linux, and Solaris guests, and includes an integrated web-based management console. Oracle VM Server for x86 features a fully tested and certified Oracle Applications stack in an enterprise virtualization environment.
Illumos is a partly free and open-source Unix operating system. It is based on OpenSolaris, which was based on System V Release 4 (SVR4) and the Berkeley Software Distribution (BSD). Illumos comprises a kernel, device drivers, system libraries, and utility software for system administration. This core is now the base for many different open-sourced Illumos distributions, in a similar way in which the Linux kernel is used in different Linux distributions.
Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
Google Compute Engine (GCE) is the infrastructure as a service (IaaS) component of Google Cloud Platform which is built on the global infrastructure that runs Google's search engine, Gmail, YouTube and other services. Google Compute Engine enables users to launch virtual machines (VMs) on demand. VMs can be launched from the standard images or custom images created by users. Google Compute Engine can be accessed via the Developer Console, RESTful API or command-line interface (CLI).
SmartOS is a free and open-source operating system based on illumos (and thus descended from OpenSolaris and UNIX System V Release 4) that combines OpenSolaris technology with bhyve and KVM virtualization. Its core kernel work is contributed to the illumos project. It features several technologies: Crossbow, DTrace, bhyve, KVM, ZFS, and Zones. Unlike other illumos distributions, SmartOS employs NetBSD pkgsrc package management. SmartOS is designed to be particularly suitable for building clouds and generating appliances. It was originally developed for and by Joyent, which announced in April 2022 that it had sold its business of supporting and developing Triton DataCenter and SmartOS to MNX Solutions.
In computing, a system virtual machine is a virtual machine (VM) that provides a complete system platform and supports the execution of a complete operating system (OS). These usually emulate an existing architecture, and are built with the purpose of providing a platform to run programs where the real hardware is not available for use, of running multiple virtual machine instances for more efficient use of computing resources in terms of energy consumption and cost effectiveness, or both. A VM was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine".