OpenNebula

Developer(s): OpenNebula Systems, OpenNebula Community
Initial release: July 24, 2008
Stable release: 6.10.1 [1] / 7 November 2024
Written in: C++, Ruby, Shell script, lex, yacc, JavaScript
Operating system: Linux
Platform: Hypervisors (VMware vCenter, KVM, LXD/LXC, and AWS Firecracker)
Available in: English, Czech, French, Slovak, Spanish, Chinese, Thai, Turkish, Portuguese, Russian, Dutch, Estonian, Japanese
Type: Cloud computing
License: Apache License version 2
Website: opennebula.io

OpenNebula is an open-source cloud computing platform for managing heterogeneous data center, public cloud, and edge computing infrastructure resources. OpenNebula manages on-premises and remote virtual infrastructure to build private, public, or hybrid implementations of infrastructure as a service (IaaS) and multi-tenant Kubernetes deployments. The two primary uses of the OpenNebula platform are data center virtualization and cloud deployments based on the KVM hypervisor, LXD/LXC system containers, and AWS Firecracker microVMs. [2] The platform can also provide the cloud infrastructure needed to operate a cloud on top of existing VMware infrastructure.

In early June 2020, OpenNebula announced the release of a new Enterprise Edition (EE) for corporate users, along with a Community Edition (CE). [3] OpenNebula CE is free and open-source software, released under the Apache License version 2. OpenNebula CE comes with free access to patch releases containing critical bug fixes, but without access to the regular EE maintenance releases. Upgrades to the latest minor/major version are available only to CE users with non-commercial deployments or with significant open-source contributions to the OpenNebula community. [4] OpenNebula EE is distributed under a closed-source license and requires a commercial subscription. [5]

History

The OpenNebula Project was started as a research venture in 2005 by Ignacio M. Llorente and Ruben S. Montero. The first public release of the software occurred in 2008. The goal of the research was to create efficient, highly scalable solutions for managing virtual machines on distributed infrastructures. Open-source development and an active community of developers have since helped mature the project. As the project matured, adoption grew, and in March 2010 the primary authors of the project founded C12G Labs, now known as OpenNebula Systems, which provides value-added professional services to enterprises adopting or using OpenNebula.

Description

OpenNebula orchestrates storage, network, virtualization, monitoring, and security [6] technologies to deploy multi-tier services (e.g. compute clusters [7] [8] ) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies. According to the European Commission's 2010 report "... only few cloud dedicated research projects in the widest sense have been initiated – most prominent amongst them probably OpenNebula ...". [9]

The toolkit includes features for integration, management, scalability, security and accounting. It also claims standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (Amazon EC2 Query, OGF Open Cloud Computing Interface and vCloud) and hypervisors (VMware vCenter, KVM, LXD/LXC and AWS Firecracker), and can accommodate multiple hardware and software combinations in a data center. [10]

OpenNebula is sponsored by OpenNebula Systems (formerly C12G).

OpenNebula is widely used by a variety of industries, including cloud providers, telecommunication, information technology services, government, banking, gaming, media, hosting, supercomputing, research laboratories, and international research projects[ citation needed ].

Development

Major upgrades generally occur every three to five years, and each major release typically receives three to five maintenance updates. The OpenNebula project is largely open source and is sustained by an active community of developers and translators. Since version 5.12, the upgrade scripts have been distributed under a closed-source license, so upgrading between versions requires a subscription unless the operator can show that they run a non-profit cloud or have made a significant contribution to the project.

Internal architecture

Basic components

OpenNebula Internal Architecture (diagram)

A virtual network in OpenNebula combines:
  1. The underlying physical network infrastructure.
  2. The logical address space available (IPv4, IPv6, dual stack).
  3. Context attributes (e.g. netmask, DNS, gateway); see the template sketch below. OpenNebula also comes with a Virtual Router appliance to provide networking services such as DHCP and DNS.
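As a rough illustration only (the network name, bridge, addresses and context values below are assumptions, not details from this article or its sources), such a virtual network could be described in a template file and registered with the command-line tools:

  # vnet.tmpl -- hypothetical virtual network definition
  NAME         = "private-net"
  VN_MAD       = "bridge"        # assumes Linux-bridge networking on the hosts
  BRIDGE       = "br0"           # hypothetical bridge mapped to the instance network
  AR           = [ TYPE = "IP4", IP = "192.168.100.10", SIZE = "100" ]   # logical address space
  # Context attributes handed to the guests at boot
  NETWORK_MASK = "255.255.255.0"
  GATEWAY      = "192.168.100.1"
  DNS          = "192.168.100.1"

  # Register the network on the front-end
  $ onevnet create vnet.tmpl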

Components and deployment model

OpenNebula Deployment Model (diagram)

The OpenNebula Project's deployment model resembles a classic cluster architecture, which utilizes a front-end (master) machine, hypervisor-enabled hosts, datastores, and physical networks:

Front-end machine

The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services. This is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine include the management daemon (oned), scheduler (sched), the web interface server (Sunstone server), and other advanced components. These services are responsible for queuing, scheduling, and submitting jobs to other machines in the cluster. The master node also provides the mechanisms to manage the entire system. This includes adding virtual machines, monitoring the status of virtual machines, hosting the repository, and transferring virtual machines when necessary. Much of this is possible due to a monitoring subsystem which gathers information such as host status, performance, and capacity use. The system is highly scalable and is only limited by the performance of the actual server.[ citation needed ]
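As an illustrative sketch of how an administrator interacts with these front-end services, assuming a packaged, systemd-based single-node installation (the service names below follow that assumption and are not taken from this article's sources):

  # Check that the OpenNebula daemon and the Sunstone web interface are running
  $ systemctl status opennebula opennebula-sunstone

  # Query the management daemon through the standard CLI
  $ onehost list      # hypervisor hosts known to oned, with capacity and state
  $ onevm list        # virtual machines and their life-cycle states
  $ oneuser show      # the account the commands are being run as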

Hypervisor-enabled hosts

The worker nodes, or hypervisor-enabled hosts, provide the actual computing resources needed for processing all jobs submitted by the master node. OpenNebula hypervisor-enabled hosts use a virtualization hypervisor such as VMware, Xen, or KVM. The KVM hypervisor is natively supported and used by default. Virtualization hosts are the physical machines that run the virtual machines, and various platforms can be used with OpenNebula. A virtualization subsystem interacts with these hosts to take the actions requested by the master node.
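A minimal sketch of adding such a host, assuming the default KVM drivers and password-less SSH from the front-end to a hypothetical node named "node01" (the host name is illustrative):

  # Register a KVM hypervisor host with the front-end
  $ onehost create node01 --im kvm --vm kvm

  # The host should move from INIT to MONITORED once monitoring succeeds
  $ onehost list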

Storage

OpenNebula Storage (diagram)

The datastores hold the base images of the virtual machines. The datastores must be accessible to the front-end; this can be accomplished with a variety of technologies such as NAS, SAN, or direct-attached storage.

Three datastore classes are included with OpenNebula: system datastores, image datastores, and file datastores. System datastores hold the images used for running the virtual machines. The images can be complete copies of an original image, deltas, or symbolic links, depending on the storage technology used. The image datastores are used to store the disk image repository. Images from the image datastores are moved to or from the system datastore when virtual machines are deployed or manipulated. The file datastore is used for regular files and is often used for kernels, RAM disks, or context files.
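As a hedged illustration (the datastore name and driver choices below are assumptions for a shared-NFS setup, not details from this article), an image datastore could be registered like this:

  # List the datastores known to the front-end
  $ onedatastore list

  # ds.tmpl -- hypothetical image datastore on shared (e.g. NFS) storage
  NAME   = "nfs-images"
  TYPE   = "IMAGE_DS"
  DS_MAD = "fs"        # filesystem datastore driver
  TM_MAD = "shared"    # transfer driver for storage shared by front-end and hosts

  $ onedatastore create ds.tmpl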

Physical networks

Physical networks are required to interconnect the storage servers and virtual machines in remote locations, and the front-end machine must be able to reach all of the worker nodes. At a minimum, two physical networks are required: a service network, over which the front-end reaches and manages the hosts, and an instance network, which allows the virtual machines to communicate across different hosts. The network subsystem of OpenNebula is easily customizable, allowing adaptation to existing data centers.

See also

  Virtual machine
  Virtual appliance
  Timeline of virtualization technologies
  VMware ESXi
  Kernel-based Virtual Machine (KVM)
  Infrastructure as a service (IaaS)
  oVirt
  TurnKey Linux Virtual Appliance Library
  libvirt
  Temporal isolation among virtual machines
  Wakame-vdc
  Nimbula
  QVD
  Ganeti
  CloudStack
  Abiquo Enterprise Edition
  openQRM
  Proxmox Virtual Environment
  Dell Technologies PowerFlex
  Harvester

References

  1. OpenNebula's Release Schedule
  2. https://firecracker-microvm.github.io
  3. "Introducing OpenNebula Enterprise Edition". OpenNebula website. 4 June 2020. Retrieved 16 June 2020.
  4. "Get Migration Packages". OpenNebula website. Retrieved 7 July 2020.
  5. "Upgrade Your OpenNebula Cloud". OpenNebula website. Retrieved 7 July 2020.
  6. "Key Features about OpenNebula". Discover OpenNebula. Retrieved 10 December 2019.
  7. R. Moreno-Vozmediano, R. S. Montero, and I. M. Llorente. "Multi-Cloud Deployment of Computing Clusters for Loosely-Coupled MTC Applications", IEEE Transactions on Parallel and Distributed Systems, Special Issue on Many-Task Computing (in press, doi:10.1109/TPDS.2010.186).
  8. R. S. Montero, R. Moreno-Vozmediano, and I. M. Llorente. "An Elasticity Model for High Throughput Computing Clusters", J. Parallel and Distributed Computing (in press, doi:10.1016/j.jpdc.2010.05.005)
  9. "The Future of Cloud Computing" (PDF). European Commission Expert Group Report. 25 January 2010. Retrieved 12 December 2017.
  10. B. Sotomayor, R. S. Montero, I. M. Llorente, I. Foster. "Virtual Infrastructure Management in Private and Hybrid Clouds", IEEE Internet Computing, vol. 13, no. 5, pp. 14-22, September/October 2009. doi:10.1109/MIC.2009.119.