Virtual machine lifecycle management

Virtual machine lifecycle management is the class of management that considers the life cycle of a virtual machine from the viewpoint of the application, rather than from the viewpoint of roles within an organization. A number of major software vendors, including Microsoft and Novell, have released software products aimed at simplifying the administration of large virtual machine deployments.[1]

Environmental characteristics

Virtualized environments are fundamentally different from physical environments in architecture and capabilities. The flexibility they provide is derived from three fundamental characteristics:

  1. Time: Over time, the topology of the environment changes as some machines come online and others go offline.
  2. Motion: Unlike physical servers, virtual machines easily relocate around the data center.
  3. Transparency: With no physical presence, virtual machines cannot be seen, identified, or touched, and as a result are easily overlooked.
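The time and motion characteristics above can be pictured as a small state machine that a lifecycle-management tool might track for each virtual machine. This is an illustrative sketch only; the class, state, and transition names are assumptions, not any vendor's actual API:

```python
from enum import Enum, auto

class VMState(Enum):
    PROVISIONED = auto()
    RUNNING = auto()
    SUSPENDED = auto()
    MIGRATING = auto()   # "motion": relocation around the data center
    RETIRED = auto()

# Allowed transitions: over time, machines come online, move, and go offline.
TRANSITIONS = {
    VMState.PROVISIONED: {VMState.RUNNING, VMState.RETIRED},
    VMState.RUNNING: {VMState.SUSPENDED, VMState.MIGRATING, VMState.RETIRED},
    VMState.SUSPENDED: {VMState.RUNNING, VMState.RETIRED},
    VMState.MIGRATING: {VMState.RUNNING},
    VMState.RETIRED: set(),
}

class VirtualMachine:
    def __init__(self, name: str):
        self.name = name
        self.state = VMState.PROVISIONED
        # "transparency": without a physical audit trail, the tool keeps one.
        self.history = [VMState.PROVISIONED]

    def transition(self, new_state: VMState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(
                f"{self.name}: {self.state.name} -> {new_state.name} not allowed")
        self.state = new_state
        self.history.append(new_state)
```

Recording every transition in `history` is one way a management tool compensates for transparency: because the machine leaves no physical trace, the audit trail becomes the record of where it has been.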

Effects

These characteristics come together to define all the benefits of virtualization, from cost-savings to disaster recovery. However, they also change the nature of management of the infrastructure itself. The emerging space of Virtual Machine Lifecycle Management is the result of the time, motion and transparency qualities of virtual environments. This need cuts across software development and operations, encompassing all segments of the ITIL framework:

  1. Service Strategy – As virtualization extends from a transparent back-end alternative to a full infrastructure offering within the organization, Virtual Machine Lifecycle Management provides the granular controls to enable wholly new delivery models, from short-term provisioning to outsourced virtual machine hosting.
  2. Service Design – When designing virtual infrastructure services, administrators consider both the structure of the individual virtual machine given to the customer and the interactions among all of the virtual machines in the environment as they come online, move, and expire.
  3. Service Transition – Virtual Machine Lifecycle Management augments the traditional set of requirements built into delivering an infrastructure component to the business. Best practices and specific tools can be used to create the right controls within each virtual machine, ensuring the behavior of all the machines is in line with the design.
  4. Service Operation – Once operational, virtual environments are extraordinarily dynamic by design. Above and beyond the complexity of a traditional operating environment, management overhead can be minimized through strong controls set in the transition phase, together with ongoing monitoring and alerting designed to address the unique characteristics of the virtual infrastructure.
  5. Continual Service Improvement – As virtual environments mature and grow, internal customers and management will be keen to understand the savings and benefits of the paradigm, security groups will increasingly audit the infrastructure, and new chargeback methods will emerge to account for the new model. Virtual Machine Lifecycle Management tools, with their innate understanding of the environment and its transient and mobile nature, will deliver the metrics needed to demonstrate success to all the constituents.
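The short-term provisioning and expiry described above is often enforced with time-limited leases on virtual machines. The sketch below is a minimal, hypothetical illustration of such a control; the `Lease` fields and the `expired_leases` function are assumptions for this example, not a real tool's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Lease:
    """A time-limited grant of a virtual machine to an internal customer."""
    vm_name: str
    owner: str
    expires_at: datetime

def expired_leases(leases, now=None):
    """Return leases whose term has passed: candidates for reclamation or review."""
    now = now or datetime.now(timezone.utc)
    return [lease for lease in leases if lease.expires_at <= now]

# Example: one short-term VM past its lease, one long-term VM still active.
now = datetime.now(timezone.utc)
leases = [
    Lease("dev-03", "alice", now - timedelta(days=1)),
    Lease("web-01", "bob", now + timedelta(days=30)),
]
print([lease.vm_name for lease in expired_leases(leases)])
```

A periodic sweep of this kind, paired with an approval or grace-period step before reclamation, is one concrete form the "strong controls set in the transition phase" can take.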

Related Research Articles

In software engineering, software configuration management is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine the "what, when, why and who" of the change. If a configuration is working well, SCM can determine how to replicate it across many hosts.


In industry, Product Lifecycle Management (PLM) is the process of managing the entire lifecycle of a product from its inception through the engineering, design and manufacture, as well as the service and disposal of manufactured products. PLM integrates people, data, processes and business systems and provides a product information backbone for companies and their extended enterprises.

Software multitenancy is a software architecture in which a single instance of software runs on a server and serves multiple tenants. Systems designed in such a manner are "shared". A tenant is a group of users who share common access with specific privileges to the software instance. With a multitenant architecture, a software application is designed to provide every tenant a dedicated share of the instance, including its data, configuration, user management, tenant-specific functionality, and non-functional properties. Multitenancy contrasts with multi-instance architectures, where separate software instances operate on behalf of different tenants.

In computer science, storage virtualization is "the process of presenting a logical view of the physical storage resources to" a host computer system, "treating all storage media in the enterprise as a single pool of storage."

Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.

Process development execution systems (PDES) are software systems used to guide the development of high-tech manufacturing technologies such as semiconductor manufacturing, MEMS manufacturing, photovoltaics manufacturing, biomedical devices, or nanoparticle manufacturing. Software systems of this kind have similarities to product lifecycle management (PLM) systems. They guide the development of new or improved technologies from their conception, through development, and into manufacturing. Furthermore, they borrow concepts from manufacturing execution systems (MES) but tailor them for R&D rather than for production. PDES integrate people, data, information, knowledge, and business processes.


Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a "pay as you go" model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

Kaavo is a cloud computing management company. Kaavo was founded in November 2007 in the U.S. Kaavo pioneered top-down application-centric management of cloud infrastructure across public, private, and hybrid clouds.


A Definitive Media Library is a secure Information Technology repository in which an organisation's definitive, authorised versions of software media are stored and protected. Before an organisation releases any new or changed application software into its operational environment, any such software should be fully tested and quality assured. The Definitive Media Library provides the storage area for software objects ready for deployment and should only contain master copies of controlled software media configuration items (CIs) that have passed appropriate quality assurance checks, typically including both procured and bespoke application and gold build source code and executables. In the context of the ITIL best practice framework, the term Definitive Media Library supersedes the term definitive software library, used prior to ITIL v3.

HP Cloud Service Automation is cloud management software from Hewlett Packard Enterprise (HPE) that is used by companies and government agencies to automate the management of cloud-based IT-as-a-service, from order to provisioning to retirement. HP Cloud Service Automation orchestrates the provisioning and deployment of complex IT services such as databases, middleware, and packaged applications. The software speeds deployment of application-based services across hybrid cloud delivery platforms and traditional IT environments.

HCL BigFix is an endpoint management platform enabling IT operations and security teams to automate the discovery, management, and remediation of on-premises, virtual, and cloud endpoints. HCL BigFix automates the management, patching, and inventory of nearly 100 operating system versions.

HP Business Service Automation was a collection of software products for data center automation from the HP Software Division of Hewlett-Packard Company. The products could help Information Technology departments create a common, enterprise-wide view of each business service; enable the automation of change and compliance across all devices that make up a business service; connect IT processes and coordinate teams via common workflows; and integrate with monitoring and ticketing tools to form a complete, integrated business service management solution. HP now provides many of these capabilities as part of HP Business Service Management software and solutions.


IBM cloud computing is a set of cloud computing services for business offered by the information technology company IBM. IBM Cloud includes infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS) offered through public, private and hybrid cloud delivery models, in addition to the components that make up those clouds.

HP ConvergedSystem is a portfolio of system-based products from Hewlett-Packard (HP) that integrates preconfigured IT components into systems for virtualization, cloud computing, big data, collaboration, converged management, and client virtualization. Composed of servers, storage, networking, and integrated software and services, the systems are designed to address the cost and complexity of data center operations and maintenance by pulling the IT components together into a single resource pool so they are easier to manage and faster to deploy. Where previously it would take three to six months from the time of order to get a system up and running, it now reportedly takes as few as 20 days with the HP ConvergedSystem.

HP CloudSystem is a cloud infrastructure from Hewlett Packard Enterprise (HPE) that combines storage, servers, networking and software.

Software-defined storage (SDS) is a marketing term for computer data storage software for policy-based provisioning and management of data storage independent of the underlying hardware. Software-defined storage typically includes a form of storage virtualization to separate the storage hardware from the software that manages it. The software enabling a software-defined storage environment may also provide policy management for features such as data deduplication, replication, thin provisioning, snapshots and backup.


HP Cloud was a set of cloud computing services available from Hewlett-Packard that offered public cloud, private cloud, hybrid cloud, managed private cloud and other cloud services. It was the combination of the previous HP Converged Cloud business unit and HP Cloud Services, an OpenStack-based public cloud. It was marketed to enterprise organizations to combine public cloud services with internal IT resources to create hybrid clouds, or a mix of private and public cloud environments, from around 2011 until 2016.

Network functions virtualization (NFV) is a network architecture concept that leverages the IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create and deliver communication services.

Cloud management is the management of cloud computing products and services.

Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this process comprises both physical equipment, such as bare-metal servers, and virtual machines, as well as associated configuration resources. The definitions may be kept in a version control system. The definition files may use either scripts or declarative definitions rather than manual processes, though IaC more often employs declarative approaches.

References

  1. Paul Rubens, "Getting Down to Business With Virtual Machine Lifecycle Management", ServerWatch, 13 September 2007