Intelligent workload management

Intelligent workload management (IWM) is a paradigm for IT systems management arising from the intersection of dynamic infrastructure, virtualization, identity management, and the discipline of software appliance development. [1] IWM enables the management and optimization of computing resources in a secure and compliant manner across physical, virtual and cloud environments to deliver business services for end customers.

The IWM paradigm builds on the traditional concept of workload management, whereby processing resources are dynamically assigned to tasks, or "workloads," based on criteria such as business-process priorities (for example, balancing business intelligence queries against online transaction processing [2] ), resource availability, security protocols, or event scheduling. IWM extends this concept into the structure of individual workloads themselves.
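
The following minimal sketch (class and function names are illustrative, not drawn from the article or any product) shows the traditional idea: pending workloads are dispatched to whichever resource can accommodate them, highest business priority first.

```python
# Minimal sketch of traditional priority-based workload management.
# Workload, Resource, and dispatch are illustrative names, not a real API.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Workload:
    priority: int                     # lower number = higher business priority
    name: str = field(compare=False)
    cpu_needed: int = field(compare=False)

@dataclass
class Resource:
    name: str
    cpu_free: int

def dispatch(workloads, resources):
    """Assign each workload, highest priority first, to the first resource with enough free CPU."""
    heapq.heapify(workloads)
    placements = []
    while workloads:
        wl = heapq.heappop(workloads)
        for res in resources:
            if res.cpu_free >= wl.cpu_needed:
                res.cpu_free -= wl.cpu_needed
                placements.append((wl.name, res.name))
                break
    return placements

if __name__ == "__main__":
    pending = [Workload(2, "BI query", 4), Workload(1, "OLTP batch", 2)]
    pool = [Resource("host-a", 4), Resource("host-b", 8)]
    print(dispatch(pending, pool))  # the OLTP batch is placed before the BI query
```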

Definition of "workload"

In the context of IT systems and data center management, a "workload" can be broadly defined as "the total requests made by users and applications of a system." [3] However, it is also possible to break down the entire workload of a given system into sets of self-contained units. Such a self-contained unit constitutes a "workload" in the narrow sense: an integrated stack consisting of application, middleware, database, and operating system devoted to a specific computing task. Typically, a workload is "platform agnostic," meaning that it can run in physical, virtual or cloud computing environments. Finally, a collection of related workloads which allow end users to complete a specific set of business tasks can be defined as a "business service." [4]
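
A hypothetical descriptor (field names are illustrative, not from any cited source) of a workload in this narrow sense, showing the integrated stack and its platform-agnostic deployment targets:

```python
# Hypothetical descriptor for a workload in the narrow sense: an integrated,
# self-contained stack devoted to one computing task. All names are illustrative.
order_processing_workload = {
    "task": "order-processing",
    "stack": {
        "application": "order-service 3.2",
        "middleware": "message-broker 5.1",
        "database": "orders-db 12",
        "operating_system": "minimal-linux",
    },
    "runs_on": ["physical", "virtual", "cloud"],  # platform agnostic
    "business_service": "online-storefront",      # related workloads roll up to a business service
}
```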

Making workloads "intelligent"

A workload is considered "intelligent" when it:

a) understands its security protocols and processing requirements, so it can self-determine whether it can deploy in the public cloud, the private cloud, or only on physical machines;
b) recognizes when it is at capacity and can find alternative computing capacity as required to optimize performance;
c) carries identity and access controls, as well as log management and compliance reporting capabilities, with it as it moves across environments; and
d) is fully integrated with the business service management layer, ensuring that end-user computing requirements are not disrupted by distributed computing resources and that the workload works with current and emergent IT management frameworks.
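
A minimal sketch of criteria (a) and (b), using assumed names and policies: the workload consults its own security classification and utilization to decide where it may deploy and whether it needs additional capacity.

```python
# Sketch of criteria (a) and (b): a workload that carries its own policy and
# capacity awareness. Security classes and thresholds are hypothetical.
from dataclasses import dataclass

ALLOWED_TARGETS = {
    "public-ok":     ["public-cloud", "private-cloud", "physical"],
    "private-only":  ["private-cloud", "physical"],
    "physical-only": ["physical"],
}

@dataclass
class IntelligentWorkload:
    name: str
    security_class: str     # e.g. "private-only"
    cpu_utilization: float  # current utilization, 0.0 to 1.0

    def allowed_environments(self):
        """Criterion (a): self-determine permissible deployment targets."""
        return ALLOWED_TARGETS[self.security_class]

    def needs_more_capacity(self, threshold=0.85):
        """Criterion (b): recognize when the workload is at capacity."""
        return self.cpu_utilization >= threshold

    def choose_target(self, available):
        """Pick the first available environment that the policy allows."""
        for env in self.allowed_environments():
            if env in available:
                return env
        return None

if __name__ == "__main__":
    wl = IntelligentWorkload("payroll", "private-only", 0.9)
    print(wl.needs_more_capacity())                              # True: look for more capacity
    print(wl.choose_target(["public-cloud", "private-cloud"]))   # private-cloud, never public
```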

Intelligent workloads and security in the cloud

The deployment of individual workloads and workload-based business services in the "hybrid distributed data center" [5] (spanning physical machines, data centers, private clouds, and the public cloud) raises a host of issues for the efficient management of provisioning, security, and compliance. By making workloads "intelligent," so that they can effectively manage themselves in terms of where they run, how they run, and who can access them, intelligent workload management addresses these issues in a way that is efficient, flexible, and scalable. The seminal 1988 paper by D.F. Ferguson, Y. Yemini, and C. Nikolaou, "Microeconomic Algorithms for Load Balancing in Distributed Computing Systems," developed a theory by which workloads could be made "intelligent" enough to manage themselves. [6] The approach has since been patented and was commercialized by the Boston-based company VMTurbo in 2009.
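
The toy sketch below illustrates only the market metaphor behind such microeconomic approaches: hosts quote a "price" that rises with utilization, and each workload buys capacity from the cheapest host. It is not the algorithm of Ferguson, Yemini, and Nikolaou, and all names are illustrative.

```python
# Toy market-style placement: price rises as a host nears capacity, and each
# workload is placed on the currently cheapest host. Illustrative only.

def price(load, capacity):
    """Quoted price grows without bound as utilization approaches 100%."""
    utilization = load / capacity
    return 1.0 / max(1e-9, 1.0 - utilization)

def place(workloads, hosts):
    """workloads: list of (name, demand); hosts: dict name -> {"load", "capacity"}."""
    placements = {}
    for name, demand in workloads:
        cheapest = min(hosts, key=lambda h: price(hosts[h]["load"] + demand,
                                                  hosts[h]["capacity"]))
        hosts[cheapest]["load"] += demand
        placements[name] = cheapest
    return placements

if __name__ == "__main__":
    hosts = {"h1": {"load": 6.0, "capacity": 10.0},
             "h2": {"load": 1.0, "capacity": 10.0}}
    print(place([("wl-a", 2.0), ("wl-b", 2.0)], hosts))  # both land on the lighter host h2
```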

Related Research Articles

Client–server model: distributed application structure in computing

Client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client does not share any of its resources, but it requests content or service from a server. Clients therefore initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
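
A minimal, illustrative client–server exchange over the loopback interface: the client initiates the session and the server answers the request. Port selection and messages are arbitrary.

```python
# Minimal client–server sketch: a one-shot TCP server and a client on the same
# machine. Message contents and names are illustrative.
import socket
import threading

ready = threading.Event()
server_port = []

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
        srv.listen()
        server_port.append(srv.getsockname()[1])
        ready.set()                          # signal that the server awaits requests
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"served: " + request)

def client():
    ready.wait()                             # the client initiates the session
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", server_port[0]))
        cli.sendall(b"print job")
        print(cli.recv(1024).decode())       # -> served: print job

if __name__ == "__main__":
    t = threading.Thread(target=server)
    t.start()
    client()
    t.join()
```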

Load balancing (computing): set of techniques to improve the distribution of workloads across multiple computing resources

In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components with load balancing instead of a single component may increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.
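
Two common balancing policies, sketched with illustrative backend names: round-robin rotation and least-connections selection.

```python
# Sketch of two load-balancing policies over a fixed pool of backends.
import itertools

backends = ["app-1", "app-2", "app-3"]
active = {b: 0 for b in backends}         # open connections per backend

round_robin = itertools.cycle(backends)   # policy 1: rotate through the pool

def least_connections():
    """Policy 2: pick the backend with the fewest open connections."""
    return min(active, key=active.get)

for request_id in range(5):
    chosen = least_connections()
    active[chosen] += 1
    print(request_id, "->", chosen, "| round-robin would pick", next(round_robin))
```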

Scalability: property of a system to handle a growing amount of work by adding resources to the system

Scalability is the property of a system to handle a growing amount of work by adding resources to the system.

Parallel rendering is the application of parallel programming to the computational domain of computer graphics. Rendering graphics can require massive computational resources for complex scenes that arise in scientific visualization, medical visualization, CAD applications, and virtual reality. Recent research has also suggested that parallel rendering can be applied to mobile gaming to decrease power consumption and increase graphical fidelity. Rendering is an embarrassingly parallel workload in multiple domains and thus has been the subject of much research.
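
A small sketch of why rendering parallelizes so well: each tile of a frame can be computed independently of every other tile. The "renderer" below is a stand-in computation, not a real shading function.

```python
# Embarrassingly parallel rendering sketch: split a frame into tiles and render
# them in separate processes. render_tile is a toy stand-in for a renderer.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT, TILE = 256, 256, 64

def render_tile(origin):
    x0, y0 = origin
    # Toy "shading": derive a brightness value from the pixel coordinates.
    return [(x + y) % 256 for y in range(y0, y0 + TILE) for x in range(x0, x0 + TILE)]

if __name__ == "__main__":
    origins = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    with ProcessPoolExecutor() as pool:
        tiles = list(pool.map(render_tile, origins))  # tiles render independently
    print(f"rendered {len(tiles)} tiles of {TILE}x{TILE} pixels")
```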

Workforce management (WFM) is an institutional process that maximizes performance levels and competency for an organization. The process includes all the activities needed to maintain a productive workforce, such as field service management, human resource management, performance and training management, data collection, recruiting, budgeting, forecasting, scheduling and analytics.

The term "software multitenancy" refers to a software architecture in which a single instance of software runs on a server and serves multiple tenants. Systems designed in such manner are often called shared. A tenant is a group of users who share a common access with specific privileges to the software instance. With a multitenant architecture, a software application is designed to provide every tenant a dedicated share of the instance - including its data, configuration, user management, tenant individual functionality and non-functional properties. Multitenancy contrasts with multi-instance architectures, where separate software instances operate on behalf of different tenants.

A virtual appliance is a pre-configured virtual machine image, ready to run on a hypervisor; virtual appliances are a subset of the broader class of software appliances. Installation of a software appliance on a virtual machine and packaging that into an image creates a virtual appliance. Like software appliances, virtual appliances are intended to eliminate the installation, configuration and maintenance costs associated with running complex stacks of software.

Cloud computing: form of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Large clouds, predominant today, often have functions distributed over multiple locations from central servers. If the connection to the user is relatively close, the server may be designated an edge server.

Dynamic Infrastructure is an information technology concept related to the design of data centers, whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. In other words, data center assets such as storage and processing power can be provisioned to meet surges in users' needs. The concept has also been referred to as Infrastructure 2.0 and Next Generation Data Center.
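
A sketch of the underlying idea, with illustrative thresholds: capacity is provisioned when demand surges and released when demand subsides.

```python
# Threshold-based scaling sketch: add an instance on high utilization, remove
# one on low utilization. Thresholds and counts are illustrative.
def desired_instances(current, cpu_utilization, low=0.30, high=0.75):
    """Return the instance count that matches the observed average utilization."""
    if cpu_utilization > high:
        return current + 1                 # surge: provision more capacity
    if cpu_utilization < low and current > 1:
        return current - 1                 # idle: release capacity
    return current

for load in (0.20, 0.50, 0.90):
    print(load, "->", desired_instances(current=3, cpu_utilization=load))
```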

Cloud testing is a form of software testing in which web applications use cloud computing environments to simulate real-world user traffic.

Eucalyptus is paid and open-source software for building Amazon Web Services (AWS)-compatible private and hybrid cloud computing environments, originally developed by the company Eucalyptus Systems. Eucalyptus is an acronym for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems. Eucalyptus enables the pooling of compute, storage, and network resources, which can be dynamically scaled up or down as application workloads change. Mårten Mickos was the CEO of Eucalyptus. In September 2014, Eucalyptus was acquired by Hewlett-Packard and then maintained by DXC Technology. After DXC stopped developing the product in late 2017, AppScale Systems forked the code and started supporting Eucalyptus customers.

OpenNebula: cloud computing platform for managing heterogeneous distributed data center infrastructures

OpenNebula is a cloud computing platform for managing heterogeneous distributed data center infrastructures. The OpenNebula platform manages a data center's virtual infrastructure to build private, public and hybrid implementations of infrastructure as a service. The two primary uses of the OpenNebula platform are data center virtualization solutions and cloud infrastructure solutions. The platform is also capable of offering the cloud infrastructure necessary to operate a cloud on top of existing infrastructure management solutions. OpenNebula is free and open-source software, subject to the requirements of the Apache License version 2.

HP Cloud Service Automation is cloud management software from Hewlett-Packard that is used by companies and government agencies to automate the management of cloud-based IT-as-a-service, from order through provisioning to retirement. HP Cloud Service Automation orchestrates the provisioning and deployment of complex IT services such as databases, middleware, and packaged applications. The software speeds deployment of application-based services across hybrid cloud delivery platforms and traditional IT environments.

IBM cloud computing is a set of cloud computing services for business offered by the information technology company IBM. IBM cloud includes infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS) offered through public, private and hybrid cloud delivery models, in addition to the components that make up those clouds.

Converged storage is a storage architecture that combines storage and computing resources into a single entity. This can result in the development of platforms for server-centric, storage-centric, or hybrid workloads, where applications and data come together to improve application performance and delivery. The combination of storage and compute differs from the traditional IT model, in which computation and storage take place in separate or siloed computer equipment. The traditional model requires discrete provisioning changes, such as upgrades and planned migrations, in the face of server load changes, which are increasingly dynamic with virtualization; converged storage, by contrast, increases the supply of resources in parallel with new VM demands.

HP ConvergedSystem is a portfolio of system-based products from Hewlett-Packard (HP) that integrates preconfigured IT components into systems for virtualization, cloud computing, big data, collaboration, converged management, and client virtualization. Composed of servers, storage, networking, and integrated software and services, the systems are designed to address the cost and complexity of data center operations and maintenance by pulling the IT components together into a single resource pool so they are easier to manage and faster to deploy. Where previously it would take three to six months from the time of order to get a system up and running, it now reportedly takes as few as 20 days with the HP ConvergedSystem.

Software-defined storage (SDS) is a marketing term for computer data storage software for policy-based provisioning and management of data storage independent of the underlying hardware. Software-defined storage typically includes a form of storage virtualization to separate the storage hardware from the software that manages it. The software enabling a software-defined storage environment may also provide policy management for features such as data deduplication, replication, thin provisioning, snapshots and backup.
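
A hypothetical storage policy, with illustrative names and values, of the kind a software-defined storage layer might apply in software, independently of the hardware beneath it:

```python
# Hypothetical policy definition for policy-based provisioning in an SDS layer.
# Field names and values are illustrative, not from any particular product.
gold_tier_policy = {
    "name": "gold",
    "replication": 3,            # keep three copies of every object
    "thin_provisioning": True,   # allocate capacity on write, not up front
    "deduplication": True,
    "snapshot_schedule": "hourly",
    "backup_target": "offsite",
}
```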

Cloud load balancing is a type of load balancing that is performed in cloud computing: the process of distributing workloads across multiple computing resources. Cloud load balancing reduces costs associated with document management systems and maximizes availability of resources. It should not be confused with Domain Name System (DNS) load balancing: while DNS load balancing uses software or hardware to perform the function, cloud load balancing uses services offered by various computer network companies.

Cloud management is the management of cloud computing products and services.

Xangati

Xangati, Inc. is an American private company that provides service assurance analytics software for enterprises and service providers operating in virtualized data centers and hybrid cloud environments.

References

  1. Thomas Mendel (October 26, 2009). "IT Management Software Market Update". Forrester. Archived from the original on December 25, 2009. Retrieved 2017-01-10. ...a particularly exciting new category, which we tentatively call process and workload automation, some vendors refer to it as intelligent workload management.
  2. "Dynamic workload management for very large data warehouses: juggling feathers and bowling balls". VLDB Endowment. 2007. Retrieved 2008-11-12.
  3. "What Is Your Definition of Database Workload?". Database Journal. January 8, 2009. Retrieved 2009-11-15.
  4. "IT Services, Business Services, Services...what's next?". HP ITIL v3 Community Blog. March 3, 2008. Retrieved 2009-11-15.
  5. "The Hybrid Distributed Data Center -er- Cloud?". Sun Microsystmes. October 1, 2009. Archived from the original on October 4, 2009. Retrieved 2009-11-15.
  6. Ferguson, D.F.; Yemini, Y.; Nikolaou, C. (1988). "Microeconomic algorithms for load balancing in distributed computer systems". Washington, D.C.: IEEE Computer Society Press. pp. 491–499. doi:10.1109/DCS.1988.12552. ISBN 0-8186-0865-X.