Dynamic infrastructure

Dynamic infrastructure is an information technology concept in data center design whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. In other words, data center assets such as storage and processing power can be provisioned (made available) to meet surges in users' needs. The concept has also been referred to as Infrastructure 2.0 and Next Generation Data Center.

Concept

The basic premise of dynamic infrastructures is to leverage pooled IT resources to provide flexible IT capacity, enabling the allocation of resources in line with demand from business processes. This is achieved by using server virtualization technology to pool computing resources wherever possible, and allocating these resources on-demand using automated tools. This allows for load balancing and is a more efficient approach than keeping massive computing resources in reserve to run tasks that take place, for example, once a month, but are otherwise under-utilized.
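The pooling idea described above can be sketched in a few lines. This is an illustrative model only, not any vendor's API: workloads draw capacity from one shared pool on demand and return it afterwards, instead of each workload holding dedicated, mostly idle hardware.

```python
# Illustrative sketch of pooled, on-demand capacity (names invented):
# workloads provision units from a shared pool and release them when
# demand subsides, so reserve capacity is shared rather than dedicated.

class ResourcePool:
    def __init__(self, total_units):
        self.total = total_units
        self.allocated = {}            # workload name -> units held

    def available(self):
        return self.total - sum(self.allocated.values())

    def provision(self, workload, units):
        """Grant capacity from the shared pool, if any is free."""
        if units > self.available():
            raise RuntimeError("pool exhausted: demand exceeds capacity")
        self.allocated[workload] = self.allocated.get(workload, 0) + units
        return units

    def release(self, workload):
        """Return a workload's capacity to the pool."""
        return self.allocated.pop(workload, 0)

pool = ResourcePool(total_units=16)
pool.provision("monthly-report", 8)    # surge: batch job takes half the pool
pool.release("monthly-report")         # capacity returns for other workloads
```

The once-a-month batch job in the text maps onto the `provision`/`release` pair: it holds capacity only while it runs.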

Dynamic infrastructures may also be used to provide security and data protection when workloads are moved during migrations, provisioning,[1] performance enhancement or the building of co-location facilities.[2]

Dynamic infrastructures were promoted as enhancing performance, scalability,[3] system availability and uptime, and server utilization, and as allowing routine maintenance to be performed on either physical or virtual systems, all while minimizing interruption to business operations and reducing IT costs. Dynamic infrastructures also provide the business continuity and high availability required to enable cloud or grid computing.

For networking companies, infrastructure 2.0 refers to the ability of networks to keep up with the movement and scale requirements of new enterprise IT initiatives, especially virtualization and cloud computing. According to companies such as Cisco, F5 Networks and Infoblox, network automation and connectivity intelligence between networks, applications and endpoints will be required to reap the full benefits of virtualization and many types of cloud computing. This will require network management and infrastructure to be consolidated, enabling higher levels of dynamic control and connectivity between networks, systems and endpoints.

Early examples of server-level dynamic infrastructure are FlexFrame for SAP and FlexFrame for Oracle, introduced by Fujitsu Siemens Computers (now Fujitsu) in 2003. The FlexFrame approach was to dynamically assign servers to applications on demand, leveling peaks and enabling organizations to maximize the benefit from their IT investments.[4]
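The scheduling idea behind such server-level assignment can be illustrated with a small greedy allocator. This is a hypothetical sketch of the general technique, not FlexFrame's actual algorithm: each spare server is handed to whichever application currently has the largest unmet demand, which levels peaks across applications.

```python
# Hypothetical sketch of demand-driven server assignment (not the real
# FlexFrame algorithm): each free server goes to the application with
# the largest remaining shortfall, leveling peaks across applications.

def assign_servers(free_servers, demand):
    """demand: app name -> servers still needed. Returns app -> [servers]."""
    assignment = {app: [] for app in demand}
    for server in free_servers:
        # pick the application with the largest remaining shortfall
        app = max(demand, key=demand.get)
        if demand[app] <= 0:
            break                      # all demand met; remaining servers stay idle
        assignment[app].append(server)
        demand[app] -= 1
    return assignment

result = assign_servers(["s1", "s2", "s3"], {"sap": 2, "oracle": 1})
# -> {"sap": ["s1", "s2"], "oracle": ["s3"]}
```

Rerunning the allocator as demand figures change is what makes the assignment "dynamic": servers follow the load rather than staying bound to one application.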

Benefits

Dynamic infrastructures take advantage of intelligence gained across the network. By design, every dynamic infrastructure is service-oriented and focused on supporting and enabling end users in a highly responsive way. It can use alternative sourcing approaches, such as cloud computing, to deliver new services with agility and speed.

Global organizations already have the foundation for a dynamic infrastructure that will bring together the business and IT infrastructure to create new possibilities. For example:

"Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model." – Source: Gartner, "TCO of Traditional Software Distribution vs. Application Virtualization", Michael A. Silver, Terrence Cosgrove, Mark A. Margevicius, Brian Gammage, 16 April 2008

"While green issues are a primary driver in 10% of current data center outsourcing and hosting initiatives, cost reduction initiatives are a driver 47% of the time and are now aligned well with green goals. Combining the two means that at least 57% of data center outsourcing and hosting initiatives are driven by green." – Source: Gartner, "Green IT Services as a Catalyst for Cost Optimization", Kurt Potter, 4 December 2008

"By 2013, more than 50% of midsize organizations and more than 75% of large enterprises will implement layered recovery architectures." – Source: Gartner, "Predicts 2009: Business Continuity Management Juggles Standardization, Cost and Outsourcing Risk", Roberta J. Witty, John P. Morency, Dave Russell, Donna Scott, Robert DeSisto, 28 January 2009

Vendors

Vendors promoting dynamic infrastructures include IBM,[5][6] Microsoft,[7] Sun,[8] Fujitsu,[9] HP,[10] and Dell.[11]

Related Research Articles

In telecommunication, provisioning involves the process of preparing and equipping a network to allow it to provide new services to its users. In National Security/Emergency Preparedness telecommunications services, "provisioning" equates to "initiation" and includes altering the state of an existing priority service or capability.

A data center or data centre is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.

NetApp, Inc. is an American data storage and data management services company headquartered in San Jose, California. It ranked in the Fortune 500 from 2012 to 2021. Founded in 1992 with an initial public offering in 1995, NetApp offers cloud data services for management of applications and data both online and physically.

Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing, the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented.
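The metered-service model described above can be sketched as a simple per-unit billing calculation. The resource names and unit rates below are invented for illustration; the point is that the customer is charged for measured consumption, not a flat fee for reserved capacity.

```python
# Minimal sketch of utility (metered) billing with made-up unit rates:
# usage is recorded per resource and charged per unit consumed.

RATES = {"cpu_hours": 0.05, "gb_stored": 0.02}   # hypothetical prices, in dollars

def bill(usage):
    """usage: resource -> units consumed. Returns the total charge."""
    return sum(RATES[resource] * units for resource, units in usage.items())

charge = bill({"cpu_hours": 100, "gb_stored": 50})   # pay only for what was used
# -> 6.0 (100 * 0.05 + 50 * 0.02)
```

With zero usage the charge is zero, which captures the "low or no initial cost" property the paragraph mentions.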

Infrastructure as a service (IaaS) is a cloud computing service model by means of which computing resources are supplied by a cloud services provider. The IaaS vendor provides the storage, network, servers, and virtualization (which mostly refers, in this case, to emulating computer hardware). This service enables users to free themselves from maintaining an on-premises data center. The IaaS provider is hosting these resources in either the public cloud (meaning users share the same hardware, storage, and network devices with other users), the private cloud (meaning users do not share these resources), or the hybrid cloud (combination of both).

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

A virtual private cloud (VPC) is an on-demand configurable pool of shared resources allocated within a public cloud environment, providing a certain level of isolation between the different organizations (denoted as users hereafter) using the resources. The isolation between one VPC user and all other users of the same cloud (other VPC users as well as other public cloud users) is achieved normally through allocation of a private IP subnet and a virtual communication construct (such as a VLAN or a set of encrypted communication channels) per user. In a VPC, the previously described mechanism, providing isolation within the cloud, is accompanied with a virtual private network (VPN) function (again, allocated per VPC user) that secures, by means of authentication and encryption, the remote access of the organization to its VPC resources. With the introduction of the described isolation levels, an organization using this service is in effect working on a 'virtually private' cloud (that is, as if the cloud infrastructure is not shared with other users), and hence the name VPC.
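The per-tenant private subnet allocation described above can be illustrated with Python's standard `ipaddress` module. This is a simplified sketch of the addressing side only (no VLANs or VPN), and the tenant names and address block are invented.

```python
# Illustrative sketch of VPC-style address isolation: each tenant is
# carved its own private subnet out of a shared private block, so
# tenants' address ranges never overlap.

import ipaddress

def allocate_subnets(base_block, prefix, tenants):
    """Carve one /prefix subnet per tenant out of base_block."""
    subnets = ipaddress.ip_network(base_block).subnets(new_prefix=prefix)
    return {tenant: next(subnets) for tenant in tenants}

vpcs = allocate_subnets("10.0.0.0/8", 16, ["org-a", "org-b"])
# org-a and org-b receive disjoint /16 subnets: 10.0.0.0/16 and 10.1.0.0/16
```

Disjoint subnets give the address-level isolation; in a real VPC this is combined with a per-tenant VLAN or encrypted channel and a VPN for remote access, as the paragraph describes.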

Kaavo is a cloud computing management company. Kaavo was founded in November 2007 in the U.S. Kaavo pioneered top-down application-centric management of cloud infrastructure across public, private, and hybrid clouds.

Converged infrastructure is a way of structuring an information technology (IT) system which groups multiple components into a single optimized computing package. Components of a converged infrastructure may include servers, data storage devices, networking equipment and software for IT infrastructure management, automation and orchestration.

HP Cloud Service Automation is cloud management software from Hewlett Packard Enterprise (HPE) that companies and government agencies use to automate the management of cloud-based IT-as-a-service, from ordering through provisioning to retirement. HP Cloud Service Automation orchestrates the provisioning and deployment of complex IT services such as databases, middleware, and packaged applications. The software speeds deployment of application-based services across hybrid cloud delivery platforms and traditional IT environments.

Hewlett Packard Enterprise and its predecessor entities have a long history of developing and selling networking products. Today it offers campus and small-business networking products through its wholly owned company Aruba Networks, acquired in 2015. Before that, HP Networking was the entity within HP offering networking products.

FUJITSU Cloud IaaS Trusted Public S5 is a Fujitsu cloud computing platform that aims to deliver standardized enterprise-class public cloud services globally. It offers Infrastructure-as-a-Service (IaaS) from Fujitsu's data centres to provide computing resources that can be employed on-demand and suited to customers' needs.

Software-defined storage (SDS) is a marketing term for computer data storage software for policy-based provisioning and management of data storage independent of the underlying hardware. Software-defined storage typically includes a form of storage virtualization to separate the storage hardware from the software that manages it. The software enabling a software-defined storage environment may also provide policy management for features such as data deduplication, replication, thin provisioning, snapshots and backup.
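Policy-based provisioning as described above can be sketched as a feature-matching step: a volume request names a policy, and software picks any backend whose capabilities satisfy it, independent of the underlying hardware. The policy and backend names below are invented for illustration.

```python
# Hedged sketch of SDS policy-based placement (policy and backend names
# are hypothetical): a backend qualifies if it offers every feature the
# requested policy requires.

POLICIES = {
    "gold":   {"replication", "snapshots"},
    "bronze": {"thin_provisioning"},
}

BACKENDS = {
    "array-1": {"replication", "snapshots", "thin_provisioning"},
    "array-2": {"thin_provisioning"},
}

def place_volume(policy):
    """Return the first backend offering every feature the policy requires."""
    required = POLICIES[policy]
    for name, features in BACKENDS.items():
        if required <= features:       # set inclusion: backend covers the policy
            return name
    raise LookupError(f"no backend satisfies policy {policy!r}")

backend = place_volume("gold")         # -> "array-1"
```

Because placement is driven by the policy rather than by a specific device, the storage hardware can be swapped or extended without changing how volumes are requested, which is the separation the paragraph describes.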

Network as a service (NaaS) brings software-defined networking (SDN), programmable networking and API-based operation to WAN services, transport, hybrid cloud, multicloud, Private Network Interconnect, and internet exchange points.

An Internet area network (IAN) is a concept for a communications network that connects voice and data endpoints within a cloud environment over IP, replacing an existing local area network (LAN), wide area network (WAN) or the public switched telephone network (PSTN).

Cloud management is the management of cloud computing products and services.

An elastic cloud is a cloud computing offering that provides variable service levels based on changing needs.

Hyper-converged infrastructure (HCI) is a software-defined IT infrastructure that virtualizes all of the elements of conventional "hardware-defined" systems. HCI includes, at a minimum, virtualized computing, software-defined storage, and virtualized networking. HCI typically runs on commercial off-the-shelf (COTS) servers.

Data center management is the collection of tasks performed by those responsible for managing the ongoing operation of a data center. This includes business service management and planning for the future.

Composable disaggregated infrastructure (CDI), sometimes stylized as composable/disaggregated infrastructure, is a technology that allows enterprise data center operators to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. It is considered a class of converged infrastructure, and uses management software to combine compute, storage and network elements. It is similar to public cloud, except the equipment sits on premises in an enterprise data center.

References

  1. "Computation on Demand: The Promise of Dynamic Provisioning". 10 December 2007.
  2. An overview of continuous data protection.
  3. Amazon Elastic Compute Cloud.
  4. "IDC White Paper Building the Dynamic DataCenter: FlexFrame for SAP" (PDF). Archived from the original (PDF) on 2017-11-07. Retrieved 2017-10-30.
  5. IBM patent: Method For Dynamic Information Technology Infrastructure Provisioning
  6. IBM's dynamic infrastructure taking shape at TheRegister
  7. Microsoft's view of The Dynamic Datacenter covered by networkworld.
  8. Dynamic Infrastructure at Sun
  9. Fujitsu Dynamic Infrastructures
  10. Dynamic Infrastructure and Blades at HP Archived 2011-06-14 at the Wayback Machine
  11. Dell Converged Infrastructure