Software-defined data center

Software-defined data center (SDDC; also: virtual data center, VDC) is a marketing term that extends virtualization concepts such as abstraction, pooling, and automation to all data center resources and services to achieve IT as a service (ITaaS). [1] In a software-defined data center, "all elements of the infrastructure – networking, storage, CPU and security – are virtualized and delivered as a service." [2]

Because a wide variety of approaches can claim to support the SDDC, critics see the software-defined data center as a marketing tool and "software-defined hype," noting this variability. [3]

In 2013, analysts were divided into three groups: [3] those who dismissed it as "just another software-defined hype," those who thought most of the components were already available, and those who saw a potential future market. There was no unified agreement about the direction of SDDC. For some areas, such as software-defined networking, a market value of about US$3.7 billion by 2016 was projected, up from US$360 million in 2013. [3] (Software-defined networking was later expected to reach US$18.5 billion in 2022. [4] ) IDC estimated that the software-defined storage market was poised to expand faster than any other storage market. [3]

Description and core components

The software-defined data center encompasses a variety of concepts and data-center infrastructure components, with each component potentially provisioned, operated, and managed through an application programming interface (API). [5] Core architectural components that comprise the software-defined data center [6] include the following:

- Compute virtualization, a software implementation of a computer [7]
- Software-defined networking, which includes network virtualization [8]
- Software-defined storage, which includes storage virtualization
- Management and automation software, enabling an administrator to provision, control, and manage all software-defined data-center components

A software-defined data center differs from a private cloud, since a private cloud need only offer virtual-machine self-service [9] and may rely on traditional provisioning and management beneath that layer. SDDC concepts, by contrast, imagine a data center that can encompass private, public, and hybrid clouds. [10]
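The description above notes that each infrastructure component can be provisioned and managed through an API. A minimal sketch of what such declarative provisioning requests might look like; all resource types, field names, and the payload schema here are hypothetical illustrations, not any real vendor's API:

```python
import json

# Hypothetical sketch: build the JSON bodies a software-defined data
# center's management API might accept for individual resources.
# The schema below is illustrative only, not a real product's API.
def provision_request(resource_type: str, spec: dict) -> dict:
    """Describe one resource (compute, storage, network) declaratively."""
    return {"type": resource_type, "spec": spec}

# Compute, storage, and networking are all requested through the same
# programmatic interface rather than configured on dedicated hardware.
requests_batch = [
    provision_request("compute", {"vcpus": 4, "memory_gb": 16}),
    provision_request("storage", {"size_gb": 500, "replicas": 2}),
    provision_request("network", {"cidr": "10.0.0.0/24"}),
]
payload = json.dumps(requests_batch)
```

The point of the sketch is the uniformity: every class of data-center resource is described as data and submitted to software, rather than installed and cabled by hand.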

Origins and development

Data centers traditionally lacked the capacity to accommodate total virtualization. [11] By 2013, companies began laying the foundation for software-defined data centers with virtualization. [3] Ben Cherian of Midokura considered Amazon Web Services as a catalyst for the move toward software-defined data centers because it "convinced the world that the data center could be abstracted into much smaller units and could be treated as disposable pieces of technology, which in turn could be priced as a utility." [5]

Potential impact

In 2013, the term software-defined data center was promoted as a paradigm shift. [5] [12] According to Steve Herrod, the promise of the software-defined data center was that companies would no longer need to rely on specialized hardware or hire consultants to install and program the hardware in its specialized language. [13] Rather, IT would define applications and all of the resources they require, including compute, storage, networking, security, and availability, and group all of the required components to create a "logical application." [13]
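Herrod's "logical application" idea, grouping an application with all of the resources it requires, can be sketched as a single declarative definition. Every field name below is illustrative, not a specific vendor's schema:

```python
# Hypothetical sketch of a "logical application": one declarative record
# grouping compute, storage, networking, security, and availability.
# All names and fields are illustrative, not a real product's schema.
logical_app = {
    "name": "orders-service",
    "compute": {"instances": 3, "vcpus": 2, "memory_gb": 8},
    "storage": {"size_gb": 200, "tier": "ssd"},
    "network": {"subnet": "10.1.0.0/24", "load_balanced": True},
    "security": {"firewall_rules": [{"port": 443, "allow": "any"}]},
    "availability": {"min_instances": 2, "zones": 2},
}

def validate(app: dict) -> bool:
    """Check that every resource class the definition requires is present."""
    required = {"compute", "storage", "network", "security", "availability"}
    return required <= set(app)
```

The contrast with traditional operations is that this definition, not per-device configuration, is what IT would author and what the data-center software would act on.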

Commonly cited benefits of software-defined data centers include improved efficiency [14] from extending virtualization throughout the data center; increased agility [15] from provisioning applications quickly; improved control [15] over application availability and security through policy-based governance; and the flexibility [14] [15] to run new and existing applications in multiple platforms and clouds.

In addition, a software-defined data center implementation could reduce a company's energy usage by enabling servers and other data center hardware to run at decreased power levels or be turned off. [15] Some believe that software-defined data centers improve security by giving organizations more control over their hosted data and security levels, compared to security provided by hosted-cloud providers. [15]

The software-defined data center was marketed to drive down prices for data center hardware and challenge traditional hardware vendors to develop new ways to differentiate their products through software and services. [16]

Challenges

The concepts of software-defined in general, and software-defined data centers in particular, have been dismissed by some as "nonsense," "marketecture," and "software-defined hype." [3] Some critics believe that only a minority of companies with "completely homogeneous IT systems" already in place, such as Yahoo! and Google, can transition to software-defined data centers. [3]

According to some observers, software-defined data centers won’t necessarily eliminate challenges that relate to handling the differences between development and production environments; managing a mix of legacy and new applications; or delivering service-level agreements (SLAs). [3]

Software-defined networking was seen as essential to the software-defined data center, but it is also considered to be the “least mature technology” required to enable the software-defined data center. [11] However, several companies, including Arista Networks, Cisco, Microsoft and VMware, market products to enable virtual networks that are provisioned, extended, and moved across existing physical networks. [11]

Several competing network virtualization standards already existed by 2012. [11] Neutron, the networking component of the open-source software OpenStack project, provides an application-level abstraction of network resources and includes an interface for configuring virtual switches. [11] [17]
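Neutron exposes network resources over an HTTP API. A small sketch of the request body that Neutron's Networking API v2.0 accepts when creating a network (`POST /v2.0/networks`); authentication and the HTTP call itself are omitted here:

```python
import json

# Sketch of the JSON body for OpenStack Neutron's "create network" call
# (POST /v2.0/networks in the Networking API v2.0). Authentication and
# the actual HTTP request are omitted.
def create_network_body(name: str, admin_state_up: bool = True) -> str:
    """Serialize a minimal network-creation request for Neutron."""
    return json.dumps({"network": {"name": name,
                                   "admin_state_up": admin_state_up}})

body = create_network_body("demo-net")
```

This is the application-level abstraction the text describes: a virtual network is requested as a small JSON document, and Neutron maps it onto the underlying physical network.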

The software-defined data center approach will force IT organizations to adapt. Software-defined environments require rethinking many IT processes, including automation, metering, and billing, as well as how service delivery, service activation, and service assurance are executed. [15] A widespread transition to the SDDC could take years. [6]


References

  1. Davidson, Emily A. "The Software-Defined-Data-Center (SDDC): Concept Or Reality? [VMware]". Softchoice Advisor Article. Softchoice Advisor. Archived from the original on 14 August 2013. Retrieved 28 June 2013.
  2. Rouse, Margaret. "Definition: Software Defined Datacenter". Archived from the original on 5 August 2016. Retrieved 25 February 2014.
  3. Kovar, Joseph F. (13 May 2013). "Software-Defined Data Centers: Should You Jump On The Bandwagon?". CRN. Retrieved 10 February 2014.
  4. Research and Markets (27 February 2023). "Software-Defined Networking Global Market to Reach $75.6 Billion by 2030: Rising BYOD Adoption to Drive Robust Growth in SDN Market". GlobeNewswire News Room (Press release). Retrieved 2 May 2023.
  5. Cherian, Ben. "What Is the Software Defined Data Center and Why Is It Important?". All Things D. Retrieved 28 June 2013.
  6. Volk, Torsten. "The Software-Defined Datacenter: Part 2 of 4 – Core Components". EMA Blogs. Retrieved 28 June 2013.
  7. "The software defined data center - part 2: compute". CohesiveFT Blog. Retrieved 28 June 2013.
  8. Marshall, David (18 March 2013). "VMware's software-defined data center will include NSX network virtualization". InfoWorld. Retrieved 28 June 2013.
  9. Donahole, Stephanie (10 May 2019). "Three Ways of Embracing the Software-Defined Data Center". Colocation America. Retrieved 15 September 2021.
  10. Otey, Michael (29 May 2013). "Moving Toward the Software-Defined Datacenter". IT Pro. Archived from the original on 5 September 2013. Retrieved 28 June 2013.
  11. Knorr, Eric (13 August 2012). "What the software-defined data center really means". InfoWorld. Retrieved 28 June 2013.
  12. Shread, Paul (25 July 2013). "Software-Defined Data Centers Could Change the IT Landscape". Datamation. Retrieved 22 August 2016.
  13. Herrod, Steve. "Interop and the Software-Defined Datacenter". VMware blog. Retrieved 28 June 2013.
  14. Earls, Alan. "Is the software-defined data center ready for the mainstream?". SearchDataCenter. Archived from the original on 9 July 2018. Retrieved 28 June 2013.
  15. Venkatraman, Archana. "Software-defined datacentres demystified". ComputerWeekly.com. Retrieved 28 June 2013.
  16. Manca, Pete (29 May 2013). "Software-Defined Data Centers: What's the Buzz All About?". Wired. Retrieved 28 June 2013.
  17. "Neutron's developer documentation". OpenStack. Retrieved 22 August 2016.