HP Performance Optimized Datacenter

HP Performance Optimized Datacenter (POD) 240a, 40c, 20c
Developer: Hewlett-Packard (HP)
Release date: HP POD 40c launched in 2008; HP POD 20c launched in 2010; HP POD 240a launched in 2011
Dimensions: HP POD 40c uses 40-foot modules; HP POD 20c uses 20-foot modules; HP POD 240a uses multiple 40-foot modules
Website: HP Performance Optimized Data Center page

Additional specifications
Compute capacity: can support over 7,000 compute nodes or 24,000 large-form-factor (LFF) hard drives; up to 1.3 megawatts total, at up to 30 kW per rack
Power Usage Effectiveness (PUE) rating: POD 20c and 40c: 1.25; POD 240a: 1.05 to 1.4, depending on IT load and location
Cooling method: POD 20c and 40c: water cooled; POD 240a: HP Adaptive Cooling Technology (air cooled)

The HP Performance Optimized Datacenter (POD) is a range of three modular data centers manufactured by HP.

Housed in purpose-built modules of standard shipping-container form factor, either 20 or 40 feet in length, the data centers are shipped preconfigured with racks, cabling, and power and cooling equipment. [1] They can support technologies from HP or third parties. [2] Depending on the model, the claimed capacity is the equivalent of up to 10,000 square feet of typical data center space. [3] Depending on the model, they use either chilled-water cooling [4] or direct-expansion air cooling combined with air-side economization. [5]

HP POD 20c and 40c

A 40-foot HP POD.

The POD 40c was launched in 2008. This 40-foot modular data center has a maximum power capacity of 27 kW per rack. The POD 40c supports 3,500 compute nodes or 12,000 LFF hard drives. HP has claimed this offers the computing equivalent of 4,000 square feet of traditional data center space. [4]

The POD 20c was launched in 2010. This modular data center is housed in a 20-foot container and holds 10 industry-standard 50U racks of hardware. The POD uses an efficient cooling design of variable-speed fans, hot- and cold-aisle containment, and close-coupled cooling to maximize capacity and efficiency. The POD 20c can operate at a Power Usage Effectiveness of 1.25. [6] PODs can maintain higher cold-aisle temperatures than typical brick-and-mortar data centers. [7] The cold aisle in traditional data centers is typically kept at 68 to 72 degrees Fahrenheit, whereas the POD can operate efficiently at cold-aisle temperatures of up to 90 degrees. [6]

Both the 20c and the 40c are water-cooled. The benefit of water cooling is higher capacity and lower power usage than traditional air-cooled systems. [8]

HP POD 240a

The HP POD 240a was launched in June 2011. [9] It can be configured with two rows of 44 extra-height 50U racks, housing 4,400 server nodes of typical size or up to 7,040 of the densest server nodes. [10] HP claimed that a typical brick-and-mortar data center housing this equipment would require 10,000 square feet of floor space. [10]

HP claims "near-perfect" energy efficiency for the POD 240a, which it nicknames the "EcoPOD". [11] HP says it has recorded estimated Power Usage Effectiveness (PUE) ratios of 1.05 to 1.4, depending on IT load and location. [5] A PUE of 1.0 would represent perfect efficiency. [11]
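The PUE figures above can be made concrete with a short illustrative calculation. PUE is the ratio of total facility power to IT equipment power; the load figures below are hypothetical, chosen only so the results land at the endpoints of the 1.05 to 1.4 range HP reported for the POD 240a.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the Power Usage Effectiveness ratio.

    PUE = total facility power / IT equipment power.
    A value of 1.0 would mean every watt drawn by the facility
    reaches the IT equipment (perfect efficiency).
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: 1,000 kW of IT equipment plus cooling/power overhead.
best_case = pue(1050.0, 1000.0)   # 50 kW overhead  -> PUE 1.05
worst_case = pue(1400.0, 1000.0)  # 400 kW overhead -> PUE 1.4

print(best_case, worst_case)
```

At a PUE of 1.05, only about 5% of the facility's power goes to cooling and power distribution rather than computing, which is why HP describes the figure as "near-perfect".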

The POD 240a has a refrigerant-based, air-cooled HVAC system with air-side economization. [3] When ambient air conditions are cool enough, the 240a operates in economizer, or free-air, mode, in which outside air is taken in and circulated inside the modular data center to cool the IT equipment. [5] [12] [13]

Customers

In September 2013, eBay announced that it was "deploying the world's largest modular data center, with 44 rack positions and 1.4 megawatts of power" using HP EcoPODs. [14]


References

  1. Barrett, Alex, "Purdue chooses containerized data center over colocation," SearchDataCenter.com, July 2010.
  2. Miller, Rich, "CRG West Customizes for HP Containers," DataCenterKnowledge, March 2009.
  3. Baker, Jim, "HP Data Center Infrastructure in a POD," The Clipper Group Navigator, August 18, 2011.
  4. Miller, Rich, "A Look Inside HP's POD Container," DataCenterKnowledge, July 2008.
  5. Miller, Rich, "HP Unveils Updated EcoPOD Modular Design," DataCenterKnowledge, June 7, 2011.
  6. Miller, Rich, "HP Offers 20-Foot Version of POD Container," DataCenterKnowledge, February 3, 2010.
  7. Miller, Rich, "The HP POD: 90 Degrees in the Cold Aisle," DataCenterKnowledge, December 2008.
  8. Lemon, Sumner, "IBM eyes expanded water cooling for data centers," NetworkWorld, February 5, 2007. Archived from the original on April 15, 2010. Retrieved November 15, 2011.
  9. Enderle, Rob, "HP's EcoPod: Beginning of the End for Traditional Data Centers," ITBusinessEdge, June 7, 2011. Archived from the original on August 26, 2011. Retrieved November 15, 2011.
  10. Prickett Morgan, Timothy, "HP welds data center containers into EcoPODs," The Register, June 6, 2011.
  11. Wheeland, Matthew, "HP Unveils 'Eco-POD' Data Center, Touts 95% Energy Savings," GreenerComputingNews, June 2011.
  12. Clark, Jack, "HP to announce free-cooled datacentre Pod in June," ZDNet, May 2011.
  13. McNevin, Ambrose, "An in-depth look at HP's EcoPOD," DatacenterDynamics, June 15, 2011.
  14. "Our News - eBay Inc".