| Developer | Hewlett-Packard (HP) |
|---|---|
| Release date | HP POD 40c launched in 2008; HP POD 20c launched in 2010; HP POD 240a launched in 2011 |
| Dimensions | HP POD 40c uses 40-foot modules; HP POD 20c uses 20-foot modules; HP POD 240a uses multiple 40-foot modules |
| Website | HP Performance Optimized Data Center page |
| Additional specifications | |
| Compute capacity | Supports over 7,000 compute nodes or 24,000 large-form-factor (LFF) hard drives; up to 1.3 megawatts of power in total, with up to 30 kW per rack |
| Power Usage Effectiveness (PUE) rating | POD 20c and 40c: 1.25; POD 240a: 1.05 to 1.4, depending on IT load and location |
| Cooling method | POD 20c and 40c: water cooled; POD 240a: HP Adaptive Cooling Technology (air cooled) |
The HP Performance Optimized Datacenter (POD) is a range of three modular data centers manufactured by HP.
Housed in purpose-built modules with a standard shipping-container form factor, either 20 feet or 40 feet in length, the data centers are shipped preconfigured with racks, cabling and equipment for power and cooling. [1] They can support technologies from HP or third parties. [2] The claimed capacity is the equivalent of up to 10,000 square feet of typical data center space, depending on the model. [3] Depending on the model, they use either chilled-water cooling [4] or direct-expansion air cooling combined with air-side economization. [5]
The POD 40c was launched in 2008. This 40-foot modular data center supports a power capacity of up to 27 kW per rack and can house 3,500 compute nodes or 12,000 LFF hard drives. HP has claimed this offers the computing equivalent of 4,000 square feet of traditional data center space. [4]
The POD 20c was launched in 2010. This modular data center is housed in a 20-foot container and holds 10 industry-standard 50U racks of hardware. The POD uses an efficient cooling design of variable-speed fans, hot- and cold-aisle containment, and close coupled cooling to maximize capacity and efficiency. The POD 20c can operate at a Power Usage Effectiveness (PUE) of 1.25. [6] PODs can maintain cold-aisle temperatures higher than typical brick-and-mortar data centers. [7] The cold aisle in a traditional data center is typically kept at 68 to 72 degrees Fahrenheit, whereas the POD can operate efficiently at cold-aisle temperatures of up to 90 degrees Fahrenheit. [6]
Both the 20c and the 40c are water-cooled. Compared with traditional air-cooled systems, water cooling offers higher capacity and lower power usage. [8]
The HP POD 240a was launched in June 2011. [9] It can be configured with two rows of 44 extra-height 50U racks, which can house 4,400 server nodes of typical size or 7,040 server nodes of the densest size. [10] HP claimed that a typical brick-and-mortar data center housing this equipment would require 10,000 square feet of floor space. [10]
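As a rough density check, and assuming the 44 racks are the total across both rows (consistent with the 44 rack positions in the eBay deployment cited below), the quoted node counts work out to roughly 100 to 160 nodes per 50U rack:

```python
# Back-of-the-envelope density check for the POD 240a figures quoted above.
# Assumption: 44 racks in total, matching the eBay deployment's "44 rack positions";
# this is an illustrative reading of the spec, not an HP-published breakdown.
racks = 44
typical_nodes = 4_400
dense_nodes = 7_040

print(typical_nodes / racks)  # 100.0 typical-size nodes per 50U rack
print(dense_nodes / racks)    # 160.0 densest-size nodes per 50U rack
```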
HP claims "near-perfect" energy efficiency for the POD 240a, which it nicknames the "EcoPOD". [11] HP says it has recorded estimated power usage effectiveness (PUE) ratios of 1.05 to 1.4, depending on IT load and location. [5] A PUE of 1.0 would represent perfect efficiency. [11]
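To make those figures concrete, PUE is the ratio of total facility power to IT equipment power, so a PUE of 1.05 means only about 5% of the facility's draw goes to cooling, power distribution and other overhead. A minimal sketch of the arithmetic, using an assumed 1 MW IT load for illustration:

```python
# PUE = total facility power / IT equipment power (a PUE of 1.0 is perfect efficiency).
# Illustrative example with an assumed 1.0 MW IT load; figures are not HP measurements.
it_load_kw = 1_000.0

for pue in (1.05, 1.25, 1.4):
    total_kw = it_load_kw * pue          # total facility draw implied by this PUE
    overhead_kw = total_kw - it_load_kw  # power spent on cooling, distribution, etc.
    print(f"PUE {pue}: total {total_kw:.0f} kW, overhead {overhead_kw:.0f} kW")
```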
The POD 240a has a refrigerant-based, air-cooled HVAC system with air-side economization. [3] When ambient air conditions are cool enough, the 240a uses economizer or free-air mode, in which outside air is drawn in and circulated inside the modular data center to cool the IT equipment. [5] [12] [13]
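The control logic behind such air-side economization can be sketched as a simple mode decision: use outside air when it is cold enough, and fall back to the refrigerant (direct-expansion) system otherwise. The changeover threshold below is an illustrative assumption, not a published HP Adaptive Cooling Technology parameter:

```python
# Simplified sketch of an air-side economizer mode decision.
# The 24 degC changeover threshold is an illustrative assumption for this example,
# not an HP Adaptive Cooling Technology setting.
ECONOMIZER_MAX_OUTSIDE_C = 24.0

def cooling_mode(outside_air_c: float) -> str:
    """Pick a cooling mode from the outside-air temperature."""
    if outside_air_c <= ECONOMIZER_MAX_OUTSIDE_C:
        return "free-air"  # bring in and circulate outside air
    return "dx"            # mechanical, refrigerant-based (direct expansion) cooling

if __name__ == "__main__":
    for temp in (10.0, 22.0, 35.0):
        print(f"outside air {temp:.0f} degC -> {cooling_mode(temp)} mode")
```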
In September 2013, eBay announced that it was deploying "the world's largest modular data center, with 44 rack positions and 1.4 megawatts of power" using HP EcoPODs. [14]
A colocation center, or "carrier hotel", is a type of data centre where equipment, space, and bandwidth are available for rental to retail customers. Colocation facilities provide space, power, cooling, and physical security for the server, storage, and networking equipment of other firms and also connect them to a variety of telecommunications and network service providers with a minimum of cost and complexity.
A server farm or server cluster is a collection of computer servers, usually maintained by an organization to supply server functionality far beyond the capability of a single machine. They often consist of thousands of computers which require a large amount of power to run and to keep cool. Even at the optimum performance level, a server farm has enormous financial and environmental costs. They often include backup servers that can take over the functions of primary servers that may fail. Server farms are typically collocated with the network switches and/or routers that enable communication between different parts of the cluster and the cluster's users. Server "farmers" typically mount computers, routers, power supplies and related electronics on 19-inch racks in a server room or data center.
A data center or data centre is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space and minimize power consumption, while still having all the functional components needed to be considered a computer. Unlike a rack-mount server, a blade server fits inside a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system, which may itself be rack-mounted. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole.
Sun Modular Datacenter is a portable data center built into a standard 20-foot intermodal container manufactured and marketed by Sun Microsystems. An external chiller and power were required for the operation of a Sun MD. A data center of up to 280 servers could be rapidly deployed by shipping the container as ordinary freight to locations that might not be suitable for a building or another structure, and connecting it to the required infrastructure. Sun stated that the system could be made operational for 1% of the cost of building a traditional data center.
A server room is a room, usually air-conditioned, devoted to the continuous operation of computer servers. An entire building or station devoted to this purpose is a data center.
Steele is a supercomputer that was installed at Purdue University on May 5, 2008. The high-performance computing cluster is operated by Information Technology at Purdue (ITaP), the university's central information technology organization. ITaP also operates clusters named Coates built in 2009, Rossmann built in 2010, and Hansen and Carter built in 2011. Steele was the largest campus supercomputer in the Big Ten outside a national center when built. It ranked 104th on the November 2008 TOP500 Supercomputer Sites list.
Liebert Corporation was a global manufacturer of power, precision cooling and infrastructure management systems for mainframe computers, server racks, and critical process systems, headquartered in Westerville, Ohio. Founded in 1965, the company employed more than 1,800 people across 12 manufacturing plants worldwide. Since 2016, Liebert has been a subsidiary of Vertiv.
A point of delivery, or PoD, is "a module of network, compute, storage, and application components that work together to deliver networking services. The PoD is a repeatable design pattern, and its components maximize the modularity, scalability, and manageability of data centers."
Power usage effectiveness (PUE), or power unit efficiency, is a ratio that describes how efficiently a computer data center uses energy; specifically, how much of the energy is used by the computing equipment, in contrast to cooling and other overhead that supports it.
OVH, legally OVH Groupe SA, is a French cloud computing company which offers VPS, dedicated servers and other web services. As of 2016 OVH owned the world's largest data center in surface area. As of 2019, it was the largest hosting provider in Europe, and the third largest in the world based on physical servers. According to W3Techs, OVH has 3.4% of website data center market share in 2024. The company was founded in 1999 by the Klaba family and is headquartered in Roubaix, France. In 2019 OVH adopted OVHcloud as its public brand name.
A modular data center system is a portable method of deploying data center capacity. A modular data center can be placed anywhere data capacity is needed.
Amplidata is a privately-held cloud storage technology provider based in Lochristi, Belgium. In November 2010, Amplidata opened its U.S. headquarters in Redwood City, California. The research and development department has locations in Belgium and Egypt, while the sales and support departments are represented in a number of countries in Europe and North America.
HP Flexible Data Center, also termed FlexDC, is a modular data center built from prefabricated components by Hewlett-Packard and introduced in 2010. It is housed in five large buildings that form the shape of a butterfly. The Flexible DC looks like a traditional building, but it is fabricated off-site in order to circumvent the two years it often takes for traditional building construction. The building consists of a central admin area surrounded by one to four data halls. FlexDC offers cooling options that are optimal for each type of climate.
Energy Logic is a vendor-neutral approach to achieving energy efficiency in data centers. Developed and initially released in 2007, the Energy Logic efficiency model suggests ten holistic actions – encompassing IT equipment as well as traditional data center infrastructure – guided by the principles dictated by the "Cascade Effect."
Simon Rohrich is an American inventor and entrepreneur who is the co-founder and Chief Technology Evangelist of Elliptical Mobile Solutions, a provider of mobile micro-modular data centers. In a 2010 white paper he coined the term micro modular data center (MMDC), defining the technology type used in his data centers. This technology has been used by numerous corporations and governmental entities. Rohrich is also known for his participation in the Armored Combat League (ACL) along with the Society for Creative Anachronism's armoured combat tournaments and events.
Immersion cooling is an IT cooling practice by which complete servers are immersed in a dielectric, electrically non-conductive fluid that has significantly higher thermal conductivity than air. Heat is removed from a system by putting the coolant in direct contact with hot components, and circulating the heated liquid through heat exchangers. This practice is highly effective because liquid coolants can absorb more heat from the system, and are more easily circulated through the system, than air. Immersion cooling has many benefits, including but not limited to: sustainability, performance, reliability and cost.
Close coupled cooling is a recent generation of cooling system used particularly in data centers. The goal of close coupled cooling is to bring heat transfer closest to its source: the equipment rack. Moving the air conditioner closer to the equipment rack ensures more precise delivery of inlet air and more immediate capture of exhaust air.
A wireless data center is a type of data center that uses wireless communication technology instead of cables to store, process and retrieve data for enterprises. The development of wireless data centers arose as a solution to growing cabling complexity and hotspots. The wireless technology was introduced by Shin et al., who replaced all cables with 60 GHz wireless connections at the Cayley data center.
A green data center, or sustainable data center, is a service facility which utilizes energy-efficient technologies. They do not contain obsolete systems, and take advantage of newer, more efficient technologies.