Blade server

Supermicro SBI-7228R-T2X blade server, containing two dual-CPU server nodes

A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space and minimize power consumption, among other considerations, while still having all the functional components to be considered a computer. [1] Unlike a rack-mount server, a blade server fits inside a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system, which may itself be rack-mounted. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole.

In a standard server-rack configuration, one rack unit or 1U (19 inches (480 mm) wide and 1.75 inches (44 mm) tall) defines the minimum possible size of any equipment. The principal benefit and justification of blade computing relates to lifting this restriction so as to reduce size requirements. The most common computer rack form-factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation. As of 2014, densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable. [2]
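As a rough, illustrative calculation (a minimal sketch using only the figures cited above, not tied to any particular product), the density gain can be expressed as follows:

```python
# Illustrative arithmetic only; the blade figure is the 2014 density cited above.
RACK_HEIGHT_U = 42            # most common rack form factor
CONVENTIONAL_SERVER_U = 1     # a conventional 1U rack-mount server

conventional_per_rack = RACK_HEIGHT_U // CONVENTIONAL_SERVER_U   # 42 servers
blade_servers_per_rack = 1440                                    # cited figure

print(f"Conventional 1U servers per rack: {conventional_per_rack}")
print(f"Blade servers per rack (cited):   {blade_servers_per_rack}")
print(f"Approximate density gain:         {blade_servers_per_rack / conventional_per_rack:.0f}x")
```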

Blade enclosure

The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them among the blade computers, the overall utilization becomes higher. The specifics of which services are provided vary by vendor.

HP BladeSystem c7000 enclosure (populated with 16 blades), with two 3U UPS units below

Power

Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than required within computers. Converting this AC supply to the DC voltages required inside the computer takes one or more power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers often have redundant power supplies, again adding to the bulk and heat output of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures. [3] [4] This setup reduces the number of PSUs required to provide a resilient power supply.
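As an illustration of this consolidation (a hypothetical sketch; the server count, redundancy schemes and PSU figures are assumptions chosen for the example, not vendor specifications):

```python
# Hypothetical PSU counts; the redundancy schemes and figures below are
# assumptions chosen only to illustrate the consolidation, not vendor data.
servers = 16                          # servers to be deployed

# Discrete rack servers: each carries its own redundant (1+1) pair of PSUs.
psus_discrete = servers * 2           # 32 PSUs in total

# Blade enclosure: a shared pool of PSUs sized N+1 serves every blade.
psus_for_load = 5                     # assumed PSUs needed to carry the load
psus_enclosure = psus_for_load + 1    # 6 PSUs in total

print(f"PSUs for {servers} discrete servers (1+1 each): {psus_discrete}")
print(f"PSUs for one {servers}-blade enclosure (N+1):   {psus_enclosure}")
```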

The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (or UPS) units, including units targeted specifically towards blade servers (such as the BladeUPS).

Cooling

During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans.

A frequently underestimated problem when designing high-performance computer systems involves the conflict between the amount of heat a system generates and the ability of its fans to remove the heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade-enclosures feature variable-speed fans and control logic, or even liquid cooling systems [5] [6] that adjust to meet the system's cooling requirements.

At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling once racks are populated to more than 50% of capacity. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers, because up to 128 blade servers can fit in the same rack that will only hold 42 1U rack-mount servers. [7]
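A back-of-the-envelope comparison makes the point (a sketch only; the per-server wattages are assumed values for illustration, not measured figures):

```python
# Back-of-the-envelope heat comparison; the wattages are assumptions chosen
# only to show why a densely packed blade rack needs more cooling capacity
# even though each blade individually draws less power than a 1U server.
blades_per_rack = 128          # blade figure cited above
rack_servers_per_rack = 42     # 42 x 1U conventional servers

WATTS_PER_BLADE = 250          # assumed average draw per blade
WATTS_PER_1U_SERVER = 400      # assumed average draw per 1U server

blade_rack_kw = blades_per_rack * WATTS_PER_BLADE / 1000
conventional_rack_kw = rack_servers_per_rack * WATTS_PER_1U_SERVER / 1000

print(f"Fully populated blade rack: ~{blade_rack_kw:.1f} kW of heat to remove")
print(f"Fully populated 1U rack:    ~{conventional_rack_kw:.1f} kW of heat to remove")
```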

Networking

Blade servers generally include integrated or optional network interface controllers for Ethernet, host adapters for Fibre Channel storage systems, or converged network adapters that combine storage and data traffic over a single Fibre Channel over Ethernet interface. In many blades, at least one interface is embedded on the motherboard and extra interfaces can be added using mezzanine cards.

A blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades. [8] [9]

Storage

While computers typically use hard disks to store operating systems, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, E-SATA, SCSI, SAS, DAS, FC and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade; the Intel Modular Server System is one example of such an implementation.

Other blades

Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage, SAN and fibre-channel access can slot into the enclosure to provide these services to all members of the enclosure.

Systems administrators can use storage blades where a requirement exists for additional local storage. [10] [11] [12]

Uses

Cray XC40 supercomputer cabinet with 48 blades, each containing 4 nodes with 2 CPUs each

Blade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. Although blade-server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from the same vendor.

Eventual standardization of the technology might result in more choices for consumers; [13] [14] as of 2009 increasing numbers of third-party software vendors have started to enter this growing field. [15]

Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server-farm that borrows from mainframe packaging, cooling, and power-supply technology. Very large computing tasks may still require server farms of blade servers, and because of blade servers' high power density, can suffer even more acutely from the heating, ventilation, and air conditioning problems that affect large conventional server farms.

History

Developers first placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s, soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process control industry as an alternative to minicomputer-based control systems. Early models stored programs in EPROM and were limited to a single function with a small real-time executive.

The VMEbus architecture (c.1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards to provide I/O, memory, or additional computing.

In the 1990s, the PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure, called CompactPCI, for the then-emerging Peripheral Component Interconnect (PCI) bus. CompactPCI was invented by Ziatech Corp of San Luis Obispo, CA, and developed into an industry standard. Common among these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one master board in charge, or two redundant fail-over masters, coordinating the operation of the entire system. Moreover, this system architecture provided management capabilities not present in typical rack-mount computers, much like those found in ultra-high-reliability systems: managing power supplies and cooling fans, and monitoring the health of other internal components.

The demands of managing hundreds or thousands of servers in the emerging Internet data centers, where the manpower simply did not exist to keep pace, meant that a new server architecture was needed. In 1998 and 1999 this new blade server architecture was developed at Ziatech, based on its CompactPCI platform, to house as many as 14 "blade servers" in a standard 19" 9U-high rack-mounted chassis, allowing in this configuration as many as 84 servers in a standard 84-rack-unit 19" rack. What this new architecture brought to the table was a set of new interfaces to the hardware, specifically to provide the capability to remotely monitor the health and performance of all major replaceable modules, which could be changed or replaced while the system was in operation. The ability to change, replace or add modules within the system while it is in operation is known as hot-swap. Unlike any other server system at the time, the Ketris blade servers routed Ethernet across the backplane (where the server blades would plug in), eliminating more than 160 cables in a single 84-rack-unit-high 19" rack. For a large data center, tens of thousands of failure-prone Ethernet cables would be eliminated. Further, this architecture provided the capability to remotely inventory the modules installed in each system chassis without the blade servers operating. It also enabled servers (e.g. web servers) to be provisioned (powered up, with operating systems and application software installed) remotely from a network operations center (NOC). The system architecture, when it was announced, was called Ketris, named after the Ketri sword, worn by nomads in such a way as to be drawn very quickly as needed. Ketris was first envisioned by Dave Bottom, developed by an engineering team at Ziatech Corp in 1999, and demonstrated at the Networld+Interop show in May 2000. Patents were awarded for the Ketris blade server architecture.[citation needed] In October 2000 Ziatech was acquired by Intel Corp and the Ketris blade server systems became a product of the Intel Network Products Group.[citation needed]

PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification was adopted in Sept 2001. [16] This provided the first open architecture for a multi-server chassis.

The second generation of Ketris was developed at Intel as an architecture for the telecommunications industry, to support the build-out of IP-based telecom services and in particular the LTE (Long Term Evolution) cellular network build-out. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability, dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, their operating costs (manpower to manage and maintain) are dramatically lower, and for traditional servers operating costs often dwarf acquisition costs. AdvancedTCA is promoted for telecommunications customers; however, in Internet data centers, where thermal as well as other maintenance and operating costs had become prohibitively expensive, this blade server architecture, with remote automated provisioning and health and performance monitoring and management, offered significantly lower operating costs.

The first commercialized blade-server architecture[ citation needed ] was invented by Christopher Hipp and David Kirkeby, and their patent was assigned to Houston-based RLX Technologies. [17] RLX, which consisted primarily of former Compaq Computer Corporation employees, including Hipp and Kirkeby, shipped its first commercial blade server in 2001. [18] RLX was acquired by Hewlett-Packard in 2005. [19]

The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. In addition to the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management, and networking due to the pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these on a per server box basis.

In 2011, research firm IDC identified the major players in the blade market as HP, IBM, Cisco, and Dell. [20] Other companies selling blade servers include Supermicro and Hitachi.

Blade models

Cisco UCS blade servers in a chassis

The prominent brands in the blade server market are Supermicro, Cisco Systems, HPE, Dell and IBM, though the latter sold its x86 server business to Lenovo in 2014 after selling its consumer PC line to Lenovo in 2005. [21]

In 2009, Cisco announced blades in its Unified Computing System product line, consisting of a 6U-high chassis holding up to 8 blade servers, a heavily modified Nexus 5K switch rebranded as a fabric interconnect, and management software for the whole system. [22] HP's initial line consisted of two chassis models: the c3000, which holds up to 8 half-height ProLiant line blades (also available in tower form), and the c7000 (10U), which holds up to 16 half-height ProLiant blades. Dell's product, the M1000e, is a 10U modular enclosure that holds up to 16 half-height PowerEdge blade servers or 32 quarter-height blades.
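The chassis figures above imply different per-rack densities. The following sketch (whole enclosures only, using the heights and capacities stated in this section, with no allowance for switches, UPS units or cabling) computes how many blades fit in a standard 42U rack:

```python
# Blades per standard 42U rack, using the enclosure heights and capacities
# stated above; whole chassis only, with no space reserved for other gear.
RACK_U = 42

enclosures = {
    "Cisco UCS (6U, 8 blades)":             (6, 8),
    "HP c7000 (10U, 16 half-height)":       (10, 16),
    "Dell M1000e (10U, 16 half-height)":    (10, 16),
    "Dell M1000e (10U, 32 quarter-height)": (10, 32),
}

for name, (height_u, blades_per_chassis) in enclosures.items():
    chassis = RACK_U // height_u
    print(f"{name}: {chassis} chassis -> {chassis * blades_per_chassis} blades per rack")
```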

See also

Backplane
Eurocard (printed circuit board)
Motherboard
Single-board computer
HPE Integrity Servers
PICMG
AdvancedTCA (Advanced Telecommunications Computing Architecture)
PCI eXtensions for Instrumentation (PXI)
ProLiant
HPE Superdome
Stackable switch
Cray CX1
HPE BladeSystem
CompactPCI PlusIO
CompactPCI Serial
Dell M1000e
Converged network adapter
Dell PowerEdge VRTX
Modular crate electronics
MicroTCA

References

  1. "Data Center Networking – Connectivity and Topology Design Guide" (PDF). Enterasys Networks, Inc. 2011. Archived from the original (PDF) on 2013-10-05. Retrieved 2013-09-05.
  2. "HP updates Moonshot server platform with ARM and AMD Opteron hardware". Incisive Business Media Limited. 9 Dec 2013. Archived from the original on 16 April 2014. Retrieved 2014-04-25.
  3. "HP BladeSystem p-Class Infrastructure". Archived from the original on 2006-05-18. Retrieved 2006-06-09.
  4. Sun Blade Modular System
  5. Sun Power and Cooling
  6. "HP Thermal Logic technology" (PDF). Archived from the original (PDF) on 2007-01-23. Retrieved 2007-04-18.
  7. "HP BL2x220c". Archived from the original on 2008-08-29. Retrieved 2008-08-21.
  8. Sun Independent I/O
  9. HP Virtual Connect
  10. IBM BladeCenter HS21 Archived October 13, 2007, at the Wayback Machine
  11. "HP storage blade". Archived from the original on 2007-04-30. Retrieved 2007-04-18.
  12. Verari Storage Blade
  13. "Intel endorses industry-standard blade design". TechSpot. http://www.techspot.com/news/26376-intel-endorses-industrystandard-blade-design.html
  14. "Dell calls for blade server standards". news.cnet.com. Archived from the original on 2011-12-26.
  15. The Register. 7 April 2009. https://www.theregister.co.uk/2009/04/07/ssi_blade_specs/
  16. PICMG specifications Archived 2007-01-09 at the Wayback Machine
  17. US 6411506, Hipp, Christopher & Kirkeby, David, "High density web server chassis system and method", published 2002-06-25, assigned to RLX Technologies
  18. "RLX helps data centres with switch to blades". ARN. October 8, 2001. Retrieved 2011-07-30.
  19. "HP Will Acquire RLX To Bolster Blades". www.informationweek.com. October 3, 2005. Archived from the original on January 3, 2013. Retrieved 2009-07-24.
  20. "Worldwide Server Market Revenues Increase 12.1% in First Quarter as Market Demand Continues to Improve, According to IDC" (Press release). IDC. 2011-05-24. Archived from the original on 2011-05-26. Retrieved 2015-03-20.
  21. "Transitioning x86 to Lenovo". IBM.com. Archived from the original on April 5, 2014. Retrieved 27 September 2014.
  22. "Cisco Unleashes the Power of Virtualization with Industry's First Unified Computing System". Press release. March 16, 2009. Archived from the original on March 21, 2009. Retrieved March 27, 2017.