IBM BladeCenter

BladeCenter E front side: 8 blade servers (HS20) followed by 6 empty slots

Also known as: IBM eServer BladeCenter (2002–2005)
Developer: IBM
Type: Blade server
Release date: 2002
Discontinued: 2012
CPU: x86 (HS/LS series), POWER (JS/PS series), Cell (QS series)
Successor: IBM Flex System

The IBM BladeCenter was IBM's blade server architecture until it was replaced by Flex System in 2012. IBM's x86 server division was later sold to Lenovo in 2014. [1]

BladeCenter E back side, showing on the left two FC switches and two Ethernet switches; on the right side, a management module with VGA and PS/2 keyboard and mouse cables connected.

The Magerit supercomputer (CeSViMa) has 86 BladeCenters (6 BladeCenter E chassis on each computing rack).

History

Introduced in 2002, based on engineering work started in 1999, the IBM eServer BladeCenter was relatively late to the blade server market. It differed from prior offerings in that it offered a range of x86 Intel server processors and input/output (I/O) options.

The naming was changed to IBM BladeCenter in 2005. In February 2006, IBM introduced the BladeCenter H with switch capabilities for 10 Gigabit Ethernet and InfiniBand 4X.

A web site called Blade.org was available for the blade computing community through about 2009. [2]

In 2012, the replacement Flex System was introduced.

Enclosures

IBM BladeCenter (E)

The original IBM BladeCenter was later marketed as BladeCenter E. [3] Power supplies were upgraded over the life of the chassis, from the original 1,200 W to 1,400, 1,800, 2,000 and 2,320 W versions.

The BladeCenter (E) was co-developed by IBM and Intel and included:

IBM BladeCenter T

BladeCenter T is the telecommunications version [4] of the original BladeCenter, available with either AC or DC (48 V) [5] power. It has 8 blade slots in 8U but uses the same switches and blades as the regular BladeCenter E. To maintain NEBS Level 3 / ETSI compliance, special Network Equipment-Building System (NEBS)-compliant blades are available.

IBM BladeCenter H

BladeCenter H

An upgraded BladeCenter design with high-speed fabric options, announced in 2006. [6] It is backwards compatible with older BladeCenter switches and blades. Features: [7]

IBM BladeCenter HT

BladeCenter HT chassis

BladeCenter HT is the telecommunications version [9] of the BladeCenter H, available with either AC or DC (48 V) power. It has 12 blade slots in 12U but uses the same switches and blades as the regular BladeCenter H. To maintain NEBS Level 3 / ETSI compliance, special NEBS-compliant blades are available.

IBM BladeCenter S

The BladeCenter S targets mid-sized customers by offering storage inside the chassis, so no separate external storage needs to be purchased. It can also use 120 V power in the North American market, allowing use outside the datacenter, although total chassis capacity is reduced when running at 120 V. Features: [10]

Blade nodes list

IBM BladeCenter blade nodes list
The original table spanned introduction years 2002–2012; models are listed here in order of introduction.

| Architecture | Wide | Sockets | Models |
|--------------|------|---------|--------|
| x86 (Intel) | 2 | 1–4 | HS40 |
| x86 (Intel) | 1 (2) | 2 (4) | HX5 |
| x86 (Intel) | 1 | 1–2 | HS20, HS21, HS22, HS23 |
| x86 (Intel) | 1 | 1 | HS12, HC10 |
| x86 (AMD) | 2 | 1–4 | LS41, LS42 |
| x86 (AMD) | 1 | 1–2 | LS20, LS21 |
| POWER | 2 | 4 | JS43 Express, [11] PS704 |
| POWER | 2 | 2 | PS702 |
| POWER | 1 | 2 | JS22, [12] JS23, [11] PS703 |
| POWER | 1 | 1 | JS12, [13] PS700, PS701 |
| PowerPC | 1 | 2 | JS20, [14] JS21 |
| Cell | 2 | 2 | QS20 [15] |
| Cell | 1 | 2 | QS21, [16] QS22 [17] |
| UltraSPARC | 1 | 1 | 2BC |
| Network processor | 1 | 1 | PN41 |

Intel based

Modules based on x86 processors from Intel.

HS12

(2008) Features:

HS20

(2002–2006) Features:

Inside of IBM HS20 blade. Two 2.5 inch disk drive bays are unoccupied.

HS21

(2007–2008) This model can use the High-speed IO option of the BladeCenter H, but is backwards-compatible with the regular BladeCenter. Features:

HS21 XM

(2007–2008) This model can use the High-speed IO option of the BladeCenter H, but is backwards compatible with the regular BladeCenter. Features:

HS22

(2009–2011) Features:

HS22v

(2010–2011) Features are very similar to HS22 but:

HS23

(2012) Features:

HS23E

(2012) Features:

HS40

(2004) Features:

HC10

(2008) This blade model is targeted at the workstation market. Features:

HX5

(2010–2011) This blade model is targeted at the server virtualization market. Features:

AMD based

Modules based on x86 processors from AMD.

LS20

(2005-2006) Features:

LS21

Inside of IBM LS21 blade. The small circuit board visible on the bottom right is an optional Fibre Channel daughter card.

(2006) This model can use the high-speed I/O of the BladeCenter H, but is also backwards compatible with the regular BladeCenter. Features:

LS22

(2008) Upgraded model of LS21. Features:

LS41

(2006–2007) This model can use the High-speed IO option of the BladeCenter H, but is backwards compatible with the regular BladeCenter. Features:

LS42

(2008–2009) Upgraded model of LS41. Features:

Power based

Modules based on PowerPC- or Power ISA-based processors from IBM.

JS20

(2006) Features: [18]

JS21

(2006) This model can use the high-speed I/O option of the BladeCenter H, but is backwards compatible with the regular BladeCenter. Features:

JS22

(2009) Features:

JS23

(2009) Features:

JS43 Express

Features:

JS12 Express

Features:

PS700

Branded as part of IBM Power Systems. Features:

PS701

Features are very similar to PS700, but

PS702

Effectively two PS701 blades tied together back-to-back, forming a double-wide blade.

PS703

Features are very similar to PS701, but

PS704

Effectively two PS703 blades tied together back-to-back, forming a double-wide blade.

Cell based

Modules based on Cell processors from IBM.

QS20

Features:

QS21

Features:

QS22

Features:

UltraSPARC based: 2BC

Themis Computer announced a blade around 2008. It ran the Sun Solaris operating system from Sun Microsystems. Each module had one UltraSPARC T2 processor with 64 threads at 1.2 GHz and up to 32 GB of DDR2 SDRAM memory. [20]

Advanced network: PN41

Developed in conjunction with CloudShield, features: [21]

Modules

Switch modules

The BladeCenter can have a total of four switch modules, but two of the switch module bays can take only an Ethernet switch or Ethernet pass-through. To use the other switch module bays, a daughtercard must be installed on each blade that needs it, providing the required SAN, Ethernet, InfiniBand or Myrinet function. Mixing different types of daughtercards in the same BladeCenter chassis is not allowed.
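The bay and daughtercard rules above can be modelled as a small validation sketch. This is purely illustrative (the function name, bay numbering, and fabric strings are hypothetical, not any IBM tooling):

```python
# Hypothetical model of the BladeCenter switch-bay rules described above:
# bays 1-2 take only Ethernet modules; bays 3-4 serve whatever fabric the
# blades' daughtercards provide, and daughtercard types cannot be mixed.
ETHERNET_ONLY_BAYS = {1, 2}
FABRICS = {"ethernet", "san", "infiniband", "myrinet"}

def validate_chassis(bays: dict[int, str], daughtercards: list[str]) -> bool:
    """bays maps bay number (1-4) to the fabric of the installed module."""
    for bay, fabric in bays.items():
        if fabric not in FABRICS:
            return False
        if bay in ETHERNET_ONLY_BAYS and fabric != "ethernet":
            return False
    # Mixing different daughtercard types in one chassis is not allowed.
    if len(set(daughtercards)) > 1:
        return False
    # A non-Ethernet module in bays 3-4 requires a matching daughtercard.
    for bay, fabric in bays.items():
        if bay not in ETHERNET_ONLY_BAYS and fabric != "ethernet":
            if not daughtercards or daughtercards[0] != fabric:
                return False
    return True

# Two Ethernet switches plus a SAN switch, blades carrying FC daughtercards:
print(validate_chassis({1: "ethernet", 2: "ethernet", 3: "san"}, ["san", "san"]))  # True
```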

Gigabit Ethernet

Gigabit Ethernet switch modules were produced by IBM, Nortel, and Cisco Systems. BLADE Network Technologies also produced switches and was later purchased by IBM. In all cases the internal fabric between the blades is non-blocking. External Gigabit Ethernet ports vary from four to six and can be either copper or optical fiber.

Storage Area Network

A variety of SAN switch modules have been produced by QLogic, Cisco, McData (acquired by Brocade) and Brocade, with speeds of 1, 2, 4 and 8 Gbit/s Fibre Channel. The speed from the SAN switch to the blade is determined by the lowest common denominator of the blade HBA daughtercard and the SAN switch. External port counts vary from two to six, depending on the switch module.
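The lowest-common-denominator rule can be sketched in a few lines (a hypothetical helper; the rate list follows the Fibre Channel generations named above):

```python
# Fibre Channel link negotiation: the link runs at the fastest rate that
# both the blade HBA daughtercard and the SAN switch support, i.e. the
# lower of the two endpoints' maximum rates.
FC_RATES_GBIT = (1, 2, 4, 8)  # generations mentioned in the text

def negotiated_fc_speed(hba_max_gbit: int, switch_max_gbit: int) -> int:
    """Return the effective link rate: the lower of the two endpoints."""
    speed = min(hba_max_gbit, switch_max_gbit)
    if speed not in FC_RATES_GBIT:
        raise ValueError(f"unsupported FC rate: {speed} Gbit/s")
    return speed

# A 4 Gbit/s HBA behind an 8 Gbit/s switch still links at 4 Gbit/s.
print(negotiated_fc_speed(4, 8))  # 4
```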

InfiniBand

An InfiniBand switch module was produced by Cisco. The speed from the blade InfiniBand daughtercard to the switch is limited to IB 1X (2.5 Gbit/s). Externally the switch has one IB 4X and one IB 12X port. The IB 12X port can be split into three IB 4X ports, giving a total of four IB 4X ports and a theoretical external bandwidth of 40 Gbit/s.
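The port arithmetic above can be checked with a short sketch (names are illustrative; IB 1X SDR is taken as 2.5 Gbit/s per lane, as stated in the text):

```python
# External-port arithmetic for the Cisco InfiniBand switch module above:
# one native 4X port plus one 12X port that can split into three 4X ports.
IB_1X_GBIT = 2.5  # signalling rate of one IB 1X SDR lane

def total_4x_ports(native_4x: int, native_12x: int, split_12x: bool) -> int:
    """Count usable IB 4X ports; each 12X port splits into three 4X ports."""
    return native_4x + (3 * native_12x if split_12x else 0)

ports = total_4x_ports(native_4x=1, native_12x=1, split_12x=True)
bandwidth_gbit = ports * 4 * IB_1X_GBIT  # 4 lanes per 4X port

print(ports, bandwidth_gbit)  # 4 40.0
```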

Pass-through

Two kinds of pass-through module are available: copper pass-through and fibre pass-through. The copper pass-through can be used only with Ethernet, while the fibre pass-through can be used for Ethernet, SAN or Myrinet.

Bridge

Bridge modules are only compatible with BladeCenter H and BladeCenter HT. They function like Ethernet or SAN switches and bridge the traffic to InfiniBand. The advantage is that, from the operating system on the blade, everything appears normal (regular Ethernet or SAN connectivity), while inside the BladeCenter all traffic is routed over InfiniBand.

High-speed switch modules

High-speed switch modules are compatible only with the BladeCenter H and BladeCenter HT. A blade that needs the function must have a high-speed daughtercard installed. Different high-speed daughtercards cannot be mixed in the same BladeCenter chassis.

10 Gigabit Ethernet

A 10 Gigabit Ethernet switch module was available from BLADE Network Technologies. This allowed 10 Gbit/s connection to each blade, and to outside the BladeCenter.

InfiniBand 4X

There are several InfiniBand options:

  • A high-speed InfiniBand 4X SDR switch module from Cisco. This allows IB 4X connectivity to each blade. Externally the switch has two IB 4X ports and two IB 12X ports. The 12X ports can be split to three 4X ports, providing a total of eight IB 4X ports or a theoretical bandwidth of 80 Gbit. Internally between the blades, the switch is non-blocking.
  • A high-speed InfiniBand pass-through module to directly connect the blades to an external InfiniBand switch. This pass-through module is compatible with both SDR and DDR InfiniBand speeds.
  • A high-speed InfiniBand 4X QDR switch module from Voltaire (later acquired by Mellanox Technologies). This allows full IB 4X QDR connectivity to each blade. Externally the switch has 16 QSFP ports, all 4X QDR capable.

Roadrunner TriBlade (custom module)

A schematic description of the TriBlade module

The IBM Roadrunner supercomputer used a custom module called the TriBlade from 2008 through 2013. An expansion blade connects two QS22 modules with 8 GB RAM each via four PCIe x8 links (two links for each QS22) to an LS21 module with 16 GB RAM. It also provides outside connectivity via an InfiniBand 4X DDR adapter. This makes a total width of four slots for a single TriBlade. Three TriBlades fit into one BladeCenter H chassis. [22]
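The TriBlade numbers quoted above can be verified in a few lines. This is a sketch; the 14-slot capacity of the BladeCenter H chassis is an added assumption not stated in this section:

```python
# Checking the TriBlade figures from the description above.
QS22_PER_TRIBLADE = 2
RAM_PER_QS22_GB = 8
RAM_LS21_GB = 16
SLOTS_PER_TRIBLADE = 4    # "a total width of four slots"
BLADECENTER_H_SLOTS = 14  # assumed chassis capacity (not stated above)

ram_per_triblade_gb = QS22_PER_TRIBLADE * RAM_PER_QS22_GB + RAM_LS21_GB
triblades_per_chassis = BLADECENTER_H_SLOTS // SLOTS_PER_TRIBLADE

print(ram_per_triblade_gb, triblades_per_chassis)  # 32 3
```

The whole-number division matches the stated packing: three TriBlades per BladeCenter H chassis.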


References

  1. Kunert, Paul (23 January 2014). "It was inevitable: Lenovo stumps up $2.3bn for IBM System x server biz". channelregister.co.uk. The Register. Retrieved 23 January 2014.
  2. "Blade Server Information from Blade.org". Archived from the original on August 16, 2009. Retrieved July 18, 2013.
  3. "IBM BladeCenter E chassis specifications". IBM. 2007-02-05. Archived from the original on 2012-10-03.
  4. "IBM BladeCenter T chassis specifications". IBM. 2006-01-17. Archived from the original on 2012-11-05.
  5. "Connecting The Bladecenter T Type 8720 To Dc Power - IBM BladeCenter T 8720 Installation And User Manual [Page 34] | ManualsLib". www.manualslib.com. Retrieved 2020-12-27.
  6. "IBM BladeCenter H Chassis delivers high performance, extreme reliability, and ultimate flexibility". www-01.ibm.com. 2006-02-09. Retrieved 2020-12-27.
  7. "IBM BladeCenter H chassis specifications". IBM. 2008-10-07. Archived from the original on 2012-10-03.
  8. "IBM BladeCenter H および関連オプションの発表". www-01.ibm.com (in Japanese). 2008-11-19. Retrieved 2020-12-23.
  9. "IBM BladeCenter HT chassis specifications". IBM. 2008-01-26. Archived from the original on 2012-10-03.
  10. "IBM BladeCenter S chassis specifications". IBM. 2008-10-07. Archived from the original on 2012-10-03.
  11. 1 2 "IBM BladeCenter JS23 and JS43 Express servers" (PDF).
  12. "IBM BladeCenter JS22 server combines excellent processing power with the scalability, reliability". www-01.ibm.com. 2007-11-06. Retrieved 2020-12-27.
  13. "IBM BladeCenter JS12 Express server combines excellent processing power with the scalability, relia". www-01.ibm.com. 2008-04-02. Retrieved 2020-12-27.
  14. "IBM eServer BladeCenter JS20 -- Fast 2.2 GHz SMP processor brings more power to the BladeCenter". www-01.ibm.com. 2004-10-12. Retrieved 2020-12-27.
  15. "IBM BladeCenter QS20 blade with new Cell BE processor offers unique capabilities for". www-01.ibm.com. 2006-09-12. Retrieved 2020-12-27.
  16. "IBM BladeCenter QS21 boosts performance through innovative solutions for visually or numerically in". www-01.ibm.com. 2007-08-28. Retrieved 2020-12-27.
  17. "IBM BladeCenter QS22 Sales Guide" (PDF). May 2008.
  18. "Overview - IBM BladeCenter JS20". www.ibm.com. 2008-10-06. Retrieved 2020-12-25.
  19. 1 2 3 4 "IBM Knowledge Center". www.ibm.com. 7 May 2020. Retrieved 2020-12-18.
  20. "T2BC Blade Servers". Themis Computer. Archived from the original on June 5, 2008. Retrieved July 18, 2013.
  21. "IBM PN41 network blade". IBM. 2008-08-27. Archived from the original on 2012-11-05. Retrieved 2009-06-15.
  22. Ken Koch (March 13, 2008). "Roadrunner Platform Overview" (PDF). Los Alamos National Laboratory. Archived from the original (PDF) on October 23, 2013. Retrieved July 18, 2013.
  23. Montoya, Susan (March 30, 2013). "End of the Line for Roadrunner Supercomputer". The Associated Press. Archived from the original on April 2, 2015. Retrieved July 18, 2013.