| Launched | Q4 2024 |
| --- | --- |
| Designed by | Nvidia |
| Manufactured by | TSMC |
| Fabrication process | TSMC 4NP (datacenter) [1]; TSMC 4N (consumer) [2] |
| Codename(s) | GB100, GB20x |
| Product series (desktop) | GeForce RTX 50 series |
| **Specifications** | |
| Memory support | GDDR7 (consumer); HBM3e (datacenter) |
| PCIe support | PCIe 5.0 (consumer); PCIe 6.0 (datacenter) |
| **Supported Graphics APIs** | |
| DirectX | DirectX 12 Ultimate (Feature Level 12_2) |
| Direct3D | Direct3D 12 |
| Shader Model | Shader Model 6.8 |
| OpenCL | OpenCL 3.0 |
| OpenGL | OpenGL 4.6 |
| Vulkan | Vulkan 1.4 |
| **Supported Compute APIs** | |
| CUDA | Compute Capability 10.x (datacenter); 12.x (consumer) |
| DirectCompute | Yes |
| **Media Engine** | |
| Encoder(s) supported | NVENC |
| **History** | |
| Predecessor | Ada Lovelace (consumer); Hopper (datacenter) |
| Successor | Rubin |
Blackwell is a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to the Hopper and Ada Lovelace microarchitectures.
The architecture is named after statistician and mathematician David Blackwell. Its name was leaked in 2022, and the B40 and B100 accelerators were confirmed in October 2023 via an official Nvidia roadmap shown during an investor presentation. [3] Blackwell was officially announced at Nvidia's GTC 2024 keynote on March 18, 2024. [4]
In March 2022, Nvidia announced Hopper, its datacenter architecture for AI accelerators. Demand for Hopper products remained high throughout the AI boom of 2023. [5] Due to shortages and high demand, the lead time from order to delivery of H100-based servers ran between 36 and 52 weeks. [6] Nvidia reportedly sold 500,000 Hopper-based H100 accelerators in Q3 2023 alone. [6] Nvidia's AI dominance with Hopper products pushed its market capitalization above $2 trillion, behind only Microsoft and Apple. [7]
The Blackwell architecture is named after American mathematician David Blackwell, known for his contributions to game theory, probability theory, information theory, and statistics, fields that have influenced or are directly applied in the design and training of transformer-based generative AI models. Blackwell was the first African American scholar inducted into the National Academy of Sciences. [8]
In Nvidia's October 2023 investor presentation, its datacenter roadmap was updated to reference the B100 and B40 accelerators and the Blackwell architecture. [9] [10] Previously, the successor to Hopper had simply appeared on roadmaps as "Hopper-Next". The updated roadmap also emphasized a move from a two-year release cadence for datacenter products to yearly releases targeting both x86 and ARM systems.
At the GPU Technology Conference (GTC) on March 18, 2024, Nvidia officially announced the Blackwell architecture, focusing on its B100 and B200 datacenter accelerators and associated products, such as the eight-GPU HGX B200 board and the 72-GPU NVL72 rack-scale system. [11] Nvidia CEO Jensen Huang said that with Blackwell, "we created a processor for the generative AI era", and emphasized the overall Blackwell platform, which pairs Blackwell accelerators with Nvidia's ARM-based Grace CPU. [12] [13] Nvidia touted endorsements of Blackwell from the CEOs of Google, Meta, Microsoft, OpenAI and Oracle. [13] The keynote did not mention gaming.
In October 2024, it was reported that a design flaw in the Blackwell architecture had been found and fixed in collaboration with TSMC. [14] According to Huang, the design flaw was "functional" and "caused the yield[s] to be low". [15] By November 2024, Morgan Stanley was reporting that "the entire 2025 production" of Blackwell silicon was "already sold out". [16]
During its CES 2025 keynote, Nvidia announced that foundation models for Blackwell would include models from Black Forest Labs (Flux), Meta AI, Mistral AI, and Stability AI. [17]
Blackwell is an architecture designed for both datacenter compute and for gaming and workstation applications, with dedicated dies for each market.
Blackwell is fabricated on TSMC's custom 4NP node, an enhancement of the 4N node used for the Hopper and Ada Lovelace architectures. The Nvidia-specific 4NP process likely adds metal layers to standard TSMC N4P technology. [18] The GB100 die contains 104 billion transistors, a 30% increase over the 80 billion transistors in the previous-generation Hopper GH100 die. [19] Because Blackwell cannot reap the benefits of a major process-node advancement, it must achieve its power-efficiency and performance gains through underlying architectural changes. [20]
The GB100 die is at the reticle limit of semiconductor fabrication, [21] the maximum die area that a lithography machine can pattern in a single exposure. Nvidia had already come close to TSMC's reticle limit with GH100's 814 mm² die. To avoid being constrained by die size, Nvidia's B100 accelerator uses two GB100 dies in a single package, connected by a 10 TB/s link that Nvidia calls the NV-High Bandwidth Interface (NV-HBI), which is based on the NVLink 5.0 protocol. Nvidia CEO Jensen Huang claimed in a CNBC interview that Nvidia had spent around $10 billion on research and development for Blackwell's NV-HBI die interconnect. Veteran semiconductor engineer Jim Keller, who had worked on AMD's K7, K12 and Zen architectures, criticized this figure and claimed the same outcome could be achieved for $1 billion by using Ultra Ethernet rather than the proprietary NVLink system. [22] The two connected GB100 dies behave like a single large monolithic piece of silicon with full cache coherency between them. [23] The dual-die package totals 208 billion transistors [21] and is placed on a silicon interposer produced using TSMC's CoWoS-L 2.5D packaging technique. [24]
On the consumer side, Blackwell's largest die, GB202, measures 750 mm², roughly 20% larger than AD102, Ada Lovelace's largest die. [25] GB202 contains a total of 24,576 CUDA cores, 33% more than the 18,432 CUDA cores in AD102. GB202 is the largest consumer die Nvidia has designed since the 754 mm² TU102 die of 2018, based on the Turing microarchitecture. The gap between GB202 and GB203 has also widened considerably compared to previous generations: GB202 features more than double the CUDA cores of GB203, which was not the case with AD102 over AD103 (see the ratio check below).
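These comparisons are simple ratios of the figures quoted in this article; a quick sanity check in Python:

```python
# Ratio checks for the GB202 comparisons above, using core counts from this article.
gb202_cores = 24_576  # GB202 CUDA cores
ad102_cores = 18_432  # AD102 CUDA cores (Ada Lovelace)
gb203_cores = 10_752  # GB203 CUDA cores

print(f"GB202 over AD102: {gb202_cores / ad102_cores - 1:.1%} more cores")  # ~33.3%
print(f"GB202 over GB203: {gb202_cores / gb203_cores:.2f}x the cores")      # ~2.29x
```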
Blackwell adds CUDA Compute Capability 10.0 (datacenter GB100) and Compute Capability 12.0 (consumer GB20x dies). [26]
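A program can check at runtime whether it is running on a Blackwell-class GPU by querying the device's compute capability. The following minimal sketch uses PyTorch's CUDA bindings as one of several ways to do this; any CUDA device-query API would serve equally well:

```python
# Minimal sketch: query the compute capability of the active CUDA device
# and flag Blackwell parts (10.x for datacenter, 12.x for consumer dies).
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    name = torch.cuda.get_device_name()
    if major in (10, 12):
        print(f"{name}: compute capability {major}.{minor} (Blackwell)")
    else:
        print(f"{name}: compute capability {major}.{minor} (pre-Blackwell)")
else:
    print("No CUDA device available")
```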
The Blackwell architecture introduces fifth-generation Tensor Cores for AI compute and floating-point calculations. In the datacenter, Blackwell adds native support for sub-8-bit data types, including the Open Compute Project (OCP) community-defined MXFP6 and MXFP4 microscaling formats, to improve efficiency and accuracy in low-precision computations. [27] [28] [29] [30] [31] The previous Hopper architecture introduced the Transformer Engine, software that facilitates quantizing higher-precision models (e.g., FP32) to lower-precision formats, for which the hardware has greater throughput. Blackwell's second-generation Transformer Engine adds support for MXFP4 and MXFP6. Using 4-bit data allows greater efficiency and throughput for generative AI training and model inference. Nvidia claims 20 petaflops of FP4 compute (excluding the 2x gain the company claims for sparsity) for the dual-GPU GB200 superchip. [32]
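The idea behind the OCP microscaling formats is that a block of 32 values shares a single power-of-two scale factor, while each value is stored in a tiny element format such as FP4 (E2M1), whose representable magnitudes are {0, 0.5, 1, 1.5, 2, 3, 4, 6}. The NumPy sketch below illustrates that arithmetic under a simplified scale-selection and rounding rule; it follows the MX block structure but is an illustration, not Nvidia's hardware implementation:

```python
# Illustrative sketch of OCP-style microscaling (MX) quantization:
# each block of 32 values shares one power-of-two scale, and each value
# is rounded to the nearest FP4 (E2M1) code. Simplified, not hardware-exact.
import numpy as np

FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # positive FP4 magnitudes
BLOCK = 32  # block size defined by the OCP MX specification

def mx_quantize(x: np.ndarray) -> np.ndarray:
    """Round each 32-element block of x to shared-scale FP4 values."""
    out = np.empty_like(x, dtype=np.float64)
    for start in range(0, len(x), BLOCK):
        block = x[start:start + BLOCK]
        amax = np.max(np.abs(block))
        # Power-of-two scale chosen so the block's largest magnitude
        # fits within FP4's maximum representable value (6.0).
        scale = 2.0 ** np.ceil(np.log2(amax / 6.0)) if amax > 0 else 1.0
        scaled = block / scale
        # Nearest representable FP4 magnitude; sign restored afterwards.
        idx = np.abs(np.abs(scaled)[:, None] - FP4_E2M1).argmin(axis=1)
        out[start:start + BLOCK] = np.sign(scaled) * FP4_E2M1[idx] * scale
    return out

x = np.random.randn(64)
xq = mx_quantize(x)
print("max abs quantization error:", np.max(np.abs(x - xq)))
```

Storing one shared scale per 32 elements is what distinguishes the MX formats from plain FP4: the per-block exponent recovers much of the dynamic range that a 4-bit element format loses on its own.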
Datacenter

| Die | GB100 |
| --- | --- |
| Variant(s) | |
| Release date | Dec 2024 |
| CUDA Cores | |
| TMUs | |
| ROPs | |
| RT Cores | |
| Tensor Cores | |
| Streaming Multiprocessors | |
| L1 cache | |
| L2 cache | |
| Memory interface | 8192-bit |
| Die size | |
| Transistor count | 104 bn. |
| Transistor density | |
| Package socket | SXM6 |
| Products | B100, B200 |
Consumer

| Die | GB202 | GB203 | GB205 | GB206 | GB207 |
| --- | --- | --- | --- | --- | --- |
| Variant(s) | GB202-400-A1 | GB203-200-A1, GB203-400-A1 | GB205-300-A1 | | |
| Release date | Jan 30, 2025 | Jan 30, 2025 | Feb 2025 | Mar 2025 | TBA |
| CUDA Cores | 24,576 | 10,752 | 6,400 | 4,608 | 2,560 |
| TMUs | 768 | 336 | 200 | 144 | 80 |
| ROPs | 192 | 112 | 80 | 48 | 32 |
| RT Cores | 192 | 84 | 50 | 36 | 20 |
| Tensor Cores | 768 | 336 | 200 | 144 | 80 |
| SMs | 192 | 84 | 50 | 36 | 20 |
| GPCs | 12 | 7 | 5 | 3 | 2 |
| L1 cache | 24 MB | 10.5 MB | 6.25 MB | 4.5 MB | 2.5 MB |
| L2 cache | 128 MB | 64 MB | 48 MB | 32 MB | 32 MB |
| Memory interface | 512-bit | 256-bit | 192-bit | 128-bit | 128-bit |
| Die size | 750 mm² | 378 mm² | 263 mm² | | |
| Transistor count | 92.2 bn. | 45.6 bn. | 31.1 bn. | | |
| Transistor density | 122.6 MTr/mm² | 120.6 MTr/mm² | 118.3 MTr/mm² | | |
| Products (consumer, desktop) | RTX 5090, RTX 5090 D | RTX 5080, RTX 5070 Ti | RTX 5070 | | |
| Products (consumer, mobile) | | RTX 5090 Laptop, RTX 5080 Laptop | RTX 5070 Ti Laptop | RTX 5070 Laptop | |
| Products (workstation, desktop) | | | | | |
| Products (workstation, mobile) | | | | | |