Nvidia DGX

A rack containing five DGX-1 supercomputers

Nvidia DGX is a line of servers and workstations designed by Nvidia, aimed primarily at deep learning applications that use general-purpose computing on graphics processing units (GPGPU). These systems typically come in a rackmount format with high-performance x86 server CPUs on the motherboard.

The defining feature of a DGX system is its set of 4 to 8 Nvidia Tesla GPU modules, which are housed on an independent system board. The GPUs connect either through a version of the SXM socket or through a PCIe x16 slot. To manage the substantial thermal output, DGX units are equipped with heatsinks and fans designed to maintain safe operating temperatures.

This design makes DGX units well suited to computational tasks in artificial intelligence and machine learning.

Models

Pascal - Volta

DGX-1

DGX-1 servers feature eight GPUs based on Pascal or Volta daughter cards [1] with 128 GB of total HBM2 memory, connected by an NVLink mesh network. [2] The DGX-1 was announced on April 6, 2016. [3] All models are based on a dual-socket configuration of Intel Xeon E5 CPUs and are equipped with the following features.

  • 512 GB of DDR4-2133
  • Dual 10 Gb networking
  • 4 x 1.92 TB SSDs
  • 3200W of combined power supply capability
  • 3U Rackmount Chassis

The product line is intended to bridge the gap between GPUs and AI accelerators, offering features specific to deep learning workloads. [4] The initial Pascal-based DGX-1 delivered 170 teraflops of half-precision processing, [5] while the Volta-based upgrade increased this to 960 teraflops. [6]
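
These headline figures follow from simple per-GPU arithmetic. A minimal sketch of the scaling, assuming roughly 21.2 TFLOPS of FP16 per P100 and 120 TFLOPS of FP16 tensor throughput per V100 at DGX-1 clocks (the per-GPU values are assumptions drawn from published accelerator specifications, not from the cited sources):

```python
# Sketch: DGX-1 system throughput as per-GPU throughput x GPU count.
# Per-GPU figures are assumed from published specs, not official DGX numbers.
GPUS_PER_DGX1 = 8

p100_fp16_tflops = 21.2     # P100 half precision (non-tensor), assumed
v100_tensor_tflops = 120.0  # V100 FP16 tensor throughput at DGX-1 clocks, assumed

print(f"Pascal DGX-1: {GPUS_PER_DGX1 * p100_fp16_tflops:.0f} TFLOPS FP16")          # ~170
print(f"Volta DGX-1:  {GPUS_PER_DGX1 * v100_tensor_tflops:.0f} TFLOPS FP16 tensor") # ~960
```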

The DGX-1 was initially available only in the Pascal-based configuration, with the first-generation SXM socket. The later revision of the DGX-1 added support for first-generation Volta cards via the SXM-2 socket. Nvidia offered upgrade kits that allowed users with a Pascal-based DGX-1 to upgrade to a Volta-based DGX-1. [7] [8]

  • The Pascal-based DGX-1 has two variants: one with a 16-core Intel Xeon E5-2698 V3, and one with a 20-core E5-2698 V4. Pricing for the variant equipped with an E5-2698 V4 is unavailable; the Pascal-based DGX-1 with an E5-2698 V3 was priced at launch at $129,000. [9]
  • The Volta-based DGX-1 is equipped with an E5-2698 V4 and was priced at launch at $149,000. [9]

DGX Station

Designed as a turnkey deskside AI supercomputer, the DGX Station is a tower computer that can function completely independently of typical datacenter infrastructure such as cooling, redundant power, or 19-inch racks.

The DGX Station was first available with the following specifications. [10]

  • Four Volta-based Tesla V100 accelerators, each with 16 GB of HBM2 memory
  • 480 TFLOPS FP16
  • Single Intel Xeon E5-2698 v4 [11]
  • 256 GB DDR4
  • 4x 1.92 TB SSDs
  • Dual 10 Gb Ethernet

The DGX Station is water-cooled to manage the heat of almost 1500 W of system components, which allows it to stay under 35 dB of noise under load. [12] This, among other features, made the system a compelling purchase for customers without the infrastructure to run rackmount DGX systems, which can be loud, output a lot of heat, and take up a large area. This was Nvidia's first venture into bringing high-performance computing deskside, which has since remained a prominent marketing strategy for Nvidia. [13]

DGX-2

The successor of the Nvidia DGX-1 is the Nvidia DGX-2, which uses sixteen Volta-based V100 32 GB (second-generation) cards in a single unit. It was announced on March 27, 2018. [14] The DGX-2 delivers 2 petaflops and uses NVSwitch for high-bandwidth internal communication, with 512 GB of shared HBM2 memory for tackling massive datasets. It also carries 1.5 TB of DDR4 system memory, eight 100 Gb/s InfiniBand cards, and 30.72 TB of SSD storage, [15] all enclosed within a massive 10U rackmount chassis drawing up to 10 kW under maximum load. [16] The initial price for the DGX-2 was $399,000. [17]
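
These aggregates are consistent with sixteen V100 32 GB cards; a quick check, assuming 125 TFLOPS of FP16 tensor throughput and 32 GB of HBM2 per card (per-GPU figures taken from the accelerator comparison table below):

```python
# Sanity check of DGX-2 headline numbers from assumed per-V100-32GB figures.
gpus = 16
print(gpus * 125 / 1000, "PFLOPS FP16 tensor")  # 2.0 PFLOPS
print(gpus * 32, "GB HBM2")                     # 512 GB
```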

The DGX-2 differs from other DGX models in that it contains two separate GPU daughterboards, each with eight GPUs. These boards are connected by an NVSwitch system that allows for full bandwidth communication across all GPUs in the system, without additional latency between boards. [16]

A higher-performance variant of the DGX-2, the DGX-2H, was offered as well. The DGX-2H replaced the DGX-2's dual Intel Xeon Platinum 8168s with dual Intel Xeon Platinum 8174s. This upgrade does not increase core count per system, as both CPUs have 24 cores, nor does it enable any new functions, but it does increase the base frequency of the CPUs from 2.7 GHz to 3.1 GHz. [18] [19] [20]

Ampere

DGX A100 Server

Announced and released on May 14, 2020, the DGX A100 was the third generation of DGX server, including eight Ampere-based A100 accelerators. [21] Also included are 15 TB of PCIe Gen 4 NVMe storage, [22] 1 TB of RAM, and eight Mellanox-powered 200 Gb/s HDR InfiniBand ConnectX-6 NICs. The DGX A100 fits in a much smaller enclosure than its predecessor, the DGX-2, taking up only 6 rack units. [23]

The DGX A100 also moved to a 64-core AMD EPYC 7742 CPU, making it the first DGX server not built with an Intel Xeon CPU. The initial price for the DGX A100 server was $199,000. [21]

DGX Station A100

As the successor to the original DGX Station, the DGX Station A100 aims to fill the same niche: a quiet, efficient, turnkey cluster-in-a-box solution that can be purchased, leased, or rented by smaller companies or individuals who want to utilize machine learning. It follows many of the design choices of the original DGX Station, such as the tower orientation and single-socket CPU mainboard, while adding a refrigerant-based cooling system and keeping a reduced number of accelerators compared to the corresponding rackmount DGX A100 of the same generation. [13] The price for the DGX Station A100 320G is $149,000, and $99,000 for the 160G model. Nvidia also offers Station rental at roughly $9,000 per month through partners in the US (rentacomputer.com) and Europe (iRent IT Systems) to help reduce the cost of implementing these systems at a small scale. [24] [25]

The DGX Station A100 comes with two different configurations of the built-in A100.

  • Four Ampere-based A100 accelerators, each with 40 GB (HBM2) or 80 GB (HBM2e) of memory, for a total of 160 GB or 320 GB, giving the DGX Station A100 160G and 320G variants.
  • 2.5 PFLOPS FP16 (see the consistency check after this list)
  • Single 64 Core AMD EPYC 7742
  • 512 GB DDR4
  • 1 x 1.92 TB NVMe OS drive
  • 1 x 7.68 TB U.2 NVMe Drive
  • Dual port 10 Gb Ethernet
  • Single port 1 Gb BMC port
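
The 2.5 PFLOPS FP16 figure in the list above lines up with four A100s only if structured-sparsity throughput is assumed: each A100 delivers 312 TFLOPS of dense FP16 tensor compute, or 624 TFLOPS with 2:4 sparsity. A hedged check (the sparsity interpretation is an assumption, not stated in the cited sources):

```python
# DGX Station A100: 4 GPUs x 624 TFLOPS FP16 tensor (with 2:4 sparsity, assumed).
gpus, sparse_fp16_tflops = 4, 624
print(gpus * sparse_fp16_tflops / 1000, "PFLOPS FP16")  # 2.496, i.e. ~2.5 PFLOPS
```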

Hopper

DGX H100 Server

Announced March 22, 2022 [26] and planned for release in Q3 2022, [27] the DGX H100 is the fourth generation of DGX server, built with eight Hopper-based H100 accelerators, for a total of 32 PFLOPS of FP8 AI compute and 640 GB of HBM3 memory, an upgrade over the DGX A100's HBM2 memory. This upgrade also increases VRAM bandwidth to 3 TB/s. [28] The DGX H100 increases the rackmount size to 8U to accommodate the 700 W TDP of each H100 SXM card. The DGX H100 also has two 1.92 TB SSDs for operating system storage and 30.72 TB of solid-state storage for application data.
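
These aggregates again reduce to per-GPU arithmetic: eight 80 GB cards give 640 GB of HBM3, and the 32 PFLOPS FP8 figure matches eight H100s only with structured sparsity, at roughly 3.96 PFLOPS of sparse FP8 each (the per-GPU sparse figure is an assumption based on published H100 specifications):

```python
# DGX H100 aggregates from assumed per-H100 figures.
gpus = 8
print(gpus * 80, "GB HBM3")                          # 640 GB
print(round(gpus * 3.96, 1), "PFLOPS FP8 (sparse)")  # ~31.7, quoted as 32
```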

Another notable addition is the presence of two Nvidia BlueField-3 DPUs [29] and the upgrade to 400 Gb/s InfiniBand via Mellanox ConnectX-7 NICs, double the bandwidth of the DGX A100. The DGX H100 uses new 'Cedar Fever' cards, each with four ConnectX-7 400 Gb/s controllers, and two cards per system. This gives the DGX H100 3.2 Tb/s of fabric bandwidth across InfiniBand. [30]
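
The fabric bandwidth is simple to verify: two Cedar Fever cards, each carrying four 400 Gb/s ConnectX-7 controllers, yield eight 400 Gb/s links:

```python
# Aggregate InfiniBand fabric bandwidth of a DGX H100.
cards, controllers_per_card, gbps_per_controller = 2, 4, 400
print(cards * controllers_per_card * gbps_per_controller / 1000, "Tb/s")  # 3.2 Tb/s
```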

The DGX H100 has two Xeon Platinum 8480C Scalable CPUs (codenamed Sapphire Rapids) [31] and 2 terabytes of system memory. [32]

The DGX H100 was priced at £379,000 or ~$482,000 USD at release. [33]

DGX GH200

Announced in May 2023, the DGX GH200 connects 32 Nvidia Hopper Superchips into a single system comprising 256 H100 GPUs, 32 Grace Neoverse V2 72-core CPUs, 32 single-port OSFP ConnectX-7 VPI adapters with 400 Gb/s InfiniBand, and 16 dual-port BlueField-3 VPI adapters with 200 Gb/s Mellanox connectivity. The DGX GH200 is designed to handle terabyte-class models for massive recommender systems, generative AI, and graph analytics, offering 19.5 TB of shared memory with linear scalability for giant AI models. [34]

DGX Helios

Announced in May 2023, the DGX Helios supercomputer features four DGX GH200 systems, interconnected with Nvidia Quantum-2 InfiniBand networking to supercharge data throughput for training large AI models, for a total of 1,024 H100 GPUs.

Blackwell

DGX GB200

Announced in March 2024, the GB200 NVL72 connects 36 Grace Neoverse V2 72-core CPUs and 72 B100 GPUs in a liquid-cooled, rack-scale design whose 72-GPU NVLink domain acts as a single massive GPU. The Nvidia DGX GB200 offers 13.5 TB of HBM3e shared memory with linear scalability for giant AI models, less than its predecessor, the DGX GH200.
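
As a rough consistency check, assuming 192 GB of HBM3e per Blackwell GPU as in the comparison table below, 72 GPUs give about 13.8 TB; the quoted 13.5 TB presumably reflects usable rather than peak per-GPU capacity (an assumption):

```python
# GB200 NVL72 HBM3e pool, assuming 192 GB per GPU (peak capacity).
gpus, gb_per_gpu = 72, 192
print(gpus * gb_per_gpu / 1000, "TB HBM3e")  # 13.824 TB vs. the quoted 13.5 TB
```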

DGX SuperPod

The DGX SuperPod is a high-performance turnkey supercomputer solution provided by Nvidia using DGX hardware. [35] It combines DGX compute nodes with fast storage and high-bandwidth networking to serve high-demand machine learning workloads. The Selene supercomputer, built and operated by Nvidia, is one example of a DGX SuperPod-based system.

Selene, built from 280 DGX A100 nodes, ranked 5th on the TOP500 list of the most powerful supercomputers at the time of its completion and has continued to rank highly. The same integration is available to any customer with minimal effort, and the newer Hopper-based SuperPod can scale to 32 DGX H100 nodes, for a total of 256 H100 GPUs and 64 x86 CPUs. This gives the complete SuperPod 20 TB of HBM3 memory, 70.4 TB/s of bisection bandwidth, and up to 1 exaFLOP of FP8 AI compute. [36] These SuperPods can then be further joined to create larger supercomputers.
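
The SuperPod totals decompose the same way; a sketch using per-GPU figures assumed from the H100 entry in the comparison table below:

```python
# Hopper SuperPod aggregates from 32 DGX H100 nodes (per-GPU figures assumed).
nodes, gpus_per_node, cpus_per_node = 32, 8, 2
gpus = nodes * gpus_per_node
print(gpus, "GPUs;", nodes * cpus_per_node, "x86 CPUs")     # 256 GPUs; 64 CPUs
print(gpus * 80 / 1000, "TB HBM3")                          # 20.48, quoted as 20 TB
print(round(gpus * 3.96 / 1000, 2), "EFLOPS FP8 (sparse)")  # ~1.01, quoted as 1 EFLOP
```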

The Eos supercomputer, designed, built, and operated by Nvidia, [37] [38] [39] was constructed of 18 H100-based SuperPods, totaling 576 DGX H100 systems, 500 Quantum-2 InfiniBand switches, and 360 NVLink switches. This allows Eos to deliver 18 EFLOPS of FP8 compute and 9 EFLOPS of FP16 compute, making Eos the 5th fastest AI supercomputer in the world according to TOP500 (November 2023 edition).
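
Eos scales the same arithmetic up; a sketch under the same assumed per-GPU sparse tensor throughput:

```python
# Eos: 576 DGX H100 systems of 8 GPUs each (per-GPU figures assumed).
systems, gpus_per_system = 576, 8
gpus = systems * gpus_per_system                             # 4608 H100 GPUs
print(round(gpus * 3.96 / 1000, 1), "EFLOPS FP8 (sparse)")   # ~18.2, quoted as 18
print(round(gpus * 1.98 / 1000, 1), "EFLOPS FP16 (sparse)")  # ~9.1, quoted as 9
```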

As Nvidia does not produce storage devices or systems, Nvidia SuperPods rely on partners to provide high-performance storage. Current storage partners for Nvidia SuperPods are Dell EMC, DDN, HPE, IBM, NetApp, Pavilion Data, and VAST Data. [40]

Accelerators

Comparison of accelerators used in DGX: [41] [42] [43]

B200 (Blackwell)
  • Die: GB200, 208 billion transistors, TSMC 4NP process, 1000 W TDP
  • Memory: 192 GB HBM3e at 8 Gbit/s on an 8192-bit bus, 8 TB/s bandwidth
  • Dense tensor compute: INT8 4.5 POPS, FP4 9 PFLOPS, FP16/bfloat16 2.2 PFLOPS, TF32 1.1 PFLOPS, FP64 40 TFLOPS
  • NVLink: 1.8 TB/s

B100 (Blackwell)
  • Die: GB100, 208 billion transistors, TSMC 4NP process, 700 W TDP
  • Memory: 192 GB HBM3e at 8 Gbit/s on an 8192-bit bus, 8 TB/s bandwidth
  • Dense tensor compute: INT8 3.5 POPS, FP4 7 PFLOPS, FP16/bfloat16 1.8 PFLOPS, TF32 900 TFLOPS, FP64 30 TFLOPS
  • NVLink: 1.8 TB/s

H200 (Hopper, SXM5)
  • Die: GH100, 80 billion transistors, 814 mm², TSMC 4N process, 700 W TDP
  • Cores and clocks: 16896 FP32 CUDA cores (mixed INT32/FP32), 4608 FP64 cores (excl. tensor), 1980 MHz boost
  • Cache: 25344 KB L1 (192 KB × 132), 51200 KB L2
  • Memory: 141 GB HBM3e at 6.3 Gbit/s on a 6144-bit bus, 4.8 TB/s bandwidth
  • Non-tensor compute: FP32 67 TFLOPS, FP64 34 TFLOPS
  • Dense tensor compute: INT8 1.98 POPS, TF32 989 TFLOPS
  • NVLink: 900 GB/s

H100 (Hopper, SXM5)
  • Die: GH100, 80 billion transistors, 814 mm², TSMC 4N process, 700 W TDP
  • Cores and clocks: 16896 FP32 CUDA cores (mixed INT32/FP32), 4608 FP64 cores (excl. tensor), 1980 MHz boost
  • Cache: 25344 KB L1 (192 KB × 132), 51200 KB L2
  • Memory: 80 GB HBM3 at 5.2 Gbit/s on a 5120-bit bus, 3.35 TB/s bandwidth
  • Non-tensor compute: FP32 67 TFLOPS, FP64 34 TFLOPS
  • Dense tensor compute: INT8 1.98 POPS, FP16/bfloat16 990 TFLOPS, TF32 495 TFLOPS, FP64 67 TFLOPS
  • NVLink: 900 GB/s

A100 80GB (Ampere, SXM4)
  • Die: GA100, 54.2 billion transistors, 826 mm², TSMC 7N process, 400 W TDP
  • Cores and clocks: 6912 FP32 CUDA cores (mixed INT32/FP32), 3456 FP64 cores (excl. tensor), 1410 MHz boost
  • Cache: 20736 KB L1 (192 KB × 108), 40960 KB L2
  • Memory: 80 GB HBM2e at 3.2 Gbit/s on a 5120-bit bus, 1.52 TB/s bandwidth
  • Non-tensor compute: FP32 19.5 TFLOPS, FP64 9.7 TFLOPS, FP16 78 TFLOPS, INT32 19.5 TOPS
  • Dense tensor compute: INT8 624 TOPS, FP16/bfloat16 312 TFLOPS, TF32 156 TFLOPS, FP64 19.5 TFLOPS
  • NVLink: 600 GB/s

A100 40GB (Ampere, SXM4)
  • Same as the A100 80GB except: 40 GB HBM2 at 2.4 Gbit/s on a 5120-bit bus, 1.52 TB/s bandwidth

V100 32GB (Volta, SXM3)
  • Die: GV100, 21.1 billion transistors, 815 mm², TSMC 12 nm FFN process, 350 W TDP
  • Cores and clocks: 5120 FP32 CUDA cores, 2560 FP64 cores (excl. tensor), 5120 INT32 cores, 1530 MHz boost
  • Cache: 10240 KB L1 (128 KB × 80), 6144 KB L2
  • Memory: 32 GB HBM2 at 1.75 Gbit/s on a 4096-bit bus, 900 GB/s bandwidth
  • Non-tensor compute: FP32 15.7 TFLOPS, FP64 7.8 TFLOPS, FP16 31.4 TFLOPS, INT8 62 TOPS, INT32 15.7 TOPS
  • Dense tensor compute: FP16 125 TFLOPS
  • NVLink: 300 GB/s

V100 16GB (Volta, SXM2)
  • Same as the V100 32GB except: 16 GB HBM2 and 300 W TDP

P100 (Pascal, SXM/SXM2)
  • Die: GP100, 15.3 billion transistors, 610 mm², TSMC 16 nm FinFET+ process, 300 W TDP
  • Cores and clocks: 3584 mixed INT32/FP32 cores, 1792 FP64 cores (excl. tensor), 1480 MHz boost
  • Cache: 1344 KB L1 (24 KB × 56), 4096 KB L2
  • Memory: 16 GB HBM2 at 1.4 Gbit/s on a 4096-bit bus, 720 GB/s bandwidth
  • Non-tensor compute: FP32 10.6 TFLOPS, FP64 5.3 TFLOPS, FP16 21.2 TFLOPS
  • NVLink: 160 GB/s


References

  1. "nvidia dgx-1" (PDF). Retrieved 15 November 2023.
  2. "inside pascal". 5 April 2016. Eight GPU hybrid cube mesh architecture with NVLink
  3. "NVIDIA Unveils the DGX-1 HPC Server: 8 Teslas, 3U, Q2 2016".
  4. "deep learning supercomputer". 5 April 2016.
  5. "DGX-1 deep learning system" (PDF). NVIDIA DGX-1 Delivers 75X Faster Training...Note: Caffe benchmark with AlexNet, training 1.28M images with 90 epochs
  6. "DGX Server". DGX Server. Nvidia. Retrieved 7 September 2017.
  7. "Volta architecture whitepaper". nvidia.com.
  8. "User Guide". nvidia.com.
  9. Oh, Nate. "NVIDIA Ships First Volta-based DGX Systems". www.anandtech.com. Retrieved 24 March 2022.
  10. "CompecTA | NVIDIA DGX Station Deep Learning System". www.compecta.com. Retrieved 24 March 2022.
  11. "Intel® Xeon® Processor E5-2698 v4 (50M Cache, 2.20 GHz) - Product Specifications". Intel. Retrieved 19 August 2023.
  12. "Supercomputer datasheet". nvidia.com.
  13. 1 2 "NVIDIA DGX Platform". NVIDIA. Retrieved 15 November 2023.
  14. "Nvidia launches the DGX-2 with two petaFLOPS of power". 28 March 2018.
  15. "NVIDIA DGX -2 for Complex AI Challenges". NVIDIA. Retrieved 24 March 2022.
  16. Cutress, Ian. "NVIDIA's DGX-2: Sixteen Tesla V100s, 30 TB of NVMe, only $400K". www.anandtech.com. Retrieved 28 April 2022.
  17. "The NVIDIA DGX-2 is the world's first 2-petaflop single server supercomputer". www.hardwarezone.com.sg. Retrieved 24 March 2022.
  18. "DGX-2 User Guide". nvidia.com.
  19. "Product Specifications". www.intel.com. Retrieved 28 April 2022.
  20. "Product Specifications". www.intel.com. Retrieved 28 April 2022.
  21. Ryan Smith (14 May 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
  22. Tom Warren; James Vincent (14 May 2020). "Nvidia's first Ampere GPU is designed for data centers and AI, not your PC". The Verge.
  23. "Boston Labs welcomes the DGX A100 to our remote testing portfolio!". www.boston.co.uk. Retrieved 24 March 2022.
  24. Mayank Sharma (13 April 2021). "Nvidia will let you rent its mini supercomputers". TechRadar. Retrieved 31 March 2022.
  25. Jarred Walton (12 April 2021). "Nvidia Refreshes Expensive, Powerful DGX Station 320G and DGX Superpod". Tom's Hardware. Retrieved 28 April 2022.
  26. NVIDIA Newsroom. "NVIDIA Announces DGX H100 Systems – World's Most Advanced Enterprise AI Infrastructure". Retrieved 24 March 2022.
  27. Albert (24 March 2022). "NVIDIA H100: Overview, Specs, & Release Date | SeiMaxim". www.seimaxim.com. Retrieved 22 August 2022.
  28. Walton, Jarred (22 March 2022). "Nvidia Reveals Hopper H100 GPU With 80 Billion Transistors". Tom's Hardware. Retrieved 24 March 2022.
  29. NVIDIA Newsroom. "NVIDIA Announces DGX H100 Systems – World's Most Advanced Enterprise AI Infrastructure". Retrieved 19 April 2022.
  30. servethehome (14 April 2022). "NVIDIA Cedar Fever 1.6Tbps Modules Used in the DGX H100". ServeTheHome. Retrieved 19 April 2022.
  31. "NVIDIA DGX H100 Datasheet". www.nvidia.com. Retrieved 2 August 2023.
  32. "NVIDIA DGX H100". NVIDIA. Retrieved 24 March 2022.
  33. "Every NVIDIA DGX benchmarked & power efficiency & value compared, including the latest DGX H100". Retrieved 1 March 2023.
  34. "NVIDIA DGX GH200". NVIDIA. Retrieved 24 March 2022.
  35. "NVIDIA SuperPOD Datasheet". NVIDIA. Retrieved 15 November 2023.
  36. Jarred Walton (22 March 2022). "Nvidia Reveals Hopper H100 GPU With 80 Billion Transistors". Tom's Hardware. Retrieved 24 March 2022.
  37. Vincent, James (22 March 2022). "Nvidia reveals H100 GPU for AI and teases 'world's fastest AI supercomputer'". The Verge. Retrieved 16 May 2022.
  38. Mellor, Chris (31 March 2022). "Nvidia Eos AI supercomputer will need a monster storage system". Blocks and Files. Retrieved 21 May 2022.
  39. Moss, Sebastian. "Nvidia announces Eos, "world's fastest AI supercomputer"". Data Center Dynamics. Retrieved 21 May 2022.
  40. Mellor, Chris (31 March 2022). "Nvidia Eos AI supercomputer will need a monster storage system". Blocks and Files. Retrieved 29 April 2022.
  41. Smith, Ryan (22 March 2022). "NVIDIA Hopper GPU Architecture and H100 Accelerator Announced: Working Smarter and Harder". AnandTech.
  42. Smith, Ryan (14 May 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
  43. "NVIDIA Tesla V100 tested: near unbelievable GPU power". TweakTown. 17 September 2017.