# Major Components
Major components for the DGX SuperPOD configuration are listed in Table 7. These counts are representative of the configuration and must be finalized based on the actual design.
| Count | Component | Recommended Model |
|---|---|---|
| **Racks** | | |
| 70 | Rack (Legrand) | NVIDPD13 |
| **Nodes** | | |
| 127 | DGX nodes | NVIDIA DGX systems |
| 4 | UFM appliances | NVIDIA Unified Fabric Manager Appliance |
| 7 | Management servers | Intel-based x86, 2-socket, 24 cores or greater, 384 GB RAM, OS drives (2 × 480 GB M.2 or SATA/SAS SSD in RAID 1), 7.68 TB (raw) NVMe, 4 × NDR VPI ports, TPM |
| **Management Network** | | |
| 4 | In-band management | NVIDIA SN5600 Spectrum-4 based 800GbE 2U Open Ethernet switch with Cumulus Linux Authentication, 64 OSFP ports and 1 SFP28 port, 2 power supplies (AC), x86 CPU, secure boot, standard depth, C2P airflow, tool-less rail kit, 920-9N42F-00RI-7C0 |
| 2 | In-band management | NVIDIA SN2201 switch with Cumulus Linux, 48 RJ45 ports, P2C, 920-9N110-00F1-0C0 |
| 17 | OOB management | NVIDIA SN2201 switch with Cumulus Linux, 48 RJ45 ports, P2C, 920-9N110-00F1-0C0 |
| **Compute Fabric** | | |
| 48 | Fabric switches | NVIDIA Quantum QM9700 switch, 920-9B210-00FN-0M0 |
| **Storage Fabric** | | |
| 16 | Fabric switches | NVIDIA Quantum QM9700 switch, 920-9B210-00FN-0M0 |
| **PDUs** | | |
| 192 | Rack PDUs | Raritan PX3-5878I2R-P1Q2R1A15D5 |
| 12 | Rack PDUs | Raritan PX3-5747V-V2 |
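
The compute fabric switch count in Table 7 can be checked against the QM9700 port budget. The sketch below works through that arithmetic; the 32-leaf/16-spine split of the 48 switches and the eight NDR compute ports per DGX system are assumptions based on a standard four-SU layout rather than values stated in the table.

```python
# Hypothetical port-budget check for the compute fabric.
# Assumptions (not stated in Table 7): 32 leaf + 16 spine switches,
# 8 NDR compute ports per DGX system.
DGX_NODES = 127                 # from Table 7
PORTS_PER_DGX = 8               # assumed compute HCA ports per node
NDR_PORTS_PER_QM9700 = 64       # 32 OSFP cages, 2 x 400 Gbps NDR ports each
LEAF, SPINE = 32, 16            # assumed split of the 48 fabric switches

node_links = DGX_NODES * PORTS_PER_DGX          # 1016 node-to-leaf links
leaf_ports = LEAF * NDR_PORTS_PER_QM9700        # 2048 ports at the leaf layer
spine_ports = SPINE * NDR_PORTS_PER_QM9700      # 1024 ports at the spine layer

assert LEAF + SPINE == 48                       # matches the Table 7 switch count
assert leaf_ports - node_links >= spine_ports   # leaf uplinks can fill every spine port
print(f"{node_links} node links; {leaf_ports - node_links} leaf ports left for uplinks")
```

Under these assumptions, the 48 switches support a fully provisioned two-tier fabric with 1,024 leaf-to-spine links.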
Associated cables and transceivers are listed in Table 8. Networking components are multi-mode fiber except where single-mode parts are noted in the table.
| Count | Component | Connection | Recommended Model |
|---|---|---|---|
| **In-Band Ethernet Cables** | | | |
| 68 | Ethernet 800 Gb/s (2× 400 Gb/s) twin-port OSFP, DR8, multimode, parallel, 8-channel transceiver | Leaf and spine transceivers | 980-9I510-F4NS00 |
| 2 | DR1 splitter cable, 1× 400 Gb/s to 4× 100 Gb/s | Spine to SN2201 leaf, to NFS | Off-the-shelf, POT EFALU-PA2S1Q-005M or similar |
| 4 | 100 Gb/s single-mode 1-lane (DR1) QSFP28 optical transceiver | Spine to SN2201, leaf-side transceivers | 980-9I042-00C000 |
| 4 | 2× 400GbE twin-port OSFP 100-meter single-mode Ethernet transceiver | Spine to SN2201, spine side; NFS connectivity to SN5600 | 980-9I30H-F4NM00 |
| 254 | DGX system 400G QSFP112 multimode transceivers | QSFP112 transceivers on DGX systems | 980-9I693-00NS00 |
| 134 | MMF MPO12 APC to 2× MPO12 APC, 10 m | DGX systems and management nodes to leaf | 980-9I570-00N030 |
| 14 | Ethernet 400 Gb/s single-port OSFP multimode parallel transceiver | OSFP transceivers on Slurm and management nodes | 980-9I51S-F4NS00 |
| Varies | Varies | Customer NFS storage | Varies |
| 8 | NVIDIA passive copper cable, IB twin-port NDR, up to 800 Gb/s, OSFP, 1.5 m | Leaf to spine layer | 980-9IA0Q-00N01A |
| 4 | Cat5e, UFM to in-band | UFM to in-band | Cat5e |
| **OOB Ethernet Cables** | | | |
| 381 | 1 Gbps | DGX systems | Cat5e |
| 64 | 1 Gbps | InfiniBand switches | Cat5e |
| 11 | 1 Gbps | Management/UFM nodes | Cat5e |
| 6 | 1 Gbps | In-band Ethernet switches | Cat5e |
| 2 | 1 Gbps | UFM back-to-back | Cat5e |
| 204 | 1 Gbps | PDUs | Cat5e |
| 34 | 100 Gb/s single-mode 1-lane (DR1) QSFP28 optical transceiver | Spine to out-of-band SN2201 | 980-9I042-00C000 |
| 10 | 2× 400GbE twin-port OSFP 100-meter single-mode Ethernet transceiver | Spine to SN2201, spine side | 980-9I30H-F4NM00 |
| 20 | DR1 splitter cable, 1× 400 Gb/s to 4× 100 Gb/s | Spine to SN2201 leaf | Off-the-shelf, POT EFALU-PA2S1Q-005M or similar |
| Varies | 1 Gbps | Storage | Cat5e |
| **Compute InfiniBand Cabling** | | | |
| 2044 | NDR fiber cables¹, 400 Gbps | DGX systems to leaf, leaf to spine, UFM to leaf ports | 980-9I570-00N030 |
| 1536 | Switch 2× 400G OSFP finned-top multimode transceivers | Leaf and spine transceivers | 980-9I510-00NS00 |
| 508 | System 2× 400G OSFP flat-top multimode transceivers | Transceivers in the DGX B200 systems | 980-9I51A-00NS00 |
| 4 | UFM system 400G OSFP multimode transceivers | UFM to leaf connections | 980-9I51S-00NS00 |
| **InfiniBand Storage Cables¹ ²** | | | |
| 498 | NDR fiber cables, 400 Gbps | DGX systems to leaf, leaf to spine, UFM to leaf connections | 980-9I570-00N030 |
| 48 | NDR AOC cables, 2× 200 Gbps QSFP56 to QSFP56 | Storage | 980-9I117-00H030 |
| 4 | UFM system 400G OSFP multimode transceivers | UFM to leaf connections | 980-9I51S-00NS00 |
| 369 | Switch 2× 400G OSFP finned-top multimode transceivers | Leaf and spine transceivers | 980-9I510-00NS00 |
| 254 | DGX system 400G QSFP112 multimode transceivers | QSFP112 transceivers | 980-9I693-00NS00 |
| 4 | HDR 400 Gbps to 2× 200 Gbps AOC cables | Slurm management | 980-9I117-00H030 |
| Varies | Storage cables, 400 Gbps to 2× 200 Gbps AOC cables | Varies | 980-9I117-00H030 |
| **Ethernet Storage Cables¹ ²** | | | |
| 514 | MMF MPO12 APC to 2× MPO12 APC, 10 m | DGX systems to leaf, leaf to spine, and to Slurm nodes | 980-9I570-00N030 |
| 386 | 2× 400GbE twin-port OSFP 50-meter multimode Ethernet transceiver | Leaf and spine transceivers | 980-9I510-F4NS00 |
| 8 | 400 Gb/s single-port OSFP, multimode SR4, 50 m | OSFP transceivers on Slurm management nodes | 980-9I51S-00NS00 |
| 254 | 400GbE single-port QSFP112 50-meter multimode Ethernet transceiver | QSFP112 transceivers on DGX systems | 980-9I693-F4NS00 |
| Varies | 100 Gb/s single-mode 1-lane (DR1) QSFP28 optical transceiver | Leaf transceivers for storage | 980-9I042-00C000 |
| Varies | 800 Gb/s to 4× 100 Gb/s splitter cable | Leaf to storage | Varies |

¹ Part number depends on the exact cable lengths required by the data center layout.

² Count and cable type depend on the specific storage selected.
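
As a rough sanity check, the cable counts in Table 8 can be reconciled against the component counts in Table 7. The sketch below tallies the fixed-count OOB Cat5e runs against the OOB switch port budget and rebuilds the 2,044-cable compute-fabric total; the per-link breakdown (1,016 node-to-leaf, 1,024 leaf-to-spine, 4 UFM-to-leaf) follows from the same assumed topology as above and is not stated explicitly in the table.

```python
# Hypothetical reconciliation of Table 8 cable counts with Table 7 components.

# Fixed-count OOB Cat5e runs from Table 8 (storage runs vary and are excluded).
oob_runs = {
    "DGX systems": 381,
    "InfiniBand switches": 64,      # 48 compute + 16 storage switches (Table 7)
    "Management/UFM nodes": 11,     # 7 management servers + 4 UFM appliances
    "In-band Ethernet switches": 6,
    "UFM back-to-back": 2,          # direct links; may not land on OOB switch ports
    "PDUs": 204,                    # 192 + 12 rack PDUs (Table 7)
}
oob_ports = 17 * 48                 # 17 SN2201 OOB switches x 48 RJ45 ports
assert sum(oob_runs.values()) <= oob_ports   # 668 runs fit within 816 ports

# Compute-fabric NDR fiber count, assuming 8 compute ports per DGX system,
# 32 leaf switches with 32 uplinks each, and one leaf link per UFM appliance.
node_to_leaf = 127 * 8      # 1016
leaf_to_spine = 32 * 32     # 1024
ufm_to_leaf = 4
assert node_to_leaf + leaf_to_spine + ufm_to_leaf == 2044   # matches Table 8
```

The storage-fabric and Ethernet counts can be cross-checked the same way once the storage platform is selected.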