| Rack-Level Specifications | |
|---|---|
| GPUs | 72 NVIDIA B200 GPUs via NVIDIA GB200 Grace Blackwell Superchips |
| CPUs | 36 NVIDIA Grace CPUs via NVIDIA GB200 Grace Blackwell Superchips |
| GPU Memory | Up to 13.4 TB HBM3e |
| System Memory | Up to 17 TB LPDDR5X |
| Storage | 144 E1.S PCIe 5.0 drive bays |
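
A quick sanity check on these totals, assuming per-device capacities that the table itself does not state: roughly 186 GB of HBM3e per B200 in this configuration, 480 GB of LPDDR5X per Grace CPU, and 8 E1.S bays per 1U node. The sketch below shows how the rack-level figures fall out of those assumed values.

```python
# Sanity-check arithmetic for the rack-level totals above.
# Per-device capacities are assumptions, not values from this table:
#   ~186 GB HBM3e per B200 in the GB200 configuration,
#   480 GB LPDDR5X per Grace CPU, 8 E1.S bays per node.

GPUS, CPUS, NODES = 72, 36, 18
HBM_PER_GPU_GB = 186        # assumed usable HBM3e per B200
LPDDR_PER_CPU_GB = 480      # assumed LPDDR5X per Grace CPU
BAYS_PER_NODE = 8           # assumed E1.S bays per ARS-121GL-NBO node

print(f"GPU memory:    {GPUS * HBM_PER_GPU_GB / 1000:.1f} TB HBM3e")      # ~13.4 TB
print(f"System memory: {CPUS * LPDDR_PER_CPU_GB / 1000:.1f} TB LPDDR5X")  # ~17.3 TB, rounded to 17 TB above
print(f"Drive bays:    {NODES * BAYS_PER_NODE}")                          # 144
```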

| Rack Configuration | |
|---|---|
| Nodes | 18x 1U ARS-121GL-NBO |
| Networking | 9x NVIDIA NVLink Switches; 4 NVLink ports per compute tray connect all 72 GPUs, providing 1.8 TB/s GPU-to-GPU interconnect |
| Power | 8x 1U 33kW power shelves (6x 5.5kW PSUs each); 132kW total rack power |
| Liquid Cooling | 1x in-rack Supermicro 250kW-capacity CDU with redundant PSUs and dual hot-swap pumps |
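
The power figures are internally consistent if the eight shelves are read as an N+N (4+4) redundant set; that redundancy scheme is an assumption, since the table only gives the shelf and PSU ratings and the 132kW total.

```python
# Power-budget sanity check. The N+N (4+4 shelf) redundancy reading of
# "132kW total rack power" is an assumption; the table does not state
# the redundancy scheme.

PSU_KW, PSUS_PER_SHELF, SHELVES = 5.5, 6, 8

shelf_kw = PSU_KW * PSUS_PER_SHELF   # 33 kW per 1U shelf
installed_kw = shelf_kw * SHELVES    # 264 kW installed capacity
usable_kw = installed_kw / 2         # 132 kW if shelves are paired N+N

print(f"{shelf_kw:.0f} kW/shelf, {installed_kw:.0f} kW installed, "
      f"{usable_kw:.0f} kW usable under the assumed N+N scheme")
```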

| Enclosure | |
|---|---|
| Rack Enclosure | 48U, 19" width |
| Dimensions (W x D x H) | 600 x 1068 x 2236 mm |

| Networking | |
|---|---|
| Compute Fabric | Up to 400 Gb/s NVIDIA Quantum-2 InfiniBand or Spectrum-X Ethernet with NVIDIA ConnectX®-7 adapters or BlueField®-3 SuperNICs |
| In-Band Network | Up to 200 Gb/s in-band management speeds with NVIDIA BlueField-3 DPUs |
| Out-of-Band Network | 1G/10G Ethernet out-of-band management |
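
For scale, the per-port compute-fabric speed implies the aggregate bandwidth below; the one-400Gb/s-port-per-GPU mapping is an assumption, as the table does not state the NIC-to-GPU ratio.

```python
# Rough aggregate compute-fabric bandwidth, assuming one 400 Gb/s
# ConnectX-7 / BlueField-3 port per GPU (a common 1:1 ratio, but an
# assumption -- the table does not give the NIC-to-GPU mapping).

GPUS = 72
PORT_GBPS = 400

total_tbps = GPUS * PORT_GBPS / 1000   # 28.8 Tb/s
print(f"Aggregate fabric bandwidth: {total_tbps:.1f} Tb/s "
      f"({total_tbps / 8:.1f} TB/s)")
```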

| Liquid Cooling | |
|---|---|
| Liquid Cooling Options | 1x in-rack Supermicro 250kW-capacity CDU with redundant PSUs and dual hot-swap pumps |
| | 1.3MW-capacity in-row CDU |
| | 180kW/240kW-capacity liquid-to-air solutions for facilities without a cooling tower or facility water supply |
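
A rough headroom check against the 132kW rack power above, assuming the liquid loop must absorb essentially the full electrical load as heat (a simplifying assumption).

```python
# Cooling-capacity headroom check. Assumes the CDU absorbs the full
# 132 kW rack power as liquid-cooled heat load -- a simplification,
# since some fraction is typically rejected to air.

RACK_KW = 132
IN_RACK_CDU_KW = 250
IN_ROW_CDU_KW = 1300

print(f"In-rack CDU headroom: {IN_RACK_CDU_KW / RACK_KW:.2f}x")  # ~1.89x
print(f"Racks per in-row CDU: {IN_ROW_CDU_KW // RACK_KW}")       # 9
```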