Massive AI inference performance for large models
94 GB HBM3 memory with 3.9 TB/s bandwidth
High compute performance across FP64, FP8, and INT8 precisions
Multi-Instance GPU (MIG) support for up to 7 instances
NVLink (600 GB/s) and PCIe Gen5 for fast connectivity
Due to product availability and ongoing tariff volatility, pricing is subject to change without notice; please reconfirm all prices and quotes at the time of purchase and shipment.