SuperServer R6I-GP-221412
High-Performance Architecture
The SuperServer R6I-GP-221412 is a 6U multi-processor SuperServer built for extreme compute workloads. Featuring 8x Socket E (LGA-4677) 4th Gen Intel® Xeon® Scalable processors, it supports up to 32TB of DDR5-4800 ECC RDIMM memory across 128 DIMM slots, making it ideal for in-memory databases, virtualization, and scale-up HPC environments.
GPU & Expansion Capabilities
Designed for maximum expansion, the system supports up to 12 double-width GPUs and offers 24 PCIe 5.0 slots for accelerators and add-on cards. It includes 24x 2.5" hot-swap NVMe/SAS/SATA bays and 2x internal M.2 NVMe/SATA slots for high-speed storage.
Cooling & Power Efficiency
The system is cooled by 10 heavy-duty counter-rotating fans with optimized fan speed control and thermal monitoring. It includes 4x 2600W (2+2) redundant Titanium-level power supplies for high-efficiency, uninterrupted operation.
Management & Monitoring
- SuperCloud Composer®, Supermicro Server Manager (SSM), and SuperDoctor® 5 for orchestration and diagnostics.
- Supermicro Update Manager (SUM) and SuperServer Automation Assistant (SAA) for firmware updates and automation.
- TPM 2.0, Secure Boot, and Silicon Root of Trust for enterprise-grade security and compliance.
Ideal Use Cases
Perfect for national labs, enterprise data centers, and mission-critical AI and analytics workloads, the SuperServer R6I-GP-221412 delivers unmatched compute density, memory capacity, and I/O scalability.
NTS AI Software Stacks
Purpose-Built for High-Performance AI Infrastructure
LaunchPad – Instant AI Productivity
LaunchPad is your fast track to AI innovation. Designed for immediate productivity, it comes with preloaded Conda environments including TensorFlow, PyTorch, and RAPIDS. Ideal for data scientists, research labs, and rapid proof-of-concept (POC) development, LaunchPad empowers users to start training AI models on Day One, with no setup required.
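As a quick illustration, a first-run check inside one of the preloaded Conda environments might look like the sketch below; the environment name "launchpad" is hypothetical and not part of the stack's documented layout.

```python
# Quick sanity check inside a preloaded LaunchPad Conda environment
# (environment name "launchpad" is hypothetical):
#
#   conda activate launchpad && python check_stack.py

import torch             # preloaded deep learning framework
import tensorflow as tf  # preloaded deep learning framework

# Confirm both frameworks can see the system's GPUs.
print("PyTorch CUDA available:", torch.cuda.is_available())
print("PyTorch GPU count:     ", torch.cuda.device_count())
print("TensorFlow GPUs:       ", tf.config.list_physical_devices("GPU"))
```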
FlexBox – Scalable, Hybrid AI Deployment
FlexBox delivers seamless scalability for modern AI workflows. Featuring OCI-compliant containers and the NVIDIA Container Toolkit, it’s built for MLOps, CI/CD pipelines, and version-controlled deployments. Whether you're running on-prem, in the cloud, or across hybrid environments, FlexBox ensures consistent, portable, and efficient AI operations.
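To illustrate the container workflow, the sketch below launches a GPU-enabled OCI container through the NVIDIA Container Toolkit using the Docker SDK for Python; the image tag and the choice of the Python SDK (rather than the docker CLI) are assumptions for the example, not part of FlexBox itself.

```python
# Minimal sketch: run a GPU-enabled OCI container from Python via the
# Docker SDK (pip install docker). GPU passthrough relies on the NVIDIA
# Container Toolkit being installed on the host.
import docker

client = docker.from_env()

# Request all GPUs for the container, equivalent to `docker run --gpus all`.
gpu_request = docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])

output = client.containers.run(
    image="nvcr.io/nvidia/pytorch:24.05-py3",  # placeholder image tag
    command="nvidia-smi -L",                   # list the GPUs the container sees
    device_requests=[gpu_request],
    remove=True,
)
print(output.decode())
```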
ForgeKit – Full Control & Compliance
ForgeKit is engineered for environments demanding maximum control, security, and compliance. With a minimal install consisting of just the NVIDIA driver and CUDA, it’s perfect for air-gapped, regulated, or mission-critical deployments. ForgeKit is customizable by design, making it the go-to stack for enterprises prioritizing data sovereignty and infrastructure governance.
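As an illustration of what a driver-plus-CUDA-only node can still do, the sketch below queries the GPUs directly through the CUDA driver API using ctypes, with no Python frameworks required; the library path and buffer size are assumptions for the example.

```python
# Minimal sketch for a driver-plus-CUDA-only ForgeKit node: enumerate GPUs
# through the CUDA driver API (libcuda) with ctypes, no frameworks needed.
import ctypes

cuda = ctypes.CDLL("libcuda.so.1")  # shipped with the NVIDIA driver

# cuInit must be called before any other driver API function; 0 == CUDA_SUCCESS.
assert cuda.cuInit(0) == 0, "CUDA driver initialization failed"

count = ctypes.c_int()
assert cuda.cuDeviceGetCount(ctypes.byref(count)) == 0
print(f"CUDA devices visible: {count.value}")

# Print each device's name via cuDeviceGet / cuDeviceGetName.
for index in range(count.value):
    device = ctypes.c_int()
    cuda.cuDeviceGet(ctypes.byref(device), index)
    name = ctypes.create_string_buffer(100)
    cuda.cuDeviceGetName(name, 100, device)
    print(f"  device {index}: {name.value.decode()}")
```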