GPU Server R2I-GP-252142
High-Performance AI & HPC Platform
The GPU Server R2I-GP-252142 is a 2U single-socket platform designed for AI training, inference, and high-performance computing. It supports Ampere® Altra® Max or Altra® processors built on 7nm technology, delivering high core counts and energy efficiency for modern data center workloads.
High-Speed Interconnect & Expansion
The server supports up to 2 x NVIDIA® H100 PCIe Gen4 GPU cards and 2 x NVIDIA® BlueField-2 DPUs. It features 2 x FHFL PCIe Gen4 x16 slots for GPUs and 2 x LP PCIe Gen4 x16 slots on the rear side for DPUs. With 8-channel DDR4 RDIMM/LRDIMM and 16 DIMM slots, it provides ample memory bandwidth for compute-intensive tasks.
Optimized Storage & Connectivity
The GPU Server R2I-GP-252142 includes 4 x 2.5" Gen4 NVMe hot-swappable bays and 2 x M.2 slots with PCIe Gen4 x4 interface for fast storage access. Networking is provided by 2 x 1GbE LAN ports (Intel® I350-AM2) and 1 x dedicated management port for remote access and monitoring.
Management & Security
- Dedicated management port for out-of-band management
- Hardware-level root of trust support for secure boot and firmware integrity
- Supports remote monitoring and diagnostics via GIGABYTE Management Console (see the sketch after this list)
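As an illustration of out-of-band access over the dedicated management port, the sketch below polls basic system health via Redfish. The BMC address, credentials, and the assumption that the management controller exposes a standard Redfish API are placeholders, not confirmed specifics of this server.

```python
# Hypothetical sketch: query power state and health out-of-band via Redfish.
# Assumes the BMC exposes a standard Redfish API; the address and credentials
# below are placeholders.
import requests

BMC = "https://10.0.0.50"        # placeholder BMC address
AUTH = ("admin", "password")     # placeholder credentials

# Fetch the first ComputerSystem resource and print its power state and health.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
system_uri = systems["Members"][0]["@odata.id"]
system = requests.get(f"{BMC}{system_uri}", auth=AUTH, verify=False).json()

print("Power state:", system.get("PowerState"))
print("Health:", system.get("Status", {}).get("Health"))
```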
Target Applications
The GPU Server R2I-GP-252142 is ideal for AI model training, edge computing, and cloud-native applications. Its compact 2U form factor and support for Ampere® Altra® processors make it a versatile solution for modern data centers and AI infrastructure deployments.
NTS AI Software Stacks
Purpose-Built for High-Performance AI Infrastructure
LaunchPad – Instant AI Productivity
LaunchPad is your fast track to AI innovation. Designed for immediate productivity, it comes with preloaded Conda environments including TensorFlow, PyTorch, and RAPIDS. Ideal for data scientists, research labs, and rapid proof-of-concept (POC) development, LaunchPad empowers users to start training AI models on Day One—no setup required.
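To give a sense of what Day One looks like in practice, the snippet below is a minimal sanity check one might run inside a preloaded Conda environment. The environment name and the choice of PyTorch here are illustrative assumptions, not a documented part of LaunchPad.

```python
# Minimal day-one sanity check inside a preloaded Conda environment
# (e.g. `conda activate pytorch`; the environment name is an assumption).
import torch

# Confirm the CUDA runtime sees the installed GPUs (e.g. the H100 PCIe cards).
print("CUDA available:", torch.cuda.is_available())
print("GPUs visible:", torch.cuda.device_count())

# Run a small matrix multiply on the first GPU as a smoke test.
if torch.cuda.is_available():
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b
    torch.cuda.synchronize()
    print("Matmul OK, result shape:", tuple(c.shape))
```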
FlexBox – Scalable, Hybrid AI Deployment
FlexBox delivers seamless scalability for modern AI workflows. Featuring OCI-compliant containers and the NVIDIA Container Toolkit, it’s built for ML Ops, CI/CD pipelines, and version-controlled deployments. Whether you're running on-prem, in the cloud, or across hybrid environments, FlexBox ensures consistent, portable, and efficient AI operations.
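To illustrate the container-based workflow, the sketch below uses the Docker SDK for Python to launch a GPU-enabled container through the NVIDIA Container Toolkit. The image tag and the use of the Docker SDK are illustrative assumptions rather than FlexBox-specific tooling.

```python
# Illustrative sketch: start an OCI container with GPU access exposed through
# the NVIDIA Container Toolkit, using the Docker SDK for Python.
import docker

client = docker.from_env()

# Request all GPUs on the host, equivalent to `docker run --gpus all`.
gpu_request = docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])

output = client.containers.run(
    image="nvcr.io/nvidia/pytorch:24.01-py3",  # placeholder image tag
    command="nvidia-smi -L",                   # list the GPUs visible inside
    device_requests=[gpu_request],
    remove=True,
)
print(output.decode())
```

Running the same container image on-prem, in the cloud, or in a CI/CD pipeline keeps the software environment identical across deployments, which is the portability the stack is built around.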
ForgeKit – Full Control & Compliance
ForgeKit is engineered for environments demanding maximum control, security, and compliance. With a minimal install—just the driver and CUDA—it’s perfect for air-gapped, regulated, or mission-critical deployments. ForgeKit is customizable by design, making it the go-to stack for enterprises prioritizing data sovereignty and infrastructure governance.
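Because ForgeKit ships with only the driver and CUDA, a framework-free check such as the sketch below (using ctypes against the CUDA driver library) can confirm the GPUs are reachable. The library path and the overall approach are assumptions for illustration, not part of ForgeKit itself.

```python
# Framework-free GPU check suited to a driver-plus-CUDA-only install:
# talk to the CUDA driver API (libcuda) directly via ctypes.
import ctypes

cuda = ctypes.CDLL("libcuda.so.1")  # assumes the NVIDIA driver is installed

# Initialise the driver API and count the devices it can see.
assert cuda.cuInit(0) == 0, "cuInit failed"
count = ctypes.c_int()
assert cuda.cuDeviceGetCount(ctypes.byref(count)) == 0

# Print the name of each device (e.g. the H100 PCIe cards).
for idx in range(count.value):
    device = ctypes.c_int()
    cuda.cuDeviceGet(ctypes.byref(device), idx)
    name = ctypes.create_string_buffer(100)
    cuda.cuDeviceGetName(name, 100, device)
    print(f"GPU {idx}: {name.value.decode()}")
```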