4U GPU Server GS-2U-GB2935
Extreme GPU Computing Platform
The GS-2U-GB2935 is a 4U dual-socket GPU server powered by AMD EPYC™ 7003 series processors. It supports up to eight dual-slot active or passive GPUs, including the NVIDIA® A100 and AMD Instinct™ MI100, and is optimized for AI training, HPC, and enterprise workloads. With support for NVIDIA NVLink® and AMD Infinity Fabric™, it delivers scalable GPU-to-GPU performance.
High-Speed Interconnect & Expansion
The server features up to 11 PCIe 4.0 slots, including eight PCIe 4.0 x16 slots for GPUs. It supports dual NVMe, dual M.2, and OCP 3.0 modules for flexible networking and storage. Storage options include up to eight 3.5" or 2.5" bays for NVMe/SATA/SAS drives, delivering high-throughput storage performance.
Optimized Cooling & Power Efficiency
The 4U GPU Server GS-2U-GB2935 features an optimized thermal layout with independent CPU and GPU airflow tunnels. It includes up to four 3000W 80 PLUS Titanium redundant power supplies, supporting 2+2, 2+1, or 1+1 configurations for uninterrupted and efficient operation.
Management & Security
- ASMB10-iKVM with ASPEED AST2600 for out-of-band remote management, as sketched below
- Integrated PFR FPGA as platform Root-of-Trust for firmware resiliency
- TPM 2.0 support for secure boot and hardware-level security
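The out-of-band management path above can also be scripted. The sketch below is a minimal example, assuming the ASMB10-iKVM firmware exposes a standard DMTF Redfish endpoint; the BMC address and credentials are placeholders, not shipped defaults.

```python
# Minimal sketch: query the server's power state out-of-band via the BMC.
# Assumes the ASMB10-iKVM (ASPEED AST2600) firmware exposes a standard
# DMTF Redfish endpoint; the address and credentials below are placeholders.
import requests

BMC = "https://192.0.2.10"       # hypothetical BMC address
AUTH = ("admin", "changeme")     # hypothetical credentials

# Enumerate the systems managed by this BMC.
systems = requests.get(f"{BMC}/redfish/v1/Systems",
                       auth=AUTH, verify=False, timeout=10).json()

for member in systems.get("Members", []):
    # Each member links to a ComputerSystem resource with Id and PowerState.
    system = requests.get(f"{BMC}{member['@odata.id']}",
                          auth=AUTH, verify=False, timeout=10).json()
    print(system.get("Id"), system.get("PowerState"))
```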
Target Applications
The 4U GPU Server GS-2U-GB2935 is purpose-built for AI training, scientific computing, and enterprise-grade inference. Its scalable GPU architecture, PCIe 4.0 expansion, and robust power and cooling design make it ideal for modern data centers and research institutions.
NTS AI Software Stacks
Purpose-Built for High-Performance AI Infrastructure
LaunchPad – Instant AI Productivity
LaunchPad is your fast track to AI innovation. Designed for immediate productivity, it comes with preloaded Conda environments including TensorFlow, PyTorch, and RAPIDS. Ideal for data scientists, research labs, and rapid proof-of-concept (POC) development, LaunchPad empowers users to start training AI models on Day One—no setup required.
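As an illustrative sanity check on a LaunchPad node, the sketch below confirms that the preloaded frameworks can see the installed GPUs; it assumes the PyTorch and TensorFlow packages described above are already present in the active Conda environment, with no additional setup.

```python
# Minimal sketch: verify GPU visibility from a preloaded LaunchPad Conda
# environment. Assumes PyTorch and TensorFlow are already installed, as
# described for the stack; nothing is installed or configured here.
import torch
import tensorflow as tf

# PyTorch: report how many CUDA devices are visible and their names.
print("PyTorch CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print("  GPU", i, torch.cuda.get_device_name(i))

# TensorFlow: list the GPU devices it can place work on.
print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
```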
FlexBox – Scalable, Hybrid AI Deployment
FlexBox delivers seamless scalability for modern AI workflows. Featuring OCI-compliant containers and the NVIDIA Container Toolkit, it’s built for MLOps, CI/CD pipelines, and version-controlled deployments. Whether you’re running on-prem, in the cloud, or across hybrid environments, FlexBox ensures consistent, portable, and efficient AI operations.
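As a hypothetical example of a FlexBox-style deployment, the sketch below launches a GPU-enabled OCI container through Docker using the NVIDIA Container Toolkit’s --gpus flag; the image tag is illustrative, not a prescribed configuration.

```python
# Minimal sketch: run nvidia-smi inside an OCI container with GPU access,
# assuming Docker and the NVIDIA Container Toolkit are installed as the
# FlexBox stack describes. The image tag is an example, not a requirement.
import subprocess

IMAGE = "nvidia/cuda:12.2.0-base-ubuntu22.04"   # example public CUDA base image

result = subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all", IMAGE, "nvidia-smi"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```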
ForgeKit – Full Control & Compliance
ForgeKit is engineered for environments demanding maximum control, security, and compliance. With a minimal install—just the driver and CUDA—it’s perfect for air-gapped, regulated, or mission-critical deployments. ForgeKit is customizable by design, making it the go-to stack for enterprises prioritizing data sovereignty and infrastructure governance.
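Because ForgeKit installs only the driver and CUDA, even basic health checks can stay framework-free. The sketch below is one possible approach, calling the CUDA driver API directly through Python’s standard-library ctypes to confirm the driver loads and to count visible devices.

```python
# Minimal sketch: verify the bare driver + CUDA stack that ForgeKit installs,
# using only the Python standard library (no frameworks). Calls the CUDA
# driver API (libcuda) directly via ctypes.
import ctypes

cuda = ctypes.CDLL("libcuda.so.1")   # CUDA driver API library shipped with the driver

# Initialize the driver API; a nonzero CUresult indicates an error.
assert cuda.cuInit(0) == 0, "cuInit failed: driver not functional"

version = ctypes.c_int()
cuda.cuDriverGetVersion(ctypes.byref(version))
print("CUDA driver API version:", version.value)

count = ctypes.c_int()
cuda.cuDeviceGetCount(ctypes.byref(count))
print("CUDA devices visible:", count.value)
```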