2U GPU-Server GS-2U-GB2934

HPC/AI Server - 5th/4th Gen Intel® Xeon® Scalable - 2U DP 4 x PCIe Gen5 GPUs
  • Supports up to 4 x dual-slot PCIe Gen5 GPUs
  • Dual 5th/4th Gen Intel® Xeon® Scalable Processors
  • 8-Channel DDR5 RDIMM, 24 x DIMMs
  • Dual ROM Architecture
  • 2 x 10Gb/s LAN ports via Intel® X710-AT2
  • 4 x 2.5" Gen5 NVMe/SATA/SAS hot-swap bays
  • 4 x 2.5" SATA/SAS hot-swap bays
  • 4 x FHFL PCIe Gen5 x16 or 8 x FHFL PCIe Gen5 x8 slots for GPUs
  • 2 x LP PCIe Gen5 x16 slots for add-in cards
  • Dual 3000W 80 PLUS Titanium redundant power supplies

     


Extreme GPU Computing Platform

The 2U GPU-Server GS-2U-GB2934 is a 2U dual-socket GPU server designed for AI training, inference, and HPC workloads. It supports up to 4 dual-slot PCIe Gen5 GPUs and is powered by dual 5th or 4th Gen Intel® Xeon® Scalable processors. With 8-channel DDR5 memory per CPU and up to 24 DIMM slots, it delivers high memory bandwidth and compute density.

 

High-Speed Interconnect & Expansion

The server includes 4 x FHFL PCIe Gen5 x16 or 8 x FHFL Gen5 x8 slots for GPUs, and 2 x LP PCIe Gen5 x16 slots for add-in cards. Storage options include 4 x 2.5" Gen5 NVMe/SATA/SAS and 4 x 2.5" SATA/SAS hot-swap bays. It also supports Intel® VROC and optional RAID cards for flexible storage configurations.

 

Optimized Cooling & Power Efficiency

The 2U GPU-Server GS-2U-GB2934 features a high-efficiency thermal design and includes dual 3000W 80 PLUS Titanium redundant power supplies. It ensures stable operation under full GPU load and supports both 100–240V AC and 240V DC input.

 

Management & Security

  • Integrated ASPEED AST2600 BMC with Redfish support (see the example after this list)
  • Dual ROM architecture for firmware redundancy
  • TPM 2.0 header with SPI interface for secure boot and hardware root of trust
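
The Redfish interface exposed by the AST2600 BMC can be scripted over HTTPS. The sketch below (Python with the requests library) reads basic system status; the BMC address, credentials, and the exact Systems member path (Systems/1 here) are placeholders that vary by firmware.

    import requests

    # Placeholder BMC address and credentials; substitute your own.
    BMC = "https://192.0.2.10"
    AUTH = ("admin", "password")

    # Query the standard Redfish Systems resource. verify=False is used only
    # because many BMCs ship with self-signed certificates.
    resp = requests.get(f"{BMC}/redfish/v1/Systems/1",
                        auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    system = resp.json()

    print("Model:      ", system.get("Model"))
    print("Power state:", system.get("PowerState"))
    print("Health:     ", system.get("Status", {}).get("Health"))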

 

Target Applications

The 2U GPU-Server GS-2U-GB2934 is ideal for AI training, scientific computing, and enterprise-grade inferencing. Its compact 2U form factor, PCIe Gen5 GPU support, and robust power and cooling make it a top choice for high-density data centers and research institutions.

 

NTS AI Software Stacks

Purpose-Built for High-Performance AI Infrastructure

 

LaunchPad – Instant AI Productivity

LaunchPad is your fast track to AI innovation. Designed for immediate productivity, it comes with preloaded Conda environments including TensorFlow, PyTorch, and RAPIDS. Ideal for data scientists, research labs, and rapid proof-of-concept (POC) development, LaunchPad empowers users to start training AI models on Day One—no setup required.
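
As a quick sanity check on a LaunchPad system, the snippet below (Python, run from one of the preloaded environments; the exact environment contents are an assumption) verifies that PyTorch can see the installed GPUs and run a small operation on one of them.

    import torch

    # List the GPUs visible to PyTorch.
    print("CUDA available:", torch.cuda.is_available())
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")

    # Tiny end-to-end operation on GPU 0 to confirm the stack works.
    if torch.cuda.is_available():
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x
        torch.cuda.synchronize()
        print("Matmul OK, result norm:", y.norm().item())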

 

FlexBox – Scalable, Hybrid AI Deployment

FlexBox delivers seamless scalability for modern AI workflows. Featuring OCI-compliant containers and the NVIDIA Container Toolkit, it’s built for ML Ops, CI/CD pipelines, and version-controlled deployments. Whether you're running on-prem, in the cloud, or across hybrid environments, FlexBox ensures consistent, portable, and efficient AI operations.
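
As an illustration of the container workflow, the sketch below uses the Docker SDK for Python to run an OCI container with all GPUs passed through via the NVIDIA Container Toolkit (the equivalent of "docker run --gpus all"); the CUDA image tag is illustrative, not part of FlexBox.

    import docker

    client = docker.from_env()

    # Pass all GPUs into the container via the NVIDIA Container Toolkit.
    output = client.containers.run(
        "nvidia/cuda:12.4.1-base-ubuntu22.04",   # illustrative image tag
        command="nvidia-smi",
        device_requests=[docker.types.DeviceRequest(count=-1,
                                                    capabilities=[["gpu"]])],
        remove=True,
    )
    print(output.decode())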

 

ForgeKit – Full Control & Compliance

ForgeKit is engineered for environments demanding maximum control, security, and compliance. With a minimal install—just the driver and CUDA—it’s perfect for air-gapped, regulated, or mission-critical deployments. ForgeKit is customizable by design, making it the go-to stack for enterprises prioritizing data sovereignty and infrastructure governance.
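
Because ForgeKit ships only the driver and CUDA, a driver-level inventory check is a natural smoke test. The sketch below assumes the NVML Python bindings (the nvidia-ml-py/pynvml package) are installed on top of the base stack; it talks only to the driver, with no framework dependencies.

    from pynvml import (
        nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
        nvmlDeviceGetHandleByIndex, nvmlDeviceGetName, nvmlDeviceGetMemoryInfo,
    )

    nvmlInit()
    try:
        for i in range(nvmlDeviceGetCount()):
            handle = nvmlDeviceGetHandleByIndex(i)
            name = nvmlDeviceGetName(handle)
            if isinstance(name, bytes):      # older bindings return bytes
                name = name.decode()
            mem = nvmlDeviceGetMemoryInfo(handle)
            print(f"GPU {i}: {name}, {mem.total // 2**20} MiB total")
    finally:
        nvmlShutdown()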
