4U GPU Server GS-4U-AS8003
Extreme GPU Computing Platform
The GS-4U-AS8003 is a 4U dual-socket GPU server powered by 5th Gen Intel® Xeon® Scalable processors. It supports up to eight dual-slot active or passive GPUs and is designed for AI training, HPC, and enterprise workloads. With a PCIe 5.0 switch solution and support for NVIDIA NVLink® and BlueField DPUs, it delivers scalable, high-bandwidth performance.
High-Speed Interconnect & Expansion
The server features 13 PCIe 5.0 slots, including 8 x PCIe Gen5 x16 (FHFL) for GPUs, 4 x PCIe Gen5 x16 for NICs, and 1 x PCIe Gen5 x16 for OCP 3.0. It supports up to 32 DDR5 DIMMs and includes 8 front-accessible bays for Tri-Mode NVMe/SATA/SAS drives, enabling high-speed storage and flexible system upgrades.
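For acceptance testing after populating the GPU slots and front bays, a short inventory script can confirm that the host OS enumerates the expected devices. The following is a minimal sketch that assumes a Linux host and reads the standard sysfs paths (/sys/class/nvme and /sys/bus/pci/devices); it is not a vendor-supplied tool.

```python
from pathlib import Path

def list_nvme_controllers():
    """List NVMe controllers visible to the kernel (front Tri-Mode bays, when populated with NVMe)."""
    root = Path("/sys/class/nvme")
    if not root.exists():
        return []
    drives = []
    for ctrl in sorted(root.iterdir()):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        drives.append((ctrl.name, model))
    return drives

def list_gpu_class_pci_devices():
    """List PCIe functions whose class code is VGA (0x0300xx) or 3D controller (0x0302xx)."""
    gpus = []
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        class_code = (dev / "class").read_text().strip()
        if class_code.startswith("0x0300") or class_code.startswith("0x0302"):
            gpus.append(dev.name)
    return gpus

if __name__ == "__main__":
    print("NVMe controllers:", list_nvme_controllers())
    print("GPU-class PCIe devices:", list_gpu_class_pci_devices())
```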
Optimized Cooling & Power Efficiency
The GS-4U-AS8003 features independent CPU and GPU airflow tunnels for thermal optimization. It supports up to four 3000W 80 PLUS Titanium redundant power supplies in 2+2 or 2+1 configurations, ensuring uninterrupted operation and high energy efficiency.
Management & Security
- ASMB11-iKVM with ASPEED AST2600 for out-of-band remote management (see the sketch after this list)
- Control Center software for centralized IT infrastructure management
- Hardware-level Root-of-Trust and TPM 2.0 for secure boot and firmware integrity
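AST2600-based BMCs such as the ASMB11-iKVM typically expose a DMTF Redfish service alongside the web UI and IPMI. The sketch below queries basic system health over Redfish; the BMC address and credentials are placeholders, and because exact resource names vary by firmware build it walks the standard Systems collection rather than hard-coding a path.

```python
import requests

# Placeholders: replace with the BMC's address and real credentials.
BMC = "https://10.0.0.50"
AUTH = ("admin", "changeme")

def redfish_get(path):
    # Many BMCs ship with self-signed certificates, hence verify=False here;
    # use a proper CA bundle in production.
    resp = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

def system_summary():
    # Walk the standard Systems collection instead of hard-coding a member name,
    # since the exact resource path differs between firmware builds.
    systems = redfish_get("/redfish/v1/Systems")
    member = systems["Members"][0]["@odata.id"]
    data = redfish_get(member)
    return {
        "model": data.get("Model"),
        "power_state": data.get("PowerState"),
        "health": data.get("Status", {}).get("Health"),
    }

if __name__ == "__main__":
    print(system_summary())
```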
Target Applications
The 4U GPU Server GS-4U-AS8003 is purpose-built for enterprise AI infrastructure, scientific computing, and high-throughput inference. Its scalable GPU architecture, PCIe 5.0 switching, and robust power and cooling design make it ideal for modern data centers and research institutions.
NTS AI Software Stacks
Purpose-Built for High-Performance AI Infrastructure
LaunchPad – Instant AI Productivity
LaunchPad is your fast track to AI innovation. Designed for immediate productivity, it comes with preloaded Conda environments including TensorFlow, PyTorch, and RAPIDS. Ideal for data scientists, research labs, and rapid proof-of-concept (POC) development, LaunchPad empowers users to start training AI models on Day One—no setup required.
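As an illustration of the Day-One workflow, the snippet below checks that a preloaded PyTorch environment can see the installed GPUs. The environment name in the comment is an assumption; the actual names ship with the system image.

```python
# Assumes an activated environment that ships with PyTorch, e.g. `conda activate pytorch`
# (the environment name is illustrative; `conda env list` shows what is preloaded).
import torch

def gpu_report():
    if not torch.cuda.is_available():
        return "CUDA not available - check the driver and the active environment"
    lines = [f"Visible GPUs: {torch.cuda.device_count()}"]
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        lines.append(f"  cuda:{i}: {props.name}, {props.total_memory // 2**30} GiB")
    return "\n".join(lines)

if __name__ == "__main__":
    print(gpu_report())
```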
FlexBox – Scalable, Hybrid AI Deployment
FlexBox delivers seamless scalability for modern AI workflows. Featuring OCI-compliant containers and the NVIDIA Container Toolkit, it’s built for MLOps, CI/CD pipelines, and version-controlled deployments. Whether you're running on-prem, in the cloud, or across hybrid environments, FlexBox ensures consistent, portable, and efficient AI operations.
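To illustrate the container workflow, the sketch below starts a GPU-enabled container through the Docker SDK for Python, with the NVIDIA Container Toolkit providing the GPU hook. The NGC image tag is illustrative, not a requirement of FlexBox.

```python
import docker

client = docker.from_env()

# The image tag is an example NGC PyTorch image; pin whatever your pipeline versions.
output = client.containers.run(
    image="nvcr.io/nvidia/pytorch:24.08-py3",
    command='python -c "import torch; print(torch.cuda.device_count())"',
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],  # expose all GPUs
    remove=True,
)
print(output.decode().strip())
```

The CLI equivalent is `docker run --gpus all <image> ...`, which is typically what a CI/CD job invokes directly.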
ForgeKit – Full Control & Compliance
ForgeKit is engineered for environments demanding maximum control, security, and compliance. With a minimal install—just the driver and CUDA—it’s perfect for air-gapped, regulated, or mission-critical deployments. ForgeKit is customizable by design, making it the go-to stack for enterprises prioritizing data sovereignty and infrastructure governance.
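Because ForgeKit installs only the driver and CUDA, post-install validation should not assume extra Python packages. The sketch below records the driver and toolkit versions using the standard library only, so the check itself adds nothing to an air-gapped host; the nvidia-smi query flags shown are standard but worth confirming against the installed driver.

```python
import shutil
import subprocess

def tool_output(cmd):
    """Run a command if it is on PATH and return its output, or a note if it is missing."""
    if shutil.which(cmd[0]) is None:
        return f"{cmd[0]}: not found"
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return result.stdout.strip() or result.stderr.strip()

if __name__ == "__main__":
    # Driver-level view: one line per GPU with name and driver version.
    print(tool_output(["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"]))
    # CUDA toolkit compiler release string.
    print(tool_output(["nvcc", "--version"]))
```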