ASUS Announces Advanced AI POD Design Built with NVIDIA at Computex 2025
Enterprise-optimized reference architectures for accelerated AI infrastructure solutions
- Ultra-dense AI POD: Liquid- or air-cooled racks with NVIDIA GB200/GB300 NVL72, plus NVIDIA Quantum InfiniBand or Spectrum-X Ethernet for high-efficiency AI scaling
- Flexible architectures: NVIDIA MGX and NVIDIA HGX designs with Intel Xeon 6 CPUs and NVIDIA ConnectX-8 SuperNICs for advanced AI and complex 3D workloads
- Complete ecosystem: Real-time agentic AI, NVIDIA Omniverse integration, and high-speed storage for seamless deployment
TAIPEI, Taiwan, May 19, 2025 — ASUS today announced at Computex 2025 that it is pioneering the next wave of intelligent infrastructure with the launch of the NVIDIA® Enterprise AI Factory validated design, featuring advanced ASUS AI POD designs with optimized reference architectures. These solutions are available as NVIDIA-Certified Systems across NVIDIA Grace Blackwell, HGX, and MGX platforms, supporting both air-cooled and liquid-cooled data centers. Engineered to accelerate agentic AI adoption at every scale, these innovations deliver unmatched scalability, performance, and thermal efficiency, making them the ultimate choice for enterprises seeking to deploy AI at unprecedented speed and scale.
NVIDIA Enterprise AI Factory with ASUS AI POD
The validated NVIDIA Enterprise AI Factory with ASUS AI POD design provides guidance for developing, deploying, and managing agentic AI, physical AI, and HPC workloads on the NVIDIA Blackwell platform on-premises. Designed for enterprise IT, it provides accelerated computing, networking, storage, and software to help deliver faster time-to-value AI factory deployments while mitigating deployment risks.
The reference architecture designs below help clients follow approved practices, serving as a knowledge repository and a standardized framework for diverse applications.
For massive-scale computing, the advanced ASUS AI POD, accelerated by NVIDIA GB200/GB300 NVL72 racks and incorporating NVIDIA Quantum InfiniBand or NVIDIA Spectrum-X Ethernet networking platforms, features liquid cooling to enable a non-blocking 576-GPU cluster across eight racks, or an air-cooled solution supporting a single rack with 72 GPUs. This ultra-dense, ultra-efficient architecture redefines performance and efficiency for AI reasoning workloads.
AI-ready racks: Scalable power for LLMs and immersive workloads
ASUS presents NVIDIA MGX-compliant rack designs with the ESC8000 series, featuring dual Intel® Xeon® 6 processors and the RTX PRO™ 6000 Blackwell Server Edition with the latest NVIDIA ConnectX-8 SuperNIC, supporting speeds of up to 800Gb/s, as well as other scalable configurations, delivering exceptional expandability and performance for state-of-the-art AI workloads. Integration with the NVIDIA AI Enterprise software platform provides highly scalable, full-stack server solutions that meet the demanding requirements of modern computing.
In addition, the NVIDIA HGX reference architecture optimized by ASUS delivers unmatched efficiency, thermal management, and GPU density for accelerated AI fine-tuning, LLM inference, and training. Built on the ASUS XA NB3I-E12 with NVIDIA HGX B300, or the ESC NB8-E11 with NVIDIA HGX B200, this centralized rack solution offers significant manufacturing capacity for liquid-cooled or air-cooled rack systems, ensuring timely delivery, reduced total cost of ownership (TCO), and consistent performance.
Engineered for the AI Factory, enabling next-gen agentic AI
Integrated with NVIDIA’s agentic AI showcase, ASUS infrastructure supports autonomous decision-making, real-time learning, and scalable AI agents for business applications across industries.
As a global leader in AI infrastructure solutions, ASUS provides complete data center excellence with both air- and liquid-cooled options, delivering unmatched performance, efficiency, and reliability. ASUS also delivers ultra-high-speed networking, cabling, and storage rack architecture designs with NVIDIA-certified storage, including the RS501A-E12-RS12U and the VS320D series, to ensure seamless scalability for AI/HPC applications. Additionally, advanced SLURM-based workload scheduling and NVIDIA UFM fabric management for NVIDIA Quantum InfiniBand networks optimize resource utilization, while the WEKA parallel file system and ASUS ProGuard SAN Storage provide high-speed, scalable data handling.
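As an illustration of the SLURM-based workload scheduling mentioned above, a multi-node GPU job on such a cluster is typically submitted as a batch script like the following minimal sketch. The partition name, node counts, GPU counts, and `train.py` entry point are illustrative assumptions, not ASUS- or NVIDIA-specific values:

```shell
#!/bin/bash
# Minimal SLURM batch-script sketch for a multi-node GPU training job.
# All resource values and names below are illustrative assumptions.
#SBATCH --job-name=llm-finetune
#SBATCH --partition=gpu          # hypothetical GPU partition name
#SBATCH --nodes=2                # number of GPU nodes (example)
#SBATCH --gres=gpu:8             # GPUs requested per node (example)
#SBATCH --ntasks-per-node=8      # one task per GPU
#SBATCH --time=04:00:00          # wall-clock limit

# Launch one training process per allocated GPU across the nodes.
srun python train.py --config configs/finetune.yaml
```

The scheduler matches the requested generic resources (`--gres=gpu:...`) against the GPUs each node advertises, which is how per-rack GPU density translates into schedulable capacity.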
ASUS also provides a comprehensive software platform and services, including ASUS Control Center (Data Center Edition) and ASUS Infrastructure Deployment Center (AIDC), ensuring seamless development, orchestration, and deployment of AI models. ASUS L11/L12-validated solutions empower enterprises to deploy AI at scale with confidence through world-class deployment and support. From design to deployment, ASUS is the trusted partner for next-generation AI Factory innovation.