Transforming AI with ASUS Solutions Powered by the NVIDIA Blackwell Platform
Pioneering AI supercomputing at the 2024 OCP Global Summit
- AI innovations powered by NVIDIA Blackwell: ESC NM2-E1 with NVIDIA® GB200 NVL2 and ESC AI POD with NVIDIA GB200 NVL72 both on show
- AI optimization with NVIDIA MGX: ASUS highlighting 8000A-E13P, a 4U server supporting up to eight NVIDIA® H200 Tensor Core GPUs, optimized for AI
- Session on modular architecture: Join the ASUS session on October 16, 10:20-10:35, on the Data Center Modular Hardware System (DC-MHS)
Taipei, Taiwan, October 16, 2024 — ASUS today announced it will participate in the 2024 OCP Global Summit, taking place from October 15-17 at the San Jose Convention Center, at booth #A31. With nearly 30 years of experience in the server industry, ASUS has a rich history of collaboration with cloud-service providers, beginning in 2008 with the design of server motherboards for leading cloud data centers. Building on this strong foundation in motherboard expertise, ASUS has developed a comprehensive lineup of AI servers that will be showcased at the summit, underscoring the company’s commitment to advancing AI and data center technologies.
Advancing AI with NVIDIA Blackwell
Driving AI innovation further, ASUS will showcase its solutions built with the NVIDIA® Blackwell platform, designed to meet generative AI use cases and usher in a new era of data center performance.
First, ASUS ESC AI POD — a heavyweight contender and star product — is a groundbreaking rack solution, featuring the NVIDIA GB200 NVL72 system, which combines 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs within a rack-scale NVIDIA NVLink domain to function as a single massive GPU. It offers both liquid-to-air and liquid-to-liquid cooling options for optimal efficiency, from an individual ESC AI POD to the entire data center. ESC AI POD is engineered to accelerate large language model (LLM) inference, providing real-time performance for resource-intensive applications such as trillion-parameter language models. Naturally, this next-generation rack offering is supported by a raft of ASUS-exclusive software solutions, system-verification methodologies and remote-deployment strategies, accelerating AI development and enabling both scale-up and scale-out for data centers of all sizes.
Second, ASUS ESC NM2-E1, a 2U server with the NVIDIA GB200 NVL2 platform, is purpose-built for generative AI and HPC, delivering high-bandwidth communication. Engineered to optimize the full NVIDIA software stack, it provides an exceptional platform for AI developers and researchers looking to drive innovation further.
Optimizing AI performance with NVIDIA MGX architecture and NVIDIA H200 Tensor Core GPUs
ASUS will also be showcasing ASUS 8000A-E13P, a 4U server supporting up to eight NVIDIA H200 Tensor Core GPUs. Fully compliant with the NVIDIA MGX architecture, it is designed for rapid deployment in large-scale enterprise AI infrastructures. Configured with an NVIDIA 2-8-5 topology (CPU-GPU-DPU/NIC) and four high-bandwidth ConnectX-7 or BlueField-3 SuperNICs, it enhances east-west traffic and overall system performance.
Join the ASUS 2024 OCP session on Data Center Modular Server Architecture
Don’t miss the 15-minute ASUS session at the 2024 OCP Global Summit on October 16 from 10:20 to 10:35, where ASUS will explore the future of modular server architecture with an introduction to the Data Center Modular Hardware System (DC-MHS), developed by OCP and leveraging the NVIDIA MGX modular design. Discover how this innovative architecture is transforming server design, offering unprecedented flexibility, simplified maintenance and efficient scalability for modern data centers.