Notice • Mar 17
Super Micro Computer, Inc. Unveils DCBBS With New NVIDIA Vera Rubin NVL72, HGX Rubin NVL8, And Vera CPU Systems
Super Micro Computer, Inc. unveiled its upcoming system portfolio powered by the NVIDIA Vera Rubin platform. Supermicro's NVIDIA Vera Rubin NVL72 and HGX Rubin NVL8 systems are built on the Data Center Building Block Solutions (DCBBS) liquid-cooling stack, targeting up to 10x throughput per watt and one-tenth the token cost compared to NVIDIA Blackwell solutions.

Supermicro's 2U HGX Rubin NVL8 system is its most flexible platform, supporting NVIDIA Vera and next-generation x86 CPUs, scaling to 72 Rubin GPUs per rack, and offering a DCBBS Liquid-to-Air (L2A) Sidecar CDU option for data centers without facility liquid cooling. Supermicro's new NVIDIA Vera CPU systems include a 2U server supporting up to 6 RTX PRO 4500 Blackwell Server Edition GPUs and a new AI storage system for context memory extension, integrated with the NVIDIA BlueField-4 DPU.

Supermicro's NVIDIA Vera Rubin NVL72, NVIDIA HGX Rubin NVL8, and NVIDIA Vera CPU systems are being designed and built with Supermicro's DCBBS advanced liquid-cooling technology stack to accelerate time-to-market for customers. The modular DCBBS approach enables data center operators to deploy validated, pre-engineered rack solutions rather than custom-building infrastructure for each project, reducing time-to-online, minimizing integration risk, and lowering total cost of ownership across AI factory deployments of any scale.

DCBBS is engineered specifically to meet the evolving thermal, power, and networking demands of upcoming NVIDIA Vera Rubin NVL72, NVIDIA HGX Rubin, and NVIDIA Vera CPU infrastructure. Because Vera Rubin platforms will be fully liquid-cooled from this generation forward, DCBBS includes a full suite of validated liquid-cooling infrastructure, spanning in-rack and in-row components such as coolant distribution units (CDUs), manifolds, and liquid-to-air sidecars.
Also included are infrastructure solutions such as cooling towers, along with cabling design and implementation services, all designed to integrate seamlessly with Supermicro's next-generation system portfolio.

Supermicro is engineering its NVIDIA Vera Rubin NVL72 with new DCBBS liquid-cooling components to fully support the power and thermal envelope at rack and cluster scale. This includes manufacturing optimized NVIDIA MGX racks, in-rack or in-row CDUs, rear-door heat exchangers (RDHx), and L2A sidecars to streamline production and deployment of the rack-scale AI supercomputer at scale. The Vera Rubin NVL72 operates as a single rack-scale accelerator, unifying six co-designed chips (Rubin GPU, Vera CPU, NVIDIA NVLink 6, NVIDIA ConnectX-9 SuperNIC, NVIDIA BlueField-4 DPU, and NVIDIA Spectrum-X Ethernet) to deliver up to 3.6 exaflops of inference, 75 TB of fast memory, and 1.6 PB/s of HBM4 bandwidth, targeting up to 10x the throughput per watt and one-tenth the token cost compared to NVIDIA Blackwell.

The 2U HGX Rubin NVL8 system provides the densest and most flexible HGX platform, and the first HGX platform to offer greater flexibility in CPU selection, including NVIDIA Vera CPUs alongside next-generation AMD and Intel x86 processors. Built on the NVIDIA MGX rack architecture with Supermicro's blind-mate busbar and manifold for tool-free rack integration, it gives customers the freedom to pair eight Rubin GPUs with the CPU platform that best fits their workload and software stack. The design supports 9 HGX Rubin NVL8 systems per rack, up to 72 Rubin GPUs total, for large-scale AI training, inference, and accelerated HPC. DCBBS provides in-rack CDUs, in-row CDUs, RDHx, and an optional Liquid-to-Air (L2A) sidecar for customers deploying in liquid-cooled or air-cooled data center environments.

Supermicro's Vera CPU system is being engineered as a versatile AI compute node for organizations targeting next-generation agentic AI deployments.
The system features dual NVIDIA Vera CPUs supporting up to 6 RTX PRO 4500 Blackwell Server Edition GPUs in a compact 2U chassis, delivering the compute density and energy efficiency required for enterprise AI inference, agentic workloads, visualization, and bringing accelerated computing to all enterprise workloads. It pairs a high-bandwidth LPDDR5X memory subsystem with PCIe GPU acceleration in a space-efficient footprint.

Supermicro's upcoming Context Memory Storage Platform (CMX) introduces a new class of AI-native storage for context memory, architected as an intelligent pod-level context memory storage tier that extends GPU KV cache capacity and serves long-context inference data at the throughput that Vera Rubin NVL72 super pod clusters demand. Powered by the NVIDIA BlueField-4 DPU, NVIDIA Vera CPUs, NVIDIA ConnectX-9 SuperNICs, NVIDIA Spectrum-X Ethernet, NVIDIA DOCA, and NVIDIA Dynamo, the system provides the high-bandwidth, low-latency fabric and intelligent data-path offload that large-scale AI inference pipelines and RAG workloads require.

Supermicro's current portfolio of NVIDIA Blackwell-based systems is in full production and available for immediate deployment through Supermicro's US and global manufacturing capacity, enabling customers to build and scale production AI infrastructure.
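The pod-level context-memory tier described above can be illustrated with a simplified sketch: a two-tier KV cache that keeps hot entries in scarce fast memory and spills the rest to a larger external store, recalling evicted context rather than recomputing it. This is a conceptual illustration only; the class and method names are hypothetical and do not represent Supermicro or NVIDIA APIs.

```python
from collections import OrderedDict

class TieredKVCache:
    """Illustrative two-tier KV cache: a small, fast 'GPU' tier backed by a
    large external 'context memory' tier (here a plain dict stands in for a
    pod-level storage system of the kind CMX describes)."""

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity   # entries the fast tier can hold
        self.gpu_tier = OrderedDict()      # LRU-ordered hot KV blocks
        self.context_store = {}            # stand-in for pod-level storage

    def put(self, seq_id, kv_block):
        # Insert into the fast tier; when over capacity, evict the
        # least-recently-used block to the context store instead of
        # discarding it, so the context survives eviction.
        self.gpu_tier[seq_id] = kv_block
        self.gpu_tier.move_to_end(seq_id)
        while len(self.gpu_tier) > self.gpu_capacity:
            evicted_id, evicted_block = self.gpu_tier.popitem(last=False)
            self.context_store[evicted_id] = evicted_block

    def get(self, seq_id):
        # Fast-tier hit: refresh recency and return.
        if seq_id in self.gpu_tier:
            self.gpu_tier.move_to_end(seq_id)
            return self.gpu_tier[seq_id]
        # Miss: recall the block from the context store and promote it
        # back into the fast tier (which may evict another block).
        if seq_id in self.context_store:
            block = self.context_store.pop(seq_id)
            self.put(seq_id, block)
            return block
        return None                        # cold miss: must be recomputed
```

In a real deployment the external tier would sit across an RDMA-capable fabric with DPU-offloaded data paths and operate on paged KV blocks; the sketch only conveys the capacity-extension idea that evicted context is recalled rather than recomputed.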