- Supermicro's NVIDIA Vera Rubin NVL72 and HGX Rubin NVL8 systems are built on the DCBBS liquid-cooling stack, targeting up to 10x throughput per watt and one-tenth the token cost compared to NVIDIA Blackwell solutions.
- Supermicro's 2U HGX Rubin NVL8 system is the most flexible platform, supporting NVIDIA Vera and next-generation x86 CPUs, scaling to 72 Rubin GPUs per rack, and offering a DCBBS Liquid-to-Air (L2A) sidecar CDU option for data centers without liquid cooling.
- Supermicro's new NVIDIA Vera CPU systems include a 2U server supporting up to 6 RTX PRO 4500 Blackwell Server Edition GPUs and a new AI storage system for context memory extension integrated with NVIDIA BlueField-4 DPU.
SAN JOSE, Calif., March 17, 2026 /PRNewswire/ -- Super Micro Computer, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud Computing, AI/ML, Storage, and 5G/Edge, today unveiled its upcoming system portfolio powered by the NVIDIA Vera Rubin platform. As data centers transform into AI factories producing intelligence at massive scale, agentic reasoning, long-context AI, and Mixture-of-Experts (MoE) workloads are driving demand for an entirely new class of compute and storage infrastructure. Supermicro's NVIDIA Vera Rubin NVL72, NVIDIA HGX Rubin NVL8, and NVIDIA Vera CPU systems are being designed and built with Supermicro's Data Center Building Block Solutions (DCBBS) advanced liquid-cooling technology stack to accelerate time-to-market for customers.
"We are entering a new era where every organization requires an AI factory to win in the marketplace, as the demand for inference workloads is reshaping what data center infrastructure must deliver," said Charles Liang, president and CEO of Supermicro. "Supermicro's DCBBS technology stack is being engineered to empower upcoming NVIDIA Vera Rubin NVL72, HGX Rubin NVL8, and Vera CPU systems to give our customers a fast, clear path to deploying next-generation AI factories, at scale. We are excited to provide an early look at these solutions as a testament to being first to market with the infrastructure that will power the next frontier of AI."
For more information, visit NVIDIA Vera Rubin | Supermicro.
Supermicro DCBBS for NVIDIA Vera Rubin and Rubin Platforms
Delivering AI factory performance at scale requires much more than just compute — it demands power, cooling, and networking infrastructure that performs seamlessly. Supermicro's modular DCBBS approach enables data center operators to deploy validated, pre-engineered rack solutions rather than custom-building infrastructure for each project — reducing time-to-online, minimizing integration risk, and lowering total cost of ownership across AI factory deployments of any scale.
Supermicro's DCBBS is engineered specifically to meet the evolving thermal, power, and networking demands needed to enable rapid and robust deployment of upcoming NVIDIA Vera Rubin NVL72, NVIDIA HGX Rubin, and NVIDIA Vera CPU infrastructure. To meet the needs of Vera Rubin platforms, which will be fully liquid-cooled from this generation forward, DCBBS includes a full suite of validated liquid-cooling infrastructure. This expansion includes in-rack and in-row components such as coolant distribution units (CDUs), manifolds, and liquid-to-air (L2A) sidecars. Also included are infrastructure solutions such as cooling towers and cabling design and implementation services — designed to integrate seamlessly with Supermicro's next-generation system portfolio.
Supermicro NVIDIA Vera Rubin NVL72 SuperCluster
Supermicro is engineering its NVIDIA Vera Rubin NVL72 with new DCBBS liquid-cooling components to fully support the power and thermal envelope at rack and cluster scale. This includes manufacturing optimized NVIDIA MGX racks, in-rack and in-row CDUs, rear-door heat exchangers (RDHx), and L2A sidecars to streamline production and deployment of the rack-scale AI supercomputer in volume. The Vera Rubin NVL72 operates as a single rack-scale accelerator, unifying six co-designed chips — Rubin GPU, Vera CPU, NVIDIA NVLink 6, NVIDIA ConnectX-9 SuperNIC, NVIDIA BlueField-4 DPU, and NVIDIA Spectrum-X Ethernet — to deliver up to 3.6 exaflops of inference, 75TB of fast memory, and 1.6 PB/s of HBM4 bandwidth, targeting up to 10x the throughput per watt and one-tenth the token cost compared to NVIDIA Blackwell.
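The "10x throughput per watt" and "one-tenth token cost" figures above are vendor targets relative to a Blackwell baseline. A minimal sketch of how those scaling factors apply to a baseline — with placeholder baseline numbers that are assumptions, not published benchmarks:

```python
# Illustrative back-of-envelope math for the targeted Rubin-vs-Blackwell
# factors quoted above (10x throughput per watt, 1/10 token cost).
# Baseline inputs below are made-up placeholders, not real benchmark data.

def projected_metrics(baseline_tokens_per_joule: float,
                      baseline_cost_per_mtok: float,
                      throughput_gain: float = 10.0,
                      cost_factor: float = 0.1) -> tuple[float, float]:
    """Scale a baseline efficiency and cost by the targeted factors."""
    return (baseline_tokens_per_joule * throughput_gain,
            baseline_cost_per_mtok * cost_factor)

# Example with hypothetical baseline values:
eff, cost = projected_metrics(baseline_tokens_per_joule=50.0,
                              baseline_cost_per_mtok=2.00)
```

With a hypothetical baseline of 50 tokens/joule and $2.00 per million tokens, the targeted factors would project to 500 tokens/joule and $0.20 per million tokens.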
NVIDIA HGX Rubin NVL8 System
The 2U HGX Rubin NVL8 system is the densest and most flexible HGX platform — and the first HGX platform to offer a choice of CPUs, including NVIDIA Vera CPUs alongside next-generation AMD and Intel x86 processors. Built on the NVIDIA MGX rack architecture with Supermicro's blind-mate busbar and manifold for tool-free rack integration, it gives customers the freedom to pair eight Rubin GPUs with the CPU platform that best fits their workload and software stack.
The design supports nine HGX Rubin NVL8 systems per rack — up to 72 Rubin GPUs total — for large-scale AI training, inference, and accelerated HPC. DCBBS provides in-rack CDUs, in-row CDUs, RDHx, and an optional liquid-to-air (L2A) sidecar for customers deploying in liquid-cooled or air-cooled data center environments.
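The rack-level arithmetic above can be sketched directly — nine 2U NVL8 systems, each carrying eight Rubin GPUs (the rack-unit tally deliberately excludes switches, CDUs, and power shelves, which the release does not enumerate):

```python
# Rack-level math for the HGX Rubin NVL8 configuration described above.
SYSTEMS_PER_RACK = 9   # 2U HGX Rubin NVL8 systems per rack
GPUS_PER_SYSTEM = 8    # eight Rubin GPUs per NVL8 system

gpus_per_rack = SYSTEMS_PER_RACK * GPUS_PER_SYSTEM   # 72 GPUs total
rack_units_for_compute = SYSTEMS_PER_RACK * 2        # 18U of compute
```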
NVIDIA Vera CPU System with RTX PRO
Supermicro's Vera CPU system is being engineered as a versatile AI compute node for organizations targeting next-generation agentic AI deployments. The system features dual NVIDIA Vera CPUs supporting up to 6 RTX PRO 4500 Blackwell Server Edition GPUs in a compact 2U chassis — delivering the compute density and energy efficiency that enterprise AI inference, agentic workloads, and visualization demand, while bringing accelerated computing to all enterprise workloads. It combines a high-bandwidth LPDDR5X memory subsystem with PCIe GPU acceleration in a space-efficient footprint.
NVIDIA BlueField-4 STX Context Memory Storage Platform
Supermicro's upcoming Context Memory Storage Platform (CMX) introduces a new class of AI-native storage for context memory — architected as an intelligent pod-level context memory storage tier that extends GPU KV cache capacity and serves long-context inference data at the throughput that Vera Rubin NVL72 super pod clusters demand. Powered by the NVIDIA BlueField-4 processor, NVIDIA Vera CPUs, NVIDIA ConnectX-9 SuperNICs, NVIDIA Spectrum-X Ethernet, NVIDIA DOCA, and NVIDIA Dynamo, the system provides the high-bandwidth, low-latency fabric and intelligent data path offload that large-scale AI inference pipelines and RAG workloads require.
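The idea of extending GPU KV-cache capacity with a pod-level storage tier can be illustrated with a toy model. This is a conceptual sketch only — not Supermicro's or NVIDIA's implementation; the class and its behavior are invented for illustration:

```python
# Toy model of a tiered KV cache: hot blocks stay in (simulated) GPU
# memory, the coldest blocks spill to a context-memory storage tier,
# and are fetched back on demand. Purely illustrative.
from collections import OrderedDict

class TieredKVCache:
    """LRU-style GPU tier backed by an overflow storage tier."""

    def __init__(self, gpu_capacity_blocks: int):
        self.gpu = OrderedDict()      # block_id -> KV data (here: bytes)
        self.storage = {}             # overflow tier (e.g. networked flash)
        self.capacity = gpu_capacity_blocks

    def put(self, block_id, kv_block):
        self.gpu[block_id] = kv_block
        self.gpu.move_to_end(block_id)
        while len(self.gpu) > self.capacity:
            # Evict the least-recently-used block to the storage tier.
            cold_id, cold_block = self.gpu.popitem(last=False)
            self.storage[cold_id] = cold_block

    def get(self, block_id):
        if block_id in self.gpu:                  # hit in GPU memory
            self.gpu.move_to_end(block_id)
            return self.gpu[block_id]
        kv_block = self.storage.pop(block_id)     # fetch back over the fabric
        self.put(block_id, kv_block)              # re-admit as hot
        return kv_block
```

For example, with a two-block GPU tier, inserting blocks `a`, `b`, `c` spills `a` to storage; reading `a` pulls it back and spills `b` instead — the same hot/cold motion a real context-memory tier performs at far larger scale.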
Supermicro NVIDIA Blackwell Solutions — Available Now
With next-generation systems in rapid development, Supermicro's current portfolio of NVIDIA Blackwell-based systems is in full production and available for immediate deployment through Supermicro's US and global manufacturing capacity, enabling customers to build and scale production AI infrastructure today. Supermicro is investing across both its current Blackwell lineup and its next-generation systems to ensure customers have the right platform at every stage of this transformation.
Visit Supermicro at GTC San Jose 2026
Supermicro will be unveiling early previews of its Vera Rubin platform systems alongside its current production Blackwell portfolio. Supermicro experts will be available to discuss current procurement options, roadmap planning, and deployment timelines for both near-term and next-generation AI infrastructure in Supermicro booth #1113.
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, enabling next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Asia, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
** This press release is distributed by PR Newswire through automated distribution system, for which the client assumes full responsibility. **
Supermicro Reveals DCBBS® with New NVIDIA Vera Rubin NVL72, HGX Rubin NVL8, and Vera CPU Systems, Designed to Accelerate Customer Time-to-Market
NINGDE, China, May 7, 2026 /PRNewswire/ -- On April 29, Contemporary Amperex Intelligent Technology (Shanghai) Limited (CAIT), CATL's skateboard chassis arm, entered into a strategic partnership with Turkish automotive brand Togg to jointly develop the chassis platform for Togg's new B-segment vehicle family, marking the first overseas passenger vehicle project for the platform.
Under the agreement, CAIT will contribute its Bedrock Chassis technology and engineering expertise, while working closely with Togg to co-develop the platform for three models in Togg's new B-segment vehicle family. Developed in line with Togg's product strategy, user expectations and mobility ecosystem, the platform will support next-generation electric vehicles for the Turkish and European markets, with Togg playing a defining role in shaping the user experience, product requirements and digital architecture. The first model developed under the partnership is expected to enter mass production in 2027.
Battery-centric chassis architecture
The Bedrock Chassis is an integrated intelligent chassis built around a "battery-centric" architecture. It combines core chassis components including the battery, electric drive system, thermal management system and chassis domain controller into a single platform. This integration allows the chassis to manage both vehicle energy and motion control, effectively acting as a mobile energy carrier for the vehicle.
Robin Zeng, Chairman and CEO of CATL, said, "This collaboration represents another important milestone in the global expansion of the CATL Bedrock Chassis following its mass production rollout in the Chinese market. It will also serve as a benchmark project in the field of integrated intelligent chassis, strengthening our global partnerships, accelerating electrification and supporting the transition to low-carbon mobility in emerging new energy markets."
Commenting on the partnership, Togg Chairman Fuat Tosyalı said: "We see mobility not merely as a product category, but as a holistic matter of technology and ecosystem. In this direction, we are taking the partnerships we establish beyond conventional supplier relationships and turning them into strategic partnerships that create shared value and build the future together. Rather than adopting a ready-made solution, we are becoming part of the entire development process, responding more effectively to user needs while also contributing to the development of this ecosystem in our country. In the period ahead, through such value-creating partnerships, we will further enrich the Togg ecosystem and the experience we offer our users by developing new solutions across different segments."
Localised model for global markets
The Bedrock Chassis has been developed for global deployment through a "1+1+1" localisation model. This model combines one chassis technology platform with one industrial supply chain pathway and the localised operation of one domestic automotive brand. The aim is to allow electric vehicles to be designed and produced in ways that reflect the needs of local markets while using a common technological foundation.
The partnership with Togg is expected to apply this approach in Türkiye, supporting the development of vehicles tailored to regional consumer preferences while strengthening the local electric vehicle ecosystem.
Expanding international partnerships
In 2024, the Bedrock Chassis achieved mass production in the Chinese market, marking the world's first deployment of an integrated intelligent chassis offered as a standalone product to passenger vehicle brands.
CAIT is continuing to expand cooperation around the Bedrock Chassis in several regions, including Europe and Southeast Asia. The platform is designed to help emerging automotive markets build competitive electric vehicle industries more efficiently, while supporting the global shift towards low-emission mobility.
CATL Subsidiary CAIT Partners up with Togg on Bedrock Chassis