Delivering trusted AI with total flexibility, from rack-scale AI factories to edge and enterprise deployment
TAIPEI, March 17, 2026 /PRNewswire/ -- ASUS today unveiled its fully liquid-cooled AI infrastructure at NVIDIA GTC 2026 (Booth# 421), delivering a comprehensive, end-to-end solution powered by the NVIDIA Vera Rubin platform. Under the theme Trusted AI, Total Flexibility, this customizable framework, spanning rack-scale AI factories, desktop AI supercomputing, edge AI, and enterprise AI solutions, enables enterprises and cloud providers to build high-performance, energy-efficient, large-scale AI clusters with dramatically reduced PUE and TCO.
As a provider of NVIDIA GB300 NVL72 and NVIDIA HGX B300 systems, ASUS builds its flagship offering, the ASUS AI POD, on the NVIDIA Vera Rubin platform: a liquid-cooled, rack-scale powerhouse designed for massive AI workloads. Through strategic partnerships with leading cooling and component providers, ASUS offers diverse cooling modalities, tailored thermal solutions, and redundancy to meet any enterprise requirement. Proven by global client successes, ASUS provides expert consultation, a broad portfolio of AI and storage solutions, seamless infrastructure deployment, application integration, and ongoing services, combining scalability and sustainability to drive business value and intelligence.
From infrastructure to implementation: The ASUS AI Factory in action
At the forefront is the flagship XA VR721-E3 built on NVIDIA Vera Rubin NVL72, a 100% liquid-cooled rack-scale system. It offers a TDP of up to 227kW (MaxP) or 187kW (MaxQ), delivers up to 10X higher performance per watt, and is purpose-built for trillion-parameter models, delivering massive AI performance for large-scale AI factories. Partnering with Vertiv, a global leader in critical digital infrastructure, Schneider Electric, and other leading providers, ASUS delivers a full-stack power and cooling infrastructure designed for zero-throttle performance, from standard deployments to advanced liquid cooling, with redundancy tailored to each customer's specific needs.
Addressing rigorous data-center demands, ASUS also introduces its latest server series built on NVIDIA HGX Rubin NVL8 systems, featuring eight NVIDIA Rubin GPUs connected via sixth-generation NVIDIA NVLink with integrated 800G bandwidth per GPU. To facilitate a seamless and cost-effective transition to liquid cooling, ASUS offers two distinct solutions: the XA NR1I-E12L, an innovative hybrid-cooled option; and the XA NR1I-E12LR, a 100% liquid-cooled system. The hybrid-cooled XA NR1I-E12L specifically combines direct-to-chip (D2C) liquid cooling for the NVIDIA HGX Rubin NVL8 baseboard with air cooling for the dual Intel® Xeon® 6 processors.
The portfolio is further strengthened by high-performance scalable servers to ensure a solution for every demanding AI workload: the XA NB3I-E12, built on NVIDIA HGX B300 systems; the ESC8000A-E13X, based on NVIDIA MGX and integrated with NVIDIA ConnectX-8 SuperNICs for extreme GPU-to-GPU connectivity; and the ESC8000A-E13P, accelerated by NVIDIA RTX PRO 4500 Blackwell Server Edition or NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, delivering breakthrough performance for demanding data processing, AI, video, and visual computing workloads in a power-efficient design.
The tangible impact of the complete ASUS AI Factory concept is already demonstrated through several successful customer deployments, where the ASUS ESC8000 series powered a production-line digital twin built on NVIDIA Omniverse libraries and integrated with NVIDIA's customizable multi-camera tracking workflow. This enabled remote simulation and significantly reduced deployment risks, with ASUS managing the entire process for a seamless, low-disruption rollout that maximizes value from day one.
To support these powerful systems and democratize AI development, ASUS has also established a robust data ecosystem by partnering with NVIDIA-Certified storage providers, including IBM, DDN, WEKA, and VAST Data, to deliver scalable, resilient solutions for memory-intensive AI. A full spectrum of storage solutions spans block storage (VS320D-RS12), JBOD (VS320D-RS12J), object storage (OJ340A-RS60), and software-defined systems, ensuring flexibility from edge to cloud and from enterprise applications to AI and HPC workloads.
Realizing physical AI: Full-stack edge AI supercomputing from development to deployment
As the domain expert in full-stack edge AI, ASUS establishes a complete ecosystem for physical AI, delivering the critical compute power required from initial development to final deployment. The journey begins at the developer's desk with the ASUS ExpertCenter Pro ET900N G3, a deskside supercomputer powered by the NVIDIA Grace Blackwell Ultra platform. Featuring NVIDIA NVLink-C2C interconnects and 784GB of coherent unified memory, it handles the heavy lifting of training massive models. Alongside it, the ultrasmall ASUS Ascent GX10 offers agile petaflop-scale performance powered by the NVIDIA Grace Blackwell Superchip, ideal for rapid model iteration and scalable edge setups.
This development prowess seamlessly transitions to the PE3000N, a ruggedized inference engine powered by NVIDIA Jetson Thor. Delivering a staggering 2,070 TFLOPS, the PE3000N provides the real-time compute needed for sensor fusion and autonomous navigation. Together, these systems form a unified workflow where open models such as NVIDIA Cosmos and vision AI libraries from Metropolis can effectively perceive, reason, and act in the physical world.
Secure and scalable agentic AI development
To further enhance these capabilities, the ASUS Ascent GX10 and ASUS ExpertCenter Pro ET900N G3 empower agentic AI development with NVIDIA NemoClaw. This integration establishes an agent-ready platform for developers to build safe, long-running autonomous agents locally. By leveraging isolated sandbox environments, governed access control, and private on-device inference, it ensures safe and scalable agent workflows for the most demanding enterprise AI applications.
Enterprise AI: ASUS AI Hub with real-time business intelligence
To accelerate enterprise AI, ASUS presents the ASUS AI Hub, a turnkey on-premises AI platform optimized with ESC8000-series servers and powered by open-source LLMs such as NVIDIA Nemotron and Gemma, enabling enterprises to build custom AI assistants, implement RAG-enhanced document intelligence, and maintain full data sovereignty for security and compliance.
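For readers unfamiliar with the retrieval-augmented generation (RAG) pattern mentioned above, a minimal sketch follows. The documents, embeddings, and prompt format are toy placeholders for illustration, not ASUS AI Hub internals; a real deployment would use an embedding model and a vector store.

```python
import math

# Toy corpus standing in for enterprise documents (placeholder data).
DOCS = {
    "leave-policy": "Employees accrue 1.5 vacation days per month.",
    "expense-policy": "Meals under $50 need no receipt.",
    "security-policy": "VPN is required for all remote access.",
}

# Stand-in embeddings: a real RAG pipeline computes these with an
# embedding model; here they are hand-made 3-d vectors for the sketch.
EMBEDDINGS = {
    "leave-policy": [0.9, 0.1, 0.0],
    "expense-policy": [0.1, 0.9, 0.1],
    "security-policy": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(EMBEDDINGS, key=lambda d: cosine(query_vec, EMBEDDINGS[d]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    """Compose an LLM prompt grounded in the retrieved document text."""
    context = "\n".join(DOCS[d] for d in retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A query embedded near the leave-policy vector retrieves that document,
# so the generated answer stays grounded in company data.
print(build_prompt("How many vacation days do I get?", [0.8, 0.2, 0.1]))
```

The key design point is that the LLM only sees retrieved context, which is what lets an on-premises deployment keep answers grounded in private documents without retraining the model.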
Proven internally across over 10,000 employees with peak loads exceeding 600 requests per hour, >80% OCR accuracy, and >30% efficiency gains, the platform features domain-specific modules for diverse applications, including the newly developed ASUS agentic internal business-intelligence platform. This allows senior leaders to instantly access critical insights on costs, sales, gross margins, factory operations, and other key metrics through simple natural-language Q&A, transforming complex data into immediate, actionable executive decision-making power.
ASUS and NVIDIA are also working together on NVIDIA NemoClaw, an open-source stack that simplifies running OpenClaw always-on assistants more safely with a single command. It installs the NVIDIA OpenShell runtime, a secure environment for running autonomous agents, along with open-source models like NVIDIA Nemotron.
Green computing and sustainability at the core
Sustainability is a foundational pillar of the ASUS design philosophy, with green computing innovations integrated across both hardware and software to minimize TCO and environmental impact. On the hardware level, ASUS servers feature Thermal Radar 2.0, which uses up to 56 sensors to intelligently optimize fan performance, cutting power consumption by up to 36% and saving approximately $29,000 annually in a 1,000-node cluster. This commitment extends to software with the ASUS Control Center (ACC) Data Center Edition, a unified management platform that enhances security and includes automated carbon emissions tracking, providing enterprises with the tools needed to achieve their critical ESG goals.
ASUS, the AI supercomputing domain expert, provides comprehensive solutions and services, from consultation and deployment to user training and seamless integration via OpenAI-compatible APIs. As enterprises navigate the AI era, ASUS flexibly offers a cost-effective, secure, powerful and sustainable pathway to innovation and intelligent management.
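To illustrate what OpenAI-compatible integration means in practice, here is a minimal client sketch. The endpoint URL and model name are placeholders, not documented ASUS values; the point is that any client speaking the standard chat-completions schema can target such a server.

```python
import json
import urllib.request

def build_chat_payload(model, user_message):
    """Request body in the OpenAI-compatible chat-completions format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(base_url, model, user_message):
    """POST the payload to an OpenAI-compatible endpoint, return parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_payload(model, user_message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder endpoint and model name; a real on-premises deployment
# supplies its own URL and the model it serves:
# reply = chat("http://ai-hub.local:8000", "nemotron", "Summarize Q3 margins.")
# print(reply["choices"][0]["message"]["content"])
```

Because the wire format matches the widely adopted chat-completions schema, existing tools and SDKs can usually be pointed at an on-premises server by changing only the base URL.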
AVAILABILITY & PRICING
ASUS servers are available worldwide. However, the availability of certain other ASUS products is subject to local regulatory requirements. For specific product availability and offerings in your region, please contact your local ASUS representative.
** This press release is distributed by PR Newswire through an automated distribution system, for which the client assumes full responsibility. **
ASUS Unveils Game-Changing Liquid-Cooled AI Infrastructure Powered by NVIDIA Vera Rubin Platform
