GTC 2026: MSI Drives End-to-End AI Implementation by Bridging Cloud Computing and Autonomous Edge Inspection

2026-03-18 00:00 Last Updated At:00:25

SAN JOSE, Calif., March 18, 2026 /PRNewswire/ -- MSI, a global leader in high-performance server solutions and Edge AI, unveils its comprehensive AI ecosystem at NVIDIA GTC 2026. In addition to launching servers based on the NVIDIA MGX architecture and powered by NVIDIA Blackwell GPUs, MSI introduces the XpertStation WS300, built on the NVIDIA DGX Station architecture. MSI also showcases the OmniGuard smart patrol vehicle, integrated with the NVIDIA Alpamayo-R1 Vision-Language-Action (VLA) inference model, demonstrating a complete workflow from AI infrastructure and digital twin validation to real-world deployment.

High-Performance Foundations: MSI servers based on NVIDIA MGX

Based on the modular design of NVIDIA MGX architecture, MSI has engineered a robust portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. These platforms are optimized to accelerate a wide range of AI workloads, from data center deployments to edge applications. This flexible architecture empowers enterprises to tailor CPU, memory, and networking configurations to meet specific performance, scalability, and workload demands.

  • CG480-S5063: A flagship 4U server based on NVIDIA MGX, featuring dual Intel® Xeon® 6 processors, eight dual-width PCIe GPU slots, and 32 DDR5 DIMM slots for exceptional memory scalability. It supports up to 20 PCIe Gen5 E1.S NVMe drives for ultra-high data throughput.

  • CG481-S6053: Powered by dual AMD EPYC™ 9005 Series processors to maximize core density and I/O bandwidth. It integrates eight PCIe 5.0 GPU slots, 24 DDR5 DIMM slots and eight 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs, designed for compute-intensive AI clusters and HPC simulations.

  • CG681-S6093: The 6U liquid-cooled AI platform, designed with a dual-socket architecture and equipped with eight dual-width PCIe GPUs, integrates eight 400G Ethernet ports via NVIDIA ConnectX-8 SuperNICs to deliver exceptional performance and efficiency for high-density AI data center deployments.

Advanced Thermal Engineering and Future-Ready AI Innovation

MSI's platforms based on NVIDIA MGX integrate liquid cooling and optimized air-cooling designs to sustain peak performance under the most rigorous AI workloads. Leveraging exceptional GPU throughput and high-speed connectivity, MSI is demonstrating AI-powered intelligent video search and automated summarization. These capabilities optimize multi-camera analytics for smart cities and industrial inspection, empowering enterprises to rapidly extract actionable intelligence from massive datasets to enhance decision-making efficiency.

To reinforce its commitment to next-generation AI computing, MSI is also showcasing an NVIDIA Vera CPU option, highlighting its ongoing innovation in data-driven, AI-powered infrastructure solutions.

MSI XpertStation WS300: Data Center-Class AI Power at the Desk

For researchers requiring massive compute power in a deskside form factor, MSI announced the XpertStation WS300, available starting March 16.

  • Core Architecture: Built on the NVIDIA DGX Station architecture, the WS300 is powered by the NVIDIA Grace Blackwell Ultra Desktop Superchip and features 748GB of coherent memory.

  • Connectivity and Power: Equipped with dual 400GbE ports via NVIDIA ConnectX-8 and a 1600W ATX power supply, the system delivers unprecedented AI acceleration directly at the desktop in a plug-and-play supercomputing package.

"The growth of Generative AI and LLMs has driven extreme demand for underlying infrastructure," said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. "Through our platforms based on NVIDIA MGX and XpertStation WS300, MSI is extending data center-level momentum to the developer's desk, accelerating innovation across the data center, the edge, and the desktop."

Real-World Impact: Reducing Deployment Risk with Digital Twins & EdgeXpert

MSI utilizes NVIDIA Omniverse libraries and the NVIDIA Isaac Sim open simulation framework to create high-precision virtual environments that eliminate uncertainties in physical deployments. Through Sim2Real (Simulation to Reality) technology, the OmniGuard patrol vehicle underwent rigorous virtual testing of patrol routes and pedestrian interactions to ensure functional validation before real-world implementation.

  • Core Intelligence: NVIDIA Alpamayo-R1 Autonomous Decision System

OmniGuard deeply integrates the Alpamayo-R1 model, granting the vehicle "thought-and-action" perception to master complex long-tail scenarios. Testing confirms a 12% increase in navigation planning accuracy and a 35% reduction in off-road rates.

  • Scalable Edge AI Implementation:

Models are deployed and monitored via the MSI EdgeXpert platform. The vehicle is powered by the NVIDIA Jetson platform for real-time edge processing, ensuring reliable performance in dynamic environments. This architecture can be rapidly extended to smart factories, logistics parks, and public infrastructure.

David Wu, General Manager of Customized Product Solutions at MSI, commented: "The value of AI lies in solving real-world pain points. Powered by NVIDIA accelerated computing, Digital Twins, and Alpamayo inference technology, MSI has established a seamless workflow from virtual validation to physical deployment. This not only shortens development cycles but also demonstrates MSI's strength in driving the comprehensive implementation of Edge AI applications."

** This press release is distributed by PR Newswire through an automated distribution system, for which the client assumes full responsibility. **


Empowering Enterprises to Deploy Megawatt-Scale AI Data Centers based on NVIDIA MGX

News Summary

LITEON Technology will showcase next-gen AI data center solutions at NVIDIA GTC 2026, including solutions for the NVIDIA Vera Rubin platform and 800 VDC power rack architecture, 110 kW Power Shelf, liquid cooling systems and racks based on NVIDIA MGX, and a 2.1 MW in-row CDU. LITEON's 800 VDC solution integrates high-efficiency power modules, DC power distribution, and system-level energy management to meet the dynamic load demands of AI servers, while enhancing liquid-cooling and thermal-management flexibility and improving overall operational efficiency. LITEON will continue to expand its collaboration with NVIDIA to advance high-voltage DC power architecture, power conversion, mechanical design, and liquid cooling integration to meet the growing demands for energy efficiency, power density, and resilience in the AI era.

SAN JOSE, Calif., March 18, 2026 /PRNewswire/ -- LITEON Technology (2301.tw) participates in NVIDIA GTC 2026 from March 16 to 19, unveiling a comprehensive portfolio of next-generation AI data center solutions. The showcase features solutions designed for the NVIDIA Vera Rubin platform, including high-efficiency power systems based on next-generation architectures, advanced rack systems, and liquid cooling technologies. Key highlights include the 800 VDC Power Rack, 110 kW Power Shelf, liquid-cooling systems and racks based on NVIDIA MGX, a 2.1 MW in-row CDU, and power bricks. These offerings are designed to accelerate customers' deployment of megawatt-scale AI data centers and address the increasing demands on compute performance and energy management in the AI era.

As AI workloads drive rapid increases in rack power density, data-center power architectures are undergoing structural transformation. Traditional power-shelf architectures face efficiency and current-handling limitations when supporting megawatt-scale AI clusters. In an 800 VDC environment, power architectures are gradually shifting toward a power rack configuration. By increasing system voltage and reducing current load, this approach fundamentally improves power distribution efficiency and overcomes limitations in power density. This evolution represents not only a power-system upgrade, but also a re-architecture of data-center infrastructure.
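The efficiency argument above follows directly from Ohm's law: for a fixed load, raising the distribution voltage lowers the current, and conduction losses fall with the square of that current. The sketch below illustrates the effect with a 1 MW rack fed at a legacy 54 V DC level versus 800 VDC; the voltages, load, and path resistance are illustrative assumptions for this example, not LITEON or NVIDIA figures.

```python
# Back-of-the-envelope: feed current and resistive conduction loss for a
# 1 MW rack at 54 V DC versus 800 V DC distribution.
# All numbers are illustrative assumptions, not vendor specifications.

def feed_current(power_w: float, voltage_v: float) -> float:
    """Current drawn at a given DC distribution voltage (I = P / V)."""
    return power_w / voltage_v

def resistive_loss(current_a: float, resistance_ohm: float) -> float:
    """Conduction loss in the distribution path (P_loss = I^2 * R)."""
    return current_a ** 2 * resistance_ohm

POWER_W = 1_000_000   # 1 MW rack load (assumed)
R_PATH_OHM = 0.0001   # 0.1 milliohm distribution-path resistance (assumed)

i_54 = feed_current(POWER_W, 54)     # ~18,519 A
i_800 = feed_current(POWER_W, 800)   # 1,250 A

loss_54 = resistive_loss(i_54, R_PATH_OHM)    # ~34.3 kW
loss_800 = resistive_loss(i_800, R_PATH_OHM)  # ~0.16 kW

# Raising the voltage ~15x cuts current ~15x, so I^2*R loss drops ~220x.
print(f"54 V:  {i_54:,.0f} A, {loss_54 / 1000:.1f} kW conduction loss")
print(f"800 V: {i_800:,.0f} A, {loss_800 / 1000:.2f} kW conduction loss")
```

The quadratic scaling is why the shift to 800 VDC is described as a structural change rather than an incremental tweak: the same copper cross-section carries far more power with far less heat.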

LITEON's 800 VDC solution integrates high-efficiency power modules, DC distribution designs, and system-level energy-management capabilities. It supports the stringent dynamic-load requirements of AI servers and accelerated computing platforms while enabling greater flexibility for liquid-cooling and thermal-management systems. The high-voltage architecture simplifies the power hierarchy inside data centers and enhances deployment speed and long-term operational efficiency.

"AI data centers are entering a critical phase where power and thermal systems must be re-architected together," said John Chang, General Manager of LITEON Cloud Infrastructure Platform and Solution SBG. "Power systems are no longer merely supporting functions; they are becoming one of the core elements of the data center. Through high-voltage DC designs, we help customers achieve an optimal balance among power density, energy efficiency, and infrastructure costs."

As AI computing platforms rapidly expand, LITEON will continue deepening collaboration with ecosystem partners and advancing power-optimization technologies for high-performance GPUs and AI-accelerated systems. Through close collaboration with NVIDIA in high-performance computing (HPC) and AI infrastructure, LITEON is accelerating its development of next-generation integrated data-center solutions, covering 800 VDC, high-efficiency power conversion, mechanical design, and liquid-cooling integration. These efforts address the structural requirements for high energy efficiency, high power density, and operational resilience in the AI era.

For more information, please visit: https://www.liteon.com/zh-tw/solutions/green-data-center.

LITEON Technology at 2026 GTC:
Date: March 16–19, 2026
Venue: San Jose McEnery Convention Center, USA
Booth: 635


LITEON Showcases Next-Generation 800 VDC and NVIDIA Vera Rubin Platform Solutions at NVIDIA GTC 2026
