GIGABYTE AORUS Partners with VALORANT Esports EMEA

Business
2026-03-24 23:00 Last Updated At: 23:15

  • GIGABYTE AORUS will support the VALORANT Champions Tour EMEA and Game Changers EMEA during the 2026 and 2027 seasons.
  • The hardware sponsorship features the AORUS PRIME 5 gaming desktop and AORUS MASTER 16 gaming laptop, enabling reliable production and smoother broadcasting.

TAIPEI, March 24, 2026 /PRNewswire/ -- GIGABYTE, the world's leading computer brand, today announced a partnership between its premium gaming brand AORUS and VALORANT Esports EMEA. The collaboration will support both VALORANT Champions Tour EMEA and Game Changers EMEA throughout the 2026 and 2027 seasons.

As part of the partnership, AORUS will provide its flagship systems, including the AORUS PRIME 5 gaming desktop and AORUS MASTER 16 gaming laptop, to power the operational backbone of esports production at the highest level of competition.

Designed for sustained performance across extended match days, all systems are built to ensure stable and consistent operation under demanding workloads. The AORUS MASTER 16 features an advanced WINDFORCE Infinity EX cooling design, supporting high thermal loads in a compact form factor, while the AORUS PRIME 5 desktop delivers comprehensive system cooling for long-term stability.

Building on AORUS' gaming and design philosophy, both the AORUS PRIME 5 and AORUS MASTER 16 deliver stable performance with low noise during operation. Equipped with this hardware, the VALORANT Esports EMEA team can deliver reliable, smooth production and broadcasting during tournaments.

Beyond gaming PC systems and laptops, AORUS offers PC components and gaming monitors for hardware enthusiasts and gamers. AORUS is committed to delivering innovative, reliable products and enhancing the gaming experience for all players.

** This press release is distributed by PR Newswire through automated distribution system, for which the client assumes full responsibility. **

Open-Sourcing to Empower, AI to Lead Medicine: "SurgMotion", the Best-in-class Surgical Video Foundation Model, Officially Launched

HONG KONG, March 25, 2026 /PRNewswire/ -- On 24 March, the Centre for Artificial Intelligence and Robotics (CAIR) of the Hong Kong Institute of Science & Innovation (HKISI), Chinese Academy of Sciences, unveiled "SurgMotion", a surgical video foundation model, at the Hong Kong Science and Technology Parks Shenzhen Branch. The launch marks a significant leap in surgical AI, from fragmented recognition toward generalized understanding, providing robust support for clinical treatment, surgical procedures, medical education, and post-operation review.

The press conference brought together renowned scholars, clinical experts, industry representatives, and multiple media outlets. Attendees included Prof. Hongbin LIU, Director and Professor of CAIR; Prof. Nassir Navab, Member of Academia Europaea, Fellow of IEEE, MICCAI, IAMBE, and AAIA, Professor at the Technical University of Munich (TUM) and Director of its Chair for Computer Aided Medical Procedures (CAMP); Dr. Wai-Sang POON, Honorary Consultant, Neuromedical Centre, HKU-SZH, Clinical Professor of Surgery, HKU, Professor of Surgery (fractional), CUHK-Shenzhen Faculty of Medicine, and Chairman, SZ-HK Specialist Training in Neurosurgery; Prof. Huai LIAO, Deputy Director of Pulmonary and Critical Care Medicine, Director of the Center for Pulmonary Diagnostics and Interventional Therapy, and Chief Physician and Professor, The First Affiliated Hospital of Sun Yat-sen University; Dr. Danny T.M. Chan, Clinical Associate Professor (Honorary) and Head of the Division of Neurosurgery, Department of Surgery, The Chinese University of Hong Kong (CUHK); Mr. Qiang XIE, Vice President, Wuhan United Imaging Intelligence Medical Technology Co., Ltd.; Mr. Yuanmeng WANG, Deputy Director of the Technology and Talent Department, Hetao Development Authority; and Mr. Hongqiang RONG, Associate Director, Business Development, Hong Kong Science and Technology Parks Corporation (HKSTP). Together they witnessed this landmark breakthrough in the field of AI surgery.

From Pixel Recognition to Motion Understanding: A Paradigm Shift Through Video-Native Architecture

"SurgMotion" is currently the industry's largest and most comprehensive general-purpose surgical video foundation model, trained on the SurgMotion-15M dataset. This dataset encompasses approximately 15 million frames, representing over 3,658 hours of real surgical video. Building on this massive volume of data, "SurgMotion" transcends the limitations of traditional pixel reconstruction by introducing a motion-guided latent-space prediction mechanism. This significantly enhances the model's ability to comprehend key semantic structures, including surgical instruments, anatomical features, and interactive actions, and lays the foundation for universal surgical intelligence across multiple centres, departments, and procedures.

"SurgMotion" supports 13 major human organ categories and six types of surgical understanding tasks, including workflow recognition, action understanding, depth estimation, polyp segmentation, triplet recognition, and skill evaluation. It has achieved state-of-the-art (SOTA) results across 17 internationally authoritative surgical AI benchmarks. Notably, it substantially outperforms existing methods across core tasks such as surgical workflow recognition, instrument interaction understanding, and fine-movement modelling, demonstrating exceptional generalization capability and precision.

Converging Cutting-Edge Insights to Chart a New Blueprint for Smart Healthcare

In his opening address, Prof. Hongbin LIU remarked that last year, CAIR released the "EchoCare" ultrasound foundation model and the multi-modal medical AI large model CARES 3.0, demonstrating a sustained commitment to research and development. This year, CAIR continues its momentum with the launch of the "SurgMotion" surgical video foundation model, steadily establishing itself as a leading institution in AI-driven healthcare in the Greater Bay Area. He emphasised that the Centre's research is consistently guided by the objective of clinical application, with the aim of empowering doctors, benefiting patients, and helping to build a healthier, more efficient healthcare ecosystem.

Prof. Nassir Navab, a key collaborator on the project, highly commended the partnership. He described his collaboration with CAIR as highly productive and enjoyable, noting the team's remarkable efficiency and rapid iteration capabilities. He expressed anticipation for deepening future cooperation to drive further technological breakthroughs.

Open-Sourcing the Model to Build a Foundation for General Surgical AI

During the model presentation, Prof. Dong YI, a researcher at CAIR, formally announced that "SurgMotion", a foundation model with a billion parameters, is fully open-sourced. He outlined the model's design philosophy: surgical videos often contain redundant segments and interfering noise, on which traditional self-supervised learning methods may waste computational power and model capacity learning low-level details. To address this, the team introduced three technical enhancements on top of the V-JEPA architecture: motion-guided latent-space prediction, feature diversity preservation, and model stability preservation. These innovations allow the model to focus more effectively on learning motion and mid-to-high-level semantic information from surgical videos, enabling a more efficient self-supervised training approach.
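As a rough illustration of the idea described above, and not CAIR's actual implementation, motion-guided latent-space prediction can be sketched as follows: rather than reconstructing pixels, a predictor is trained to match a target encoder's latents, with the loss re-weighted so that high-motion portions of the video contribute more. All array shapes, function names, and the frame-difference motion proxy below are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_weights(frames):
    """Crude motion proxy: mean absolute frame difference, normalized to sum to 1.

    High-motion segments (e.g. active instrument movement) get larger weights,
    so the latent-prediction loss focuses capacity on them rather than on
    static, redundant footage.
    """
    diff = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))  # one score per step
    return diff / (diff.sum() + 1e-8)

def motion_guided_latent_loss(pred_latents, target_latents, weights):
    """Squared error in latent space, weighted toward high-motion steps."""
    per_step = ((pred_latents - target_latents) ** 2).mean(axis=-1)
    return float((weights * per_step).sum())

# Toy "video": 9 frames of 8x8 grayscale; latents: 8 time steps x 16 dims.
frames = rng.random((9, 8, 8))
pred_latents = rng.random((8, 16))    # predictor output for masked regions
target_latents = rng.random((8, 16))  # target encoder output (stop-gradient)

w = motion_weights(frames)
loss = motion_guided_latent_loss(pred_latents, target_latents, w)
```

In a real V-JEPA-style setup the predictor and target encoder would be networks trained jointly (the target typically updated as an exponential moving average); the sketch only shows where the motion weighting enters the objective.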

Beyond technological innovation, the research team constructed the largest known surgical video pre-training dataset: SurgMotion-15M. This dataset aggregates 3,658 hours of surgical video from 50 sources across 13 anatomical regions, covering multiple specialized fields such as laparoscopy, open surgery, neurosurgery, ophthalmology, and otolaryngology. It provides unprecedented diversity to support the model.

Empowering Clinical Practice to Create a Win-Win for Doctors and Patients

SurgMotion's standardized analytical capabilities can effectively reduce the risks associated with complex surgeries, significantly enhance the standardization of clinical diagnosis and surgical procedures, and provide robust technical support for medical professionals at all levels. In the application case demonstration segment, Dr. Wai-Sang POON, Honorary Consultant at HKU-SZH and Clinical Professor of Surgery at The University of Hong Kong, first presented the model's validation in neurosurgery training. With 35 years of clinical experience, Dr. Poon noted that HKU-SZH, as a neurosurgery specialist training base, has long been working to overcome the standardization challenges of the traditional apprenticeship model in complex surgical education. In this validation, "SurgMotion" achieved 90% accuracy on multi-centre clinical data. On the JIGSAWS surgical skill assessment dataset, it achieved a minimum mean absolute error (MAE) of 2.649 and a Spearman correlation of 0.770 with expert ratings, far exceeding comparable models. With its precise motion analysis and objective evaluation capabilities, the system is poised to become a reliable teaching aid, assisting young surgeons in standardized surgical reviews and greatly advancing the digitization and standardization of specialist training.
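The two figures quoted above are standard evaluation metrics and straightforward to reproduce: MAE measures how far predicted skill scores deviate from expert scores on average, while Spearman correlation measures how well the predicted ranking of trainees matches the experts' ranking. The sketch below computes both for a handful of hypothetical scores; the numbers are illustrative only, not JIGSAWS data.

```python
import numpy as np

def mean_absolute_error(pred, truth):
    """Average absolute deviation between predicted and reference scores."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(truth))))

def spearman_corr(a, b):
    """Spearman rho: Pearson correlation of the ranks (assumes no tied values)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical expert ratings vs. model-predicted scores for five trainees.
expert = [12.0, 18.0, 9.0, 25.0, 15.0]
model = [14.0, 17.0, 10.0, 22.0, 16.0]

mae = mean_absolute_error(model, expert)  # 1.6: off by 1.6 points on average
rho = spearman_corr(model, expert)        # 1.0: identical ordering of trainees
```

Note that rho is 1.0 here even though the raw scores differ, because Spearman only compares orderings; this is why both metrics are reported together for skill assessment.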

Prof. Huai LIAO, Deputy Director of the Department of Respiratory and Critical Care Medicine at The First Affiliated Hospital, Sun Yat-sen University, then demonstrated the model's application in interventional pulmonology. He pointed out that interventional pulmonology is advancing toward deeper and more precise procedures, a progression that urgently requires powerful AI vision models to provide the necessary technological underpinning. "SurgMotion" demonstrated exceptional performance, achieving overall superiority in the critical tasks of image segmentation and depth estimation, with outstanding lesion contouring precision and minimal depth error. When tested with real clinical video data from his hospital, the model achieved approximately 85% accuracy in identifying respiratory interventional procedures. Such powerful perceptual capability, which truly understands the surgery, is set to profoundly empower bronchoscopic robotics and significantly enhance clinical precision and safety.

Focusing on Technological Deployment and Exploring Pathways for Industrial Translation

In the media Q&A session, Prof. Hongbin LIU, Dr. Wai-Sang POON, Prof. Huai LIAO, Dr. Danny T.M. Chan, and Prof. Dong YI collectively answered questions from the press, engaging in an in-depth discussion of the technical specifics, clinical application prospects, and pathways for the industrialization of "SurgMotion". The open-sourcing of "SurgMotion" by CAIR is set to accelerate the large-scale deployment of AI in surgery, continuously injecting momentum into medical technology innovation in the Guangdong-Hong Kong-Macao Greater Bay Area.

Established in 2019, the Centre for Artificial Intelligence and Robotics (CAIR) is one of the two centres under the Hong Kong Institute of Science & Innovation, the only directly affiliated research institute of the Chinese Academy of Sciences in Hong Kong. CAIR is dedicated to the integration and innovation of artificial intelligence and life sciences, conducting research in three main areas: Multimodal AI Large Models, Embodied Intelligent Robots, and Intelligent Sensing Technologies. CAIR is a key institution supported by Hong Kong's InnoHK initiative in the field of AI. It is among the few institutions globally that systematically carry out research and development of AI systems for medical and healthcare applications, as well as their technological transformation.

