
WEKA Debuts New Solution Blueprint to Simplify AI Inferencing at Scale


WARRP Reference Architecture Provides Comprehensive Modular Solution That Accelerates the Development of RAG-based Inferencing Environments

ATLANTA and CAMPBELL, Calif., Nov. 20, 2024 /PRNewswire/ -- From Supercomputing 2024: WEKA, the AI-native data platform company, debuted a new reference architecture solution to simplify and streamline the development and implementation of enterprise AI inferencing environments. The WEKA AI RAG Reference Platform (WARRP) provides generative AI (GenAI) developers and cloud architects with a design blueprint for the development of a robust inferencing infrastructure framework that incorporates retrieval-augmented generation (RAG), a technique used in the AI inference process to enable large language models (LLMs) to gather new data from external sources.

The Criticality of RAG in Building Safe, Reliable AI Operations
According to a recent study of global AI trends conducted by S&P Global Market Intelligence, GenAI has rapidly emerged as the most highly adopted AI modality, eclipsing all other AI applications in the enterprise.[1]

A primary challenge enterprises face when deploying LLMs is ensuring they can effectively retrieve and contextualize new data across multiple environments and from external sources to aid in AI inference. RAG, the leading technique for addressing this, enhances trained AI models by safely retrieving new insights from external data sources at inference time. Using RAG in the inferencing process can help reduce AI model hallucinations and improve output accuracy, reliability, and richness, reducing the need for costly retraining cycles.
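To make the retrieve-then-generate flow concrete, here is a toy sketch of the RAG pattern in plain Python. It is illustrative only — not WEKA's or NVIDIA's implementation — and uses a bag-of-words similarity where production stacks use learned dense embeddings and a vector database such as Milvus; the document strings and function names are invented for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (term -> count). Production RAG stacks
    # use learned dense embeddings stored in a vector database instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    # Rank external documents by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def augment_prompt(query, documents, k=1):
    # Retrieved passages are prepended as context, so the model answers
    # from fresh external data rather than its training set alone.
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "WARRP is a reference architecture for RAG-based inferencing.",
    "Kuala Lumpur is the capital of Malaysia.",
]
print(augment_prompt("What is WARRP?", docs))
```

The augmented prompt, not the bare question, is what gets sent to the LLM — which is how RAG grounds answers in current external data without retraining the model.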

However, creating robust production-ready inferencing environments that can support RAG frameworks at scale is complex and challenging, as architectures, best practices, tools, and testing strategies are still rapidly evolving.

A Comprehensive Blueprint for Inferencing Acceleration
With WARRP, WEKA has defined an infrastructure-agnostic reference architecture that can be leveraged to build and deploy production-quality, high-performance RAG solutions at scale.

Designed to help organizations quickly build RAG-based AI inferencing pipelines, WARRP provides a comprehensive blueprint of modular components for developing and deploying a world-class AI inference environment optimized for workload portability, distributed global data centers, and multicloud environments.

The WARRP reference architecture builds on WEKA® Data Platform software running on an organization's preferred cloud or server hardware as its foundational layer. It then incorporates class-leading enterprise AI frameworks from NVIDIA — including NVIDIA NIM™ microservices and NVIDIA NeMo™ Retriever, both part of the NVIDIA AI Enterprise software platform — advanced AI workload and GPU orchestration capabilities from Run:ai, and popular commercial and open-source technologies such as Kubernetes for container orchestration and the Milvus vector database for data ingestion.

"As the first wave of generative AI technologies began moving into the enterprise in 2023, most organizations' compute and data infrastructure resources were focused on AI model training. As GenAI models and applications have matured, many enterprises are now preparing to shift these resources to focus on inferencing but may not know where to begin," said Shimon Ben-David, chief technology officer at WEKA. "Running AI inferencing at scale is extremely challenging. We are developing the WEKA AI RAG Reference Platform on leading AI and cloud infrastructure solutions from WEKA, NVIDIA, Run:ai, Kubernetes, Milvus, and others to provide a robust production-ready blueprint that streamlines the process of implementing RAG to improve the accuracy, security and cost of running enterprise AI models."

WARRP delivers a flexible, modular framework that can support a variety of LLM deployments, offering scalability, adaptability, and exceptional performance in production environments. Key benefits include:

  • Build a Production-Ready Inferencing Environment Faster: WARRP's infrastructure- and cloud-agnostic architecture helps GenAI developers and cloud architects streamline GenAI application development and bring inferencing operations to scale faster. It integrates seamlessly with an organization's existing and future AI infrastructure components, large and small language models, and preferred server, hyperscale, or specialty AI cloud providers, giving organizations exceptional flexibility and choice in architecting their AI inference stack.
  • Hardware, Software, and Cloud Agnostic: WARRP's modular design supports most major server and cloud service providers. The architecture enables organizations to easily achieve workload portability without compromising performance by allowing AI practitioners to run the same workload on their preferred hyperscale cloud platform, AI cloud service, or on-premises server hardware with minimal configuration changes. Whether deployed in a public, private, or hybrid cloud environment, AI pipelines demonstrate stable behavior and predictable results, simplifying hybrid and multicloud operations.
  • End-to-End AI Inferencing Stack Optimization: Running RAG pipelines can be highly demanding, especially with large model repositories and complex AI workloads. Organizations can achieve significant performance improvements by integrating the WEKA Data Platform into their AI inferencing stack, particularly in multi-model inference scenarios. The platform's ability to load and unload models efficiently further accelerates token delivery for user prompts, particularly in complex, chained inference workflows involving multiple AI models.
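The multi-model point in the list above follows a familiar pattern: a chained workflow touches several models per prompt, so keeping a bounded set resident and evicting the least recently used one amortizes load cost. The sketch below is a hypothetical illustration of that pattern only — `load_fn`, the capacity, and the stage names are assumptions, not WEKA's actual model-management mechanism.

```python
from collections import OrderedDict

class ModelCache:
    """Keep at most `capacity` models resident; evict the least recently used.

    Illustrative sketch: `load_fn` stands in for whatever loads model
    weights from shared storage into memory.
    """
    def __init__(self, load_fn, capacity=2):
        self.load_fn = load_fn
        self.capacity = capacity
        self.resident = OrderedDict()  # insertion order tracks recency

    def get(self, name):
        if name in self.resident:
            self.resident.move_to_end(name)        # mark as recently used
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # unload the coldest model
            self.resident[name] = self.load_fn(name)
        return self.resident[name]

# A chained workflow calls several models per prompt; the cache bounds
# memory use while repeated stages (here "reranker") hit warm models.
cache = ModelCache(load_fn=lambda name: f"<weights:{name}>", capacity=2)
for step in ["retriever", "reranker", "generator", "reranker"]:
    cache.get(step)
print(list(cache.resident))
```

With capacity 2, the retriever is evicted when the generator loads, so only the two most recently used models stay resident at the end of the chain.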

"As AI adoption accelerates, there is a critical need for simplified ways to deploy production workloads at scale. Meanwhile, RAG-based inferencing is emerging as an important frontier in the AI innovation race, bringing new considerations for an organization's underlying data infrastructure," said Ronen Dar, chief technology officer at Run:ai. "The WARRP reference architecture provides an excellent solution for customers building an inference environment, providing an essential blueprint to help them develop quickly, flexibly and securely using industry-leading components from NVIDIA, WEKA and Run:ai to maximize GPU utilization across private, public and hybrid cloud environments. This combination is a win-win for customers who want to outpace their competition on the cutting edge of AI innovation."

"Enterprises are looking for a simple way to embed their data to build and deploy RAG pipelines," said Amanda Saunders, director of Enterprise Generative AI software, NVIDIA. "Using NVIDIA NIM and NeMo with WEKA will give enterprise customers a fast path to develop, deploy and run high-performance AI inference and RAG operations at scale."

The first release of the WARRP reference architecture is now available for free download. Visit https://www.weka.io/resources/reference-architecture/warrp-weka-ai-rag-reference-platform/ to obtain a copy.

Supercomputing 2024 attendees can visit WEKA in Booth #1931 for more details and a demo of the new solution. 

Supporting AI Cloud Service Provider Quotes

Applied Digital
"As companies increasingly harness advanced AI and GenAI inferencing to empower their customers and employees, they recognize the benefits of leveraging RAG for greater simplicity, functionality and efficiency," said Mike Maniscalco, chief technology officer at Applied Digital. "WEKA's WARRP stack provides a highly useful reference framework to deliver RAG pipelines into a production deployment at scale, supported by powerful NVIDIA technology and reliable, scalable cloud infrastructure."

Ori Cloud
"Leading GenAI companies are running on Ori Cloud to train the world's largest LLMs and achieving maximum GPU utilization thanks to our integration with the WEKA Data Platform," said Mahdi Yahya, founder and chief executive officer at Ori Cloud. "We look forward to working with WEKA to build robust inference solutions using the WARRP architecture to help Ori Cloud customers maximize the benefits of RAG pipelines to accelerate their AI innovation."

Yotta
"To run AI effectively, speed, flexibility, and scalability are required. Yotta's AI solutions, powered by NVIDIA GPUs and built on the WEKA Data Platform, are helping organizations to push the boundaries of what's possible in AI, offering unparalleled performance and flexible scale," said Sunil Gupta, chief executive officer at Yotta. "We look forward to collaborating with WEKA to further enhance our Inference-as-a-Service offerings for natural-language processing, computer vision, and generative AI leveraging the WARRP reference architecture and NVIDIA NIM microservices."

About WEKA 
WEKA is architecting a new approach to the enterprise data stack built for the AI era. The WEKA® Data Platform sets the standard for AI infrastructure with a cloud and AI-native architecture that can be deployed anywhere, providing seamless data portability across on-premises, cloud, and edge environments. It transforms legacy data silos into dynamic data pipelines that accelerate GPUs, AI model training and inference, and other performance-intensive workloads, enabling them to work more efficiently, consume less energy, and reduce associated carbon emissions. WEKA helps the world's most innovative enterprises and research organizations overcome complex data challenges to reach discoveries, insights, and outcomes faster and more sustainably – including 12 of the Fortune 50. Visit www.weka.io to learn more or connect with WEKA on LinkedIn, X, and Facebook.

WEKA and the WEKA logo are registered trademarks of WekaIO, Inc. Other trade names used herein may be trademarks of their respective owners. 

[1] 2024 Global Trends in AI, September 2024, S&P Global Market Intelligence




Malaysia Airlines Strengthens East Asia Network with Return to Fukuoka, Launch of New Routes and Increased Frequencies Across Key Markets

  • Resumes services to Fukuoka, Japan from September 2026 after nearly two decades
  • Expands China footprint with new direct services to Shenzhen and Changsha, commencing July 2026
  • "MAG Arena" showcases Asia's Largest Airline Pavilion at MATTA Fair 2026, featuring the region's premier sports partnership activation by an airline
  • Increases frequencies across ASEAN, South Asia, Australia and New Zealand and Europe to further strengthen global connectivity

SINGAPORE, April 5, 2026 /PRNewswire/ -- Malaysia Airlines is significantly expanding its East Asia footprint with the return of direct flights to Fukuoka, Japan, and the launch of new services to Shenzhen and Changsha, China. Commencing between July and September 2026, these additions bring the airline's China network to nine key gateways and reinforce its commitment to providing greater travel flexibility across the region.

As part of this expansion, the airline will introduce new services between Kuala Lumpur and Shenzhen (SZX) and Changsha (CSX) in China, alongside the resumption of services to Fukuoka (FUK) in Japan, which the airline last operated in September 2006. With the launch of these destinations, Malaysia Airlines consolidates its presence across a total of nine strategic destinations in China, including Beijing (PKX), Shanghai (PVG), Guangzhou (CAN), Xiamen (XMN), Hong Kong (HKG), Taipei (TPE), and Chengdu Tianfu (TFU). Ticket sales for the new services commence today, supporting the growing travel demand and strengthening connectivity between Malaysia and these high-growth regional hubs.

Captain Nasaruddin A. Bakar, President and Group Chief Executive Officer of Malaysia Aviation Group (MAG), said, "This expansion reflects our strategic focus on scaling our presence in key growth markets across East Asia while cementing Kuala Lumpur's position as a key strategic gateway. Both Shenzhen and Changsha align perfectly with our network strategy, driven by robust demand across both business and leisure segments. The return to Fukuoka further enhances our network depth. As the only carrier operating direct flights on this route, we are proud to offer passengers a seamless non-stop experience that eliminates the need for transit. These developments demonstrate our ongoing commitment to optimising our network and delivering a more integrated travel experience for our customers."

Beyond the East Asia expansion, Malaysia Airlines is increasing flight frequencies across key routes, namely Brisbane, Australia; Manila, Philippines; and Colombo, Sri Lanka, to meet rising demand while supporting growing tourism and trade links. In addition, the airline will operate ad-hoc Kuala Lumpur–London flights on 18 and 22 April 2026 to accommodate passengers affected by recent Middle Eastern carrier disruptions.

As the returning Official Airline Partner and Premier Sponsor of MATTA Fair 2026, MAG unveiled its most ambitious presence yet with the launch of the MAG Arena, recognised by both the Asia Records and ASEAN Records as Asia's Largest Airline Trade Pavilion at a consumer travel fair.

Spanning approximately 46,000 square feet, nearly three times the scale of its participation in September last year, the expanded pavilion transforms the MATTA Fair experience into a fully immersive destination showcasing curated experiences and next-generation travel technology that brings journeys to life before travellers even board the aircraft. Visitors will be able to explore destinations, discover travel innovations and experience the warmth of Malaysian Hospitality through interactive engagements designed to inspire their next journey.

In addition, the pavilion will host Asia's largest sports partnership activation by an airline, celebrating Malaysia Airlines' collaborations with global clubs like Manchester United and national sporting icons Datuk Azizulhasni Awang and others. The dedicated sports experience zone will allow football fans and travellers to engage with legends and their favourite sports personalities — reinforcing how sport and travel connect people across borders and generations.

Through the expansion of its network and increased flight frequencies, the airline continues to strengthen Kuala Lumpur's position as a key gateway to Asia and beyond, while supporting Malaysia's tourism ambitions under Visit Malaysia 2026 and advancing its journey towards becoming one of the world's Top 10 global airlines by 2030.

-ENDS-

New Routes

Airline             Route                           Frequency                             Date Open for Sale   Inaugural Flight
Malaysia Airlines   Kuala Lumpur – Shenzhen (SZX)   7x weekly (Mon–Sun)                   3 April 2026         1 July 2026
Malaysia Airlines   Kuala Lumpur – Changsha (CSX)   7x weekly (Mon–Sun)                   3 April 2026         8 July 2026
Malaysia Airlines   Kuala Lumpur – Fukuoka (FUK)    5x weekly (Mon, Wed, Fri, Sat, Sun)   3 April 2026         2 Sept 2026

Malaysia Airlines' Additional Frequencies

Region                      Route                   Frequency (Before → After)             Effective Date
Europe                      KUL/London (LHR)        14x weekly → 16x weekly                18 & 22 Apr 2026
Australia and New Zealand   KUL/Brisbane (BNE) vv   5x weekly → 6x weekly; 7x weekly       16 Aug 2026; 25 Oct 2026
ASEAN                       KUL/Manila (MNL) vv     21x weekly → 28x weekly                1 Jul 2026
South Asia                  KUL/Colombo (CMB) vv    7x weekly → 8x; 9x; 10x weekly         3 Apr; 3 May; 20 May 2026

About Malaysia Aviation Group

Malaysia Aviation Group (MAG) is a global aviation organisation comprising three core business portfolios: Airline Business, Loyalty & Travel Services, and Aviation Services.

The Airline Business portfolio serves global, domestic, and segmented markets through Malaysia Airlines, the national carrier; Firefly, the regional airline focused on connecting communities across Malaysia and ASEAN; and Amal by Malaysia Airlines, the leading one-stop pilgrimage travel solutions centre.

The Aviation Services portfolio offers a full suite of capabilities, comprising MAB Engineering, the maintenance, repair and overhaul services provider; MASkargo, the cargo and logistics solutions provider; AeroDarat Services, the ground handling services provider; MAG Culinary Solutions, overseeing all F&B-related strategies, operations and services across MAG; and MAB Academy, the centre of excellence for aviation and hospitality training.

The Loyalty & Travel Services portfolio delivers end-to-end travel solutions and loyalty programmes, strengthening MAG's core expertise in airline and aviation services. It includes Journify – an integrated digital platform offering travel and lifestyle experiences; Enrich – Malaysia Airlines' award-winning travel and lifestyle loyalty programme; and MHholidays – the Group's dedicated flight and hotel package platform.

With its clear business portfolios, MAG is committed to realising its vision of becoming Asia's Leading Travel and Aviation Services Group by delivering exceptional customer experiences, nurturing a culture that empowers its people, and ensuring sustainable, profitable growth.

For more information, visit www.malaysiaaviationgroup.com.my

Issued by Group Communications, Malaysia Aviation Group.

 




