
OceanBase Announces Further International Expansion with New Global Support Center in Malaysia

Business

2026-04-30 14:23 Last Updated At:14:45

Deepens Global Localization Strategy to Provide Robust and Reliable Database Products for Fintech and Key Industries

KUALA LUMPUR, Malaysia, April 30, 2026 /PRNewswire/ -- OceanBase today announced at its INFINITY 2026 conference in Kuala Lumpur the acceleration of its globalization strategy. As a cornerstone of this strategy, the company will establish a new global support center in Kuala Lumpur, Malaysia, complementing its international headquarters in Singapore. This move is designed to deliver end-to-end, localized services—from solution architects to technical support—to clients across Southeast Asia and around the world.

This strategic expansion marks a new phase in OceanBase's international journey. By establishing a local presence, the company is committed to providing 24/7 global customer service, ensuring robust business continuity.

This expansion builds upon the momentum of the "GO GLOBAL GO Program," a comprehensive initiative launched in October 2025 to drive the international development of OceanBase's products, services, sales, and marketing operations.

Over the past several years, OceanBase has experienced significant international growth, now serving over 4,000 customers worldwide. The company has a particularly strong footprint in the financial technology (fintech) sector, supporting more than 100 fintech customers, including over 20 e-wallets like TNG Digital in Malaysia and GCash in the Philippines, 50 payment services, and 30 innovative fintech firms. Collectively, these customers have over 1.3 billion end-users.

OceanBase has also made significant inroads in the banking and financial services sector, serving more than 400 financial institutions, including banks, insurance companies, and wealth management firms. Notably, over 60% of these institutions have deployed OceanBase for their mission-critical systems.

"Fintech and banking institutions across Southeast Asia are facing a common set of operational pressures: elevated demands on system stability, growing compliance requirements, increasing pressure on cost efficiency and operational simplicity, and new demands as AI moves from pilot projects into production systems. OceanBase is here to support them," said Evan Yang, CEO of OceanBase, at the INFINITY 2026 conference. "We are dedicated to providing a trusted foundation—stable, resilient, scalable, and always ready for whatever the next phase of fintech requires."

To better support global customers, OceanBase is committed to a "multi-cloud native" approach, empowering enterprises to achieve global compliance and flexible deployment. OceanBase ensures a consistent and seamless experience for customers across major cloud platforms, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Alibaba Cloud. Currently, its cloud database service spans over 16 countries and regions, covering more than 60 cloud regions and 200 availability zones, providing a solid foundation for companies to operate globally with a unified architecture.

On the technology front, OceanBase is actively contributing to a vibrant global open-source ecosystem, empowering developers and accelerating technological evolution through open innovation. With over 500 contributors, the company is deeply engaged with major open-source initiatives, including Apache Flink CDC and AWS Glue for data integration, as well as LangChain, LlamaIndex, and Dify in the rapidly evolving AI space.

About OceanBase

OceanBase is a distributed database launched in 2010. It provides strong data consistency, high availability, high performance, cost efficiency, elastic scalability, and compatibility with mainstream relational databases. It handles transactional, analytical, and AI workloads through a unified data engine, enabling mission-critical applications and real-time analytics.

To learn more, please visit: https://www.oceanbase.com/ 

** This press release is distributed by PR Newswire through an automated distribution system, for which the client assumes full responsibility. **

Reduces HBM Costs with GPU–Tenstorrent Heterogeneous Distributed Serving
First unveiled at Tenstorrent's launch event, TT-Deploy, in San Francisco on May 1

SANTA CLARA, Calif., May 2, 2026 /PRNewswire/ -- Moreh, an AI infrastructure software company led by CEO Gangwon Jo, announced that it has successfully validated LLM inference performance on the Tenstorrent Galaxy Wormhole system using its proprietary 'MoAI Inference Framework.'

Based on tests across leading Mixture-of-Experts (MoE) models—including GPT-OSS, Qwen, GLM, and DeepSeek—Moreh achieved LLM inference performance on Tenstorrent Galaxy Wormhole matching or surpassing NVIDIA DGX A100-class systems, demonstrating a compelling alternative to conventional GPU-centric AI infrastructure.

Moreh also improved cost efficiency by implementing a disaggregated serving architecture that combines GPUs with Tenstorrent Wormhole chips. By utilizing Tenstorrent processors as dedicated prefill accelerators, the company reduced reliance on high-cost HBM and lowered overall infrastructure costs.
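The disaggregated pattern described above—routing the compute-bound prefill phase to one class of accelerator and the memory-bound decode phase to another—can be sketched as a toy scheduler with two worker pools. This is a purely illustrative sketch; the class and worker names are hypothetical and are not part of the MoAI Inference Framework.

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    """A single accelerator; kind assigns it to the prefill or decode pool."""
    name: str
    kind: str                       # "prefill" or "decode"
    jobs: list = field(default_factory=list)

class DisaggregatedRouter:
    """Toy scheduler: prefill requests and decode steps go to separate pools."""
    def __init__(self, workers):
        self.pools = {"prefill": [], "decode": []}
        for w in workers:
            self.pools[w.kind].append(w)

    def dispatch(self, request_id: str, phase: str) -> str:
        # Pick the least-loaded worker in the pool for this phase.
        worker = min(self.pools[phase], key=lambda w: len(w.jobs))
        worker.jobs.append((request_id, phase))
        return worker.name

# Hypothetical cluster: dedicated prefill accelerators plus an HBM-backed
# GPU that only runs decode, mirroring the cost split described above.
workers = [
    Worker("prefill-npu-0", "prefill"),
    Worker("prefill-npu-1", "prefill"),
    Worker("decode-gpu-0", "decode"),
]
router = DisaggregatedRouter(workers)

assert router.dispatch("req-1", "prefill") == "prefill-npu-0"
assert router.dispatch("req-2", "prefill") == "prefill-npu-1"
assert router.dispatch("req-1", "decode") == "decode-gpu-0"
```

Because decode workers only ever hold the KV cache and generate tokens, the expensive high-bandwidth memory is concentrated in that pool, while prefill capacity can scale on cheaper accelerators.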

The results were first unveiled at Tenstorrent's launch event, TT-Deploy, held on May 1 in San Francisco.

As a strategic partner of Tenstorrent and a major external contributor to Metalium, Moreh showcased a live LLM inference demo at the event. Building on its experience operating AMD GPU-based production environments in real-world data centers, the company presented its latest technical achievements in 'Production-Ready LLM Inference on Tenstorrent Galaxy.'

MoAI Inference Framework is a disaggregated inference solution that enables unified operation of heterogeneous GPUs and NPUs—including NVIDIA, AMD, and Tenstorrent—within a single cluster. This allows enterprises to build flexible AI infrastructure strategies without vendor lock-in.

Moreh CEO Gangwon Jo stated, "Achieving production-grade LLM inference performance and stability on Tenstorrent-based systems marks a significant milestone," and added, "We will continue to enhance performance through deeper optimization across heterogeneous architectures and closer integration with Tenstorrent NPUs."

Moreh is developing its own core AI infrastructure engine and, through its foundation LLM subsidiary Motif Technologies, is building end-to-end capabilities spanning both infrastructure and model domains. Simultaneously, the company is making its mark in the global market through collaborations with key partners such as AMD, Tenstorrent, and SGLang.

** This press release is distributed by PR Newswire through an automated distribution system, for which the client assumes full responsibility. **

MOREH Demonstrates Production-Ready LLM Inference on Tenstorrent Galaxy, Achieving DGX A100-Class Performance with Improved Cost Efficiency

