SINGAPORE--(BUSINESS WIRE)--May 8, 2026--
At AI EXPO KOREA 2026, KAYTUS officially launched its All-QLC Flash Storage Solution, engineered to deliver high performance, massive scalability, and cost efficiency for 10,000-GPU clusters. The solution addresses data-delivery bottlenecks in ultra-large-scale AI training, helping maximize GPU resource utilization.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260508313130/en/
Based on the KR2280 and KR1180 server platforms, the solution is deeply integrated with industry-leading AI-native parallel file systems to eliminate the data silos inherent in traditional tiered storage. Purpose-built for read-intensive AI workloads, it overcomes the horizontal scaling limitations of massive clusters. Verified test data shows that, at exabyte-scale deployment, the solution delivers 10 TB/s aggregate bandwidth and 100 million IOPS. In addition, it reduces five-year TCO by 70% compared with traditional TLC-based solutions, accelerating model innovation for AI cloud providers and intelligent computing centers.
Limitations in Traditional AI Storage Architectures
The explosive growth of AI is fundamentally transforming enterprise computing and storage requirements. Large-scale AI model training features highly read-intensive workloads that require tens of thousands of GPUs to concurrently access exabyte-scale datasets with sub-millisecond latency. Traditional storage architectures now face three major challenges in performance, scalability, and cost.
KAYTUS Solution: All-QLC Flash Storage for Delivering High Performance, Scalability, and Cost Efficiency
The next-generation KAYTUS All-QLC Flash Storage Server Solution is purpose-built to unlock the full potential of read-intensive AI training workloads. By tightly integrating flagship compute nodes with industry-leading AI-native parallel file systems, the solution harnesses advanced hardware–software co-design to deliver breakthrough performance, seamless scalability, and superior cost efficiency for ultra-large-scale AI computing environments.
Architectural Innovation: Overcoming AI Training Efficiency Bottlenecks
The KAYTUS solution establishes a unified namespace with native multi-protocol access across file, object, and block storage. By leveraging high-capacity QLC flash pools and NVMe-oF fully shared interconnects, it redefines the unified data plane for AI storage, effectively eliminating the data silos inherent in traditional tiered architectures. Data can now flow on demand to GPU nodes without cross-system migration, enabling sub-millisecond access and significantly improving AI training data retrieval efficiency.
10,000-GPU Cluster Benchmarks: Exceptional Performance, Scalability, and Cost Efficiency
In benchmark testing in an exabyte-scale storage environment for a 10,000-GPU data center, the solution—powered by KR2280 and KR1180 nodes and optimized with industry-leading AI-native parallel file systems—demonstrated its capability to scale seamlessly to support computing clusters of up to 10,000 GPUs.
KAYTUS All-Flash Portfolio: From High Density to Massive Capacity
KAYTUS offers a comprehensive QLC product portfolio supporting single-drive capacities of up to 122.88 TB.
About KAYTUS
KAYTUS is a leading provider of AI infrastructure and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at KAYTUS.com and follow us on LinkedIn and X.
