
RISC-V Breakthrough: SpacemiT Develops Server CPU Chip V100 for Next-Generation AI Applications

2025-01-09 08:00 (last updated 08:15)

HANGZHOU, China, Jan. 9, 2025 /PRNewswire/ -- SpacemiT, a RISC-V AI CPU company from China, recently announced breakthrough progress in the development of its server CPU chip, SpacemiT Vital Stone® V100. The platform now provides a complete RISC-V CPU hardware and software stack that fully supports server-class specifications.

Key IPs:

The platform integrates the RISC-V CPU core X100; AIA and APLIC for interrupt virtualization; an IOMMU for memory virtualization; IOPMP for security functions; and LPC and eSPI for communication with mainstream BMCs.

  • The 64-bit server-grade RISC-V CPU core X100 delivers single-core performance of more than 9 points/GHz on SPECint2006 at 2.5 GHz in a 12 nm process. X100 supports the RVA23 profile, full virtualization (Hypervisor 1.0, AIA 1.0, IOMMU), RAS features, the Vector 1.0 extension, vector cryptography, security features, 64-core interconnect, and more.
  • The IOMMU IP adheres to the RISC-V IOMMU architecture specification and the AXI4-Stream DTI interface, supporting configurable DID, PID, virtual and physical address widths, and multiple levels of translation cache sizes. It can be flexibly integrated at different locations within the SoC bus fabric to enable distributed virtualization of peripherals and accelerators.
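To make the IOMMU's role concrete, the sketch below models DMA address translation with a small translation cache (an IOTLB) keyed by device ID, process ID, and virtual page. All names, sizes, and the eviction policy are illustrative assumptions; the real RISC-V IOMMU specification defines multi-level page tables, device and process contexts, and fault reporting in far more detail.

```python
# Hypothetical, highly simplified model of IOMMU-style DMA translation.
# Not based on SpacemiT's implementation; for illustration only.

PAGE_SHIFT = 12  # 4 KiB pages

class SimpleIOMMU:
    def __init__(self, cache_capacity=64):
        self.page_tables = {}   # (did, pid) -> {virtual page -> physical page}
        self.iotlb = {}         # (did, pid, vpage) -> physical page
        self.cache_capacity = cache_capacity
        self.hits = 0
        self.misses = 0

    def map_page(self, did, pid, vpage, ppage):
        self.page_tables.setdefault((did, pid), {})[vpage] = ppage

    def translate(self, did, pid, iova):
        vpage = iova >> PAGE_SHIFT
        offset = iova & ((1 << PAGE_SHIFT) - 1)
        key = (did, pid, vpage)
        if key in self.iotlb:               # IOTLB hit: skip the table walk
            self.hits += 1
            return (self.iotlb[key] << PAGE_SHIFT) | offset
        self.misses += 1                    # miss: walk the per-device table
        table = self.page_tables.get((did, pid))
        if table is None or vpage not in table:
            raise PermissionError(f"DMA fault: no mapping for device {did}")
        ppage = table[vpage]
        if len(self.iotlb) >= self.cache_capacity:  # crude FIFO-ish eviction
            self.iotlb.pop(next(iter(self.iotlb)))
        self.iotlb[key] = ppage
        return (ppage << PAGE_SHIFT) | offset

iommu = SimpleIOMMU()
iommu.map_page(did=3, pid=0, vpage=0x10, ppage=0x9A)
addr1 = iommu.translate(3, 0, (0x10 << PAGE_SHIFT) | 0x24)  # table walk, then cached
addr2 = iommu.translate(3, 0, (0x10 << PAGE_SHIFT) | 0x80)  # served from the IOTLB
```

The per-(DID, PID) tables correspond loosely to the configurable device and process contexts the press release mentions; the cache capacity stands in for the configurable translation cache sizes.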

Key Subsystems:

The platform comprises the CPU, bus, IOMMU, interrupt, debug & trace, clock & reset, and RMU management and control subsystems, completing the development of the server CPU chip platform.

Software R&D Progress:

Based on the self-developed server CPU chip platform, SpacemiT has completed server platform firmware that complies with the RISC-V BRS specification. This includes OpenSBI, UEFI (BIOS), Linux, and other low-level software meeting the requirements of the Supervisor Binary Interface (SBI), UEFI, SMBIOS, ACPI, and related specifications. The Linux operating system has been adapted and ported, and the platform supports the GlobalPlatform-compliant OP-TEE secure operating system. The platform firmware and operating system now run and have been demonstrated on an FPGA prototype of the server CPU chip platform.

About SpacemiT:

SpacemiT is a computing ecosystem enterprise based on the new-generation RISC-V architecture, with a layout covering full-stack computing technologies such as high-performance RISC-V CPU cores, AI-CPU cores, AI CPU chips, and software systems. It provides end-to-end computing system solutions and is committed to building the best native computing platform for the new AI era of large models using RISC-V AI CPUs, thereby promoting the development of new applications such as AI computers and AI robots. Please visit https://www.spacemit.com/en/ for more information.

Business Contact
business@spacemit.com 

Media Contact
media@spacemit.com 

** The press release content is from PR Newswire. Bastille Post is not involved in its creation. **


Reduces HBM Costs with GPU–Tenstorrent Heterogeneous Distributed Serving
First unveiled at Tenstorrent's launch event, TT-Deploy, in San Francisco on May 1

SANTA CLARA, Calif., May 2, 2026 /PRNewswire/ -- Moreh, an AI infrastructure software company led by CEO Gangwon Jo, announced that it has successfully validated LLM inference performance on the Tenstorrent Galaxy Wormhole system using its proprietary MoAI Inference Framework.

Based on tests across leading Mixture-of-Experts (MoE) models—including GPT-OSS, Qwen, GLM, and DeepSeek—Moreh achieved LLM inference performance on Tenstorrent Galaxy Wormhole matching or surpassing NVIDIA DGX A100-class systems, demonstrating a compelling alternative to conventional GPU-centric AI infrastructure.

Moreh also improved cost efficiency by implementing a disaggregated serving architecture that combines GPUs with Tenstorrent Wormhole chips. By utilizing Tenstorrent processors as dedicated prefill accelerators, the company reduced reliance on high-cost HBM and lowered overall infrastructure costs.
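The idea behind disaggregated serving can be sketched as two phases with a KV-cache handover: the compute-bound prefill pass runs on one accelerator pool, and the bandwidth-bound decode loop runs on another. The function names and stand-in math below are hypothetical; Moreh's actual framework internals are not public.

```python
# Illustrative sketch of disaggregated (prefill/decode-split) LLM serving.
# Real systems run attention on accelerators and ship the KV cache over the
# network; here both phases are trivial stand-ins to show the control flow.

def prefill(prompt_tokens):
    """Compute-bound pass over the whole prompt; returns a KV cache."""
    return [("kv", t) for t in prompt_tokens]  # stand-in for attention state

def decode_step(kv_cache, last_token):
    """Bandwidth-bound step that extends the KV cache by one token."""
    next_token = (last_token + 1) % 50000      # stand-in for real sampling
    kv_cache.append(("kv", next_token))
    return next_token

def serve(prompt_tokens, max_new_tokens):
    # Phase 1: prefill on the dedicated pool (e.g. Tenstorrent chips), which
    # needs compute throughput more than large high-cost HBM capacity.
    kv_cache = prefill(prompt_tokens)
    # Phase 2: hand the KV cache to the GPU pool and decode token by token.
    generated = []
    token = prompt_tokens[-1]
    for _ in range(max_new_tokens):
        token = decode_step(kv_cache, token)
        generated.append(token)
    return generated

out = serve([101, 102, 103], max_new_tokens=4)
```

Splitting the phases this way lets each pool be sized for its bottleneck, which is the cost lever the release describes: prefill accelerators need less HBM than a uniform GPU fleet provisioned for both phases.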

The results were first unveiled at Tenstorrent's launch event, TT-Deploy, held on May 1 in San Francisco.

As a strategic partner of Tenstorrent and a major external contributor to Metalium, Moreh showcased a live LLM inference demo at the event. Building on its experience operating AMD GPU-based production environments in real-world data centers, the company presented its latest technical achievements in 'Production-Ready LLM Inference on Tenstorrent Galaxy.'

MoAI Inference Framework is a disaggregated inference solution that enables unified operation of heterogeneous GPUs and NPUs—including NVIDIA, AMD, and Tenstorrent—within a single cluster. This allows enterprises to build flexible AI infrastructure strategies without vendor lock-in.
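A vendor-neutral framework of this kind typically hides each vendor's runtime behind one interface so model code never references a specific device stack. The sketch below shows that pattern in miniature; the interface and backend classes are hypothetical and do not reflect MoAI's actual API.

```python
# Minimal sketch of a vendor-neutral backend registry for heterogeneous
# clusters. The vendor names are real; the interface is invented here.

from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def run(self, op, *args):
        """Execute one operation on this vendor's runtime."""

class CudaBackend(Backend):          # NVIDIA GPUs
    def run(self, op, *args):
        return f"cuda:{op}"

class RocmBackend(Backend):          # AMD GPUs
    def run(self, op, *args):
        return f"rocm:{op}"

class TenstorrentBackend(Backend):   # Tenstorrent NPUs
    def run(self, op, *args):
        return f"tt:{op}"

BACKENDS = {
    "nvidia": CudaBackend(),
    "amd": RocmBackend(),
    "tenstorrent": TenstorrentBackend(),
}

def dispatch(vendor, op, *args):
    # The scheduler picks a device by role (e.g. prefill vs decode) while
    # the model code above this line stays vendor-agnostic.
    return BACKENDS[vendor].run(op, *args)

r = dispatch("tenstorrent", "matmul")
```

Because callers depend only on the `Backend` interface, adding or swapping a vendor is a registry change rather than a rewrite, which is the "no vendor lock-in" property the release claims.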

Moreh CEO Gangwon Jo stated, "Achieving production-grade LLM inference performance and stability on Tenstorrent-based systems marks a significant milestone," and added, "We will continue to enhance performance through deeper optimization across heterogeneous architectures and closer integration with Tenstorrent NPUs."

Moreh is developing its own core AI infrastructure engine and, through its foundation LLM subsidiary Motif Technologies, is building end-to-end capabilities spanning both infrastructure and model domains. Simultaneously, the company is making its mark in the global market through collaborations with key partners such as AMD, Tenstorrent, and SGLang.

** This press release is distributed by PR Newswire through automated distribution system, for which the client assumes full responsibility. **

MOREH Demonstrates Production-Ready LLM Inference on Tenstorrent Galaxy, Achieving DGX A100-Class Performance with Improved Cost Efficiency