Cisco Sets Benchmark with Industry's Most Scalable, Efficient 51.2T Routing Systems for Distributed AI Workloads

Business

2025-10-08 21:00 Last Updated At: 21:15

Powered by the introduction of the new Silicon One P200 chip, Cisco's groundbreaking 8223 routing systems redefine secure, efficient AI networking, enabling seamless 'scale-across' architectures that connect AI clusters across multiple data centers.

News Summary:

  • Cisco's new AI networking systems redefine what's possible with unprecedented scalability, power efficiency, and programmability built to directly address the critical challenges of connecting multiple data centers to securely run AI workloads.
  • The 8223 routing systems deliver industry-leading capacity and efficiency in a single ASIC router, and are now shipping to initial hyperscalers for secure, scalable AI infrastructure.
  • Cisco Silicon One is powering the next generation of AI networking with the new P200 chip, delivering deep-buffer routing silicon and enabling interconnect bandwidth to scale beyond 3 Exabits per second.

SAN JOSE, Calif., Oct. 8, 2025 /PRNewswire/ -- Today, Cisco (NASDAQ: CSCO) unveiled the Cisco 8223, the industry's most optimized routing system for efficiently and securely connecting data centers and powering the next generation of artificial intelligence (AI) workloads. As AI adoption accelerates, data centers face soaring demand, rising power constraints, and evolving security threats. The Cisco 8223 rises to the challenge as the only 51.2 terabits per second (Tbps) Ethernet fixed router built for the intense traffic of AI workloads between data centers.  Cisco today also announced its latest Silicon One innovation – the P200 chip – which sits at the core of the 8223. Together, these innovations empower organizations to shatter bottlenecks and future-proof their infrastructure for the AI era.

"AI compute is outgrowing the capacity of even the largest data center, driving the need for reliable, secure connection of data centers hundreds of miles apart," said Martin Lund, EVP, Cisco's Common Hardware Group. "With the Cisco 8223, powered by the new Cisco Silicon One P200, we're delivering the massive bandwidth, scale and security needed for distributed data center architectures."

In many of the world's data centers, AI workloads are stretching power and space limitations. Hyperscalers can't scale-up (add more capacity into each individual system) or scale-out (connect multiple systems within a data center) any further. This dynamic puts increasing demand on data center interconnects, as the industry must "scale-across" by distributing AI workloads across multiple data centers. Without addressing the connection points between data centers, organizations could face performance challenges, capacity bottlenecks and suboptimal processing that wastes time, power and money. The Cisco 8223 system gives organizations the flexibility and programmability necessary to build these networks, with deep buffering to provide the cross-site security and reliability necessary for crucial workloads.
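To make the case for deep buffering concrete, the back-of-envelope sketch below estimates how much data is "in flight" on a long-haul interconnect using the bandwidth-delay product. The distance, fiber propagation speed, and per-port rate are illustrative assumptions loosely drawn from figures elsewhere in this release, not Cisco measurements.

```python
# Back-of-envelope estimate of why deep buffers matter for long-haul
# data center interconnect (DCI). The figures below (distance, fiber
# propagation speed, per-port rate) are illustrative assumptions,
# not Cisco specifications.

DISTANCE_KM = 1_000        # metro/DCI reach cited for coherent optics
FIBER_SPEED_M_S = 2.0e8    # roughly 2/3 of c in optical fiber (assumption)
PORT_RATE_BPS = 800e9      # one 800G port

one_way_delay_s = (DISTANCE_KM * 1_000) / FIBER_SPEED_M_S
rtt_s = 2 * one_way_delay_s

# The bandwidth-delay product approximates how much data is in flight
# on the link, a common yardstick for the buffering needed to absorb
# bursts or retransmissions without dropping packets.
bdp_bits = PORT_RATE_BPS * rtt_s

print(f"One-way delay: {one_way_delay_s * 1e3:.1f} ms, RTT: {rtt_s * 1e3:.1f} ms")
print(f"Bandwidth-delay product per 800G port: {bdp_bits / 8 / 1e9:.1f} GB")
```

At this rough scale, a single 800G port can have on the order of a gigabyte in flight over a 1,000 km round trip, which is the kind of figure that motivates deep buffering for cross-site traffic.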

Power Efficient. Scalable. Programmable. Secure.
Building a future-ready backbone AI network today is essential – one that can meet power consumption challenges while remaining scalable, flexible and secure. The Cisco 8223 provides customers the capacity to handle surging workloads, the flexibility of full programmability, and the power efficiency to directly address power consumption challenges.

The 8223 is uniquely:

  • Power Efficient: The 8223 is the most power efficient routing system for scale-across networking. It is a deep-buffer routing solution optimized for fixed deployments that offers switch-like power efficiency, directly addressing the high energy demands of AI workloads. As a 3RU system, it is the most space efficient system of its kind. As AI clusters continue to 'scale-across,' power and space efficiency will only grow in importance.
  • Scalable without Compromise: The 8223 offers industry-leading bandwidth and the highest density of any fixed routing system in the industry. The only fixed routing system featuring 64 ports of 800G, the 8223 offers unmatched routing performance, processing over 20 billion packets per second and scaling up to 3 Exabits per second. It also features 800G coherent optics support, enabling data center interconnect and metro applications reaching up to 1,000 km. With the P200's deep buffering capabilities, the new routing systems can absorb massive traffic surges from AI training, maintaining performance and preventing network slowdowns. (A short arithmetic sketch after this list checks how these headline figures fit together.)
  • Intelligent and Adaptable: The 8223 can intelligently adapt to real-time network conditions. With the smart and programmable P200 silicon, it can support new, emerging network protocols and standards without requiring costly hardware upgrades. Networks can remain agile while preventing performance bottlenecks and accelerating the adoption of new features as AI traffic continues to evolve.
  • Secure: The 8223 offers protection at all levels – across hardware, software and entire networks. With features like line-rate encryption using post-quantum resilient algorithms, integrated security safeguards and continuous monitoring tools, the 8223 can safeguard against emerging threats. Seamless integration into Cisco's observability platforms gives customers granular insights into network performance to help identify and resolve issues quickly, ensuring AI data traffic is secure and reliable.
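The headline figures in the list above can be sanity-checked with simple arithmetic. The sketch below multiplies out the port count and per-port rate, then derives the average packet size at which the stated packet rate would still sustain line rate; that last figure is an illustrative derivation, not a Cisco-published number.

```python
# Sanity check of the headline figures: 64 ports of 800G and a stated
# floor of 20 billion packets per second. The packet-size math is an
# illustrative derivation, not a Cisco-published calculation.

PORTS = 64
PORT_RATE_GBPS = 800
PACKETS_PER_SECOND = 20e9

aggregate_tbps = PORTS * PORT_RATE_GBPS / 1_000
print(f"Aggregate throughput: {aggregate_tbps:.1f} Tbps")  # 51.2 Tbps

# Smallest average packet size at which 20 Gpps still saturates 51.2 Tbps:
avg_packet_bytes = (aggregate_tbps * 1e12) / PACKETS_PER_SECOND / 8
print(f"Line rate holds down to ~{avg_packet_bytes:.0f}-byte average packets")
```

In other words, the stated packet rate is consistent with holding full line rate for average packet sizes down to roughly a few hundred bytes.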

Flexibility – An AI 'Must Have'
The demands of AI networking aren't simply growing; they are also constantly changing. Organizations need infrastructure that is agile enough to change as business requirements do, and they are seeking flexibility in deployment models so they can build networks for their exact needs. The Cisco 8223 will initially be available for open-source SONiC deployments, with IOS XR support on the horizon.

In addition to its availability in the fixed 8223 system, the P200 silicon itself will be deployable in modular platforms and disaggregated chassis, offering customers architectural consistency for any size network. The Cisco Nexus portfolio will also support systems running NX-OS based on the P200 in the near future. Cisco is delivering the agility and operational flexibility required for today's AI infrastructure.

Cisco Silicon One: The Industry's Most Scalable and Programmable Unified Networking Architecture
Cisco Silicon One is a complete portfolio of networking devices across AI, hyperscaler, data center, enterprise and service provider use cases. Introduced in 2019, Cisco Silicon One is playing critical roles in major networks around the world.

Industry Reaction
"The increasing scale of the cloud and AI requires faster networks with more buffering to absorb bursts.  We're pleased to see the P200 providing innovation and more options in this space.  Microsoft was an early adopter of Silicon One, and the common ASIC architecture has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments." - Dave Maltz, Technical Fellow and Corporate Vice President, Azure Networking, Microsoft

"As Alibaba continues to invest in and expand the cloud infrastructure, DCI is a critical pillar of our strategy. We are pleased to see the launch of Cisco Silicon One P200, the industry's first 51.2T routing ASIC that delivers high bandwidth, lower power consumption, and full P4 programmability. This breakthrough chip aligns perfectly with the evolution of Alibaba's eCore architecture. We plan to leverage the P200 to build a single chip platform, serving as a foundational building block for expanding our eCore deployment. Beyond supporting our Cisco Silicon One Q200 deployment scenarios, this new routing chip will enable us to extend into the Core network, replacing traditional chassis-based routers with a cluster of P200-powered devices. This transition will significantly enhance the stability, reliability, and scalability of our DCI network while keeping the simplicity. In addition, we are developing and exploring innovative disaggregated architectures using Cisco G200 for our high-performance datacenter network. The introduction of this advanced routing chip marks a pivotal step forward, empowering Alibaba to accelerate innovation and drive infrastructure expansion in the AI era." - Dennis Cai, Vice President, Head of Network Infrastructure, Alibaba Cloud

"As a long-standing Cisco customer, Lumen is actively advancing our network infrastructure to support the AI-driven economy. Cisco's 8000 Series, Cisco Silicon One and Cisco's pluggable optic technology represent key innovations that align with our goals for scalable, efficient multi-cloud connectivity. As we look to the future, we are exploring how the new Cisco 8223 technology may fit into our plans to enhance network performance and roll out superior services to our customers." - Dave Ward, Chief Technology Officer and Product Officer, Lumen

"As AI workloads rapidly outpace the capabilities of traditional data centers, the industry faces new challenges in bandwidth, reliability, and scale. The migration of data centers to remote locations for power access makes ultra-reliable, high-bandwidth interconnects essential. Cisco's 8223, powered by Silicon One P200, marks a significant step forward, delivering the industry's first 51.2-terabit fixed Ethernet router purpose-built for secure, power efficient scale-across networking." - Patrick Moorhead, CEO and Chief Analyst for Moor Insights & Strategy

About Cisco
Cisco (NASDAQ: CSCO) is the worldwide technology leader that is revolutionizing the way organizations connect and protect in the AI era. For more than 40 years, Cisco has securely connected the world. With its industry leading AI-powered solutions and services, Cisco enables its customers, partners and communities to unlock innovation, enhance productivity and strengthen digital resilience. With purpose at its core, Cisco remains committed to creating a more connected and inclusive future for all. Discover more on The Newsroom and follow us on X at @Cisco.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word 'partner' does not imply a partnership relationship between Cisco and any other company.

Disclaimer: Many of the products and features mentioned are still in development and will be made available as they are finalized, subject to ongoing evolution in development and innovation. The timeline for their release is subject to change.

 

** The press release content is from PR Newswire. Bastille Post is not involved in its creation. **

AI Infrastructure Company EverMind Released EverMemOS, Responding to Profound Challenges in AI

SAN MATEO, Calif., Dec. 13, 2025 /PRNewswire/ -- AI infrastructure company EverMind has recently released EverMemOS, an open-source Memory Operating System designed to address one of artificial intelligence's most profound challenges: equipping machines with scalable, long-term memory.

The Memory Bottleneck

For years, large language models (LLMs) have been constrained by fixed context windows, a limitation that causes "forgetfulness" in long-term tasks. This results in broken context, factual inconsistencies, and an inability to deliver deep personalization or maintain knowledge coherence. The issue extends beyond technical hurdles; it represents an evolutionary bottleneck for AI. An entity without memory cannot exhibit behavioral consistency or initiative, let alone achieve self-evolution. Personalization, consistency, and proactivity, which are considered the hallmarks of intelligence, all depend on a robust memory system.

There is a consensus that memory is becoming the core competitive edge and defining boundary of future AI. Yet existing solutions, such as Retrieval-Augmented Generation (RAG) and fragmented memory systems, remain limited in scope, failing to support both 1-on-1 companion use cases and complex multi-agent enterprise collaboration. Few meet the standard of precision, speed, usability, and adaptability required for widespread adoption. Equipping large models with a high-performance, pluggable memory module remains a core unmet demand across AI applications.
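As a rough illustration of the gap described above, the toy sketch below contrasts a bounded context window, which silently drops older turns, with a pluggable memory layer that can still recall them. It is purely illustrative, with hypothetical class names; it is not EverMemOS code or its API.

```python
# Toy contrast between a bounded context window and a pluggable memory
# layer. Hypothetical names and logic for illustration only; this is
# not EverMemOS code or its API.

from collections import deque

class FixedContext:
    """Keeps only the most recent turns, like a bounded context window."""
    def __init__(self, max_turns: int):
        self.turns = deque(maxlen=max_turns)

    def add(self, text: str) -> None:
        self.turns.append(text)

    def recall(self, query: str) -> list[str]:
        return [t for t in self.turns if query.lower() in t.lower()]

class PersistentMemory(FixedContext):
    """Adds an unbounded archive that is searched outside the live window."""
    def __init__(self, max_turns: int):
        super().__init__(max_turns)
        self.archive: list[str] = []

    def add(self, text: str) -> None:
        super().add(text)
        self.archive.append(text)

    def recall(self, query: str) -> list[str]:
        return [t for t in self.archive if query.lower() in t.lower()]

window = FixedContext(max_turns=2)
memory = PersistentMemory(max_turns=2)
for turn in ["My name is Ada.", "I prefer metric units.", "Book a flight."]:
    window.add(turn)
    memory.add(turn)

print(window.recall("name"))  # [] -- the fact has scrolled out of the window
print(memory.recall("name"))  # ['My name is Ada.'] -- recovered from memory
```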

Discoverative Intelligence

"Discoverative Intelligence" is a concept proposed in late 2025 by entrepreneur and philanthropist Chen Tianqiao. Unlike generative AI, which mimics human output by processing existing data, Discoverative Intelligence describes an advanced AI form that actively asks questions, forms testable hypotheses, and discovers new scientific principles. It prioritizes understanding causality and underlying principles over statistical patterns, a shift Chen argues is essential to achieving Artificial General Intelligence (AGI).

Chen contrasted two dominant AI development paths: the "Scaling Path," which relies on expanding parameters, data, and compute power to extrapolate within a search space, and the "Structural Path," which focuses on the "cognitive anatomy" of intelligence and how systems operate over time.

Discoverative Intelligence falls into the latter category, built on a brain-inspired model called Structured Temporal Intelligence (STI) that requires five core capabilities in a closed loop: neural dynamics (sustained, self-organizing activity to keep systems "alive"), long-term memory (storing and selectively forgetting experiences to build knowledge), causal reasoning (inferring "why" events occur), world modeling (an internal simulation of reality for prediction), and metacognition & intrinsic motivation (curiosity-driven exploration, not just external rewards).

Among these capabilities, long-term memory serves as the vital link between time and intelligence, highlighting its indispensable role in the path toward achieving true AGI.

EverMind's Answer

EverMemOS is EverMind's answer to this need: an open-source Memory Operating System designed as foundational technology for Discoverative Intelligence. Inspired by the hierarchical organization of the human memory system, EverMemOS features a four-layer architecture analogous to key brain regions: an Agentic Layer (task planning, mirroring the prefrontal cortex), a Memory Layer (long-term storage, like cortical networks), an Index Layer (associative retrieval, drawing from the hippocampus), and an API/MCP Interface Layer (external integration, serving as AI's "sensory interface").
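A schematic rendering of that four-layer split might look like the sketch below. All class and method names are hypothetical placeholders chosen for illustration and do not reflect EverMemOS's actual interfaces.

```python
# Illustrative-only sketch of the four-layer split described above.
# Class and method names are hypothetical placeholders, not the actual
# EverMemOS interfaces.

class MemoryLayer:
    """Long-term storage (analogous to cortical networks)."""
    def __init__(self) -> None:
        self.records: list[str] = []

    def store(self, item: str) -> None:
        self.records.append(item)

class IndexLayer:
    """Associative retrieval over stored memories (hippocampus analogue)."""
    def __init__(self, memory: MemoryLayer) -> None:
        self.memory = memory

    def search(self, query: str) -> list[str]:
        return [r for r in self.memory.records if query.lower() in r.lower()]

class AgenticLayer:
    """Task planning that decides when to recall (prefrontal-cortex analogue)."""
    def __init__(self, index: IndexLayer) -> None:
        self.index = index

    def answer(self, question: str) -> str:
        hits = self.index.search(question)
        return hits[0] if hits else "no relevant memory"

class APILayer:
    """External integration surface (the system's 'sensory interface')."""
    def __init__(self, agent: AgenticLayer, memory: MemoryLayer) -> None:
        self.agent, self.memory = agent, memory

    def ingest(self, item: str) -> None:
        self.memory.store(item)

    def query(self, question: str) -> str:
        return self.agent.answer(question)

memory = MemoryLayer()
api = APILayer(AgenticLayer(IndexLayer(memory)), memory)
api.ingest("User prefers summaries under 100 words.")
print(api.query("prefers"))  # -> 'User prefers summaries under 100 words.'
```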

The system delivers breakthroughs in both scenario coverage and technical performance. It is the first memory system capable of supporting both 1-on-1 conversation use cases and complex multi-agent enterprise collaboration. On technical benchmarks, EverMemOS achieved 92.3% accuracy on LoCoMo (a long-context memory evaluation) and 82% on LongMemEval-S (a suite for assessing long-term memory retention), significantly surpassing prior state-of-the-art results and setting a new industry standard.

The open-source version of EverMemOS is now available on GitHub, with a cloud service version to be launched late this year. The dual-track model, combining open collaboration with managed cloud services, aims to drive industry-wide evolution in long-term memory technology, inviting developers, enterprises, and researchers to contribute to and benefit from the system.

About EverMind

EverMind is redefining the future of AI by solving one of its most fundamental limitations: long-term memory. Its flagship platform, EverMemOS, introduces a breakthrough architecture for scalable and customizable memory systems, enabling AI to operate with extended context, maintain behavioral consistency, and improve through continuous interaction.

To learn more about EverMind and EverMemOS, please visit:

Website: https://evermind.ai/
GitHub: https://github.com/EverMind-AI/EverMemOS
X: https://x.com/EverMindAI
Reddit: https://www.reddit.com/r/EverMindAI/ 

** The press release content is from PR Newswire. Bastille Post is not involved in its creation. **
