‘The fastest AI chip in the world’: Gigantic AI processor has almost one million cores – Cerebras has Nvidia firmly in its sights as it unveils the WSE-3, a chip that can train AI models with 24 trillion parameters

Cerebras Systems has unveiled its Wafer Scale Engine 3 (WSE-3), dubbed the “fastest AI chip in the world.” 

The WSE-3, which powers the Cerebras CS-3 AI supercomputer, reportedly offers twice the performance of its predecessor, the WSE-2, at the same power consumption and price.

The chip is capable of training AI models with up to 24 trillion parameters, a significant leap from previous models.

CS-3 supercomputer

The WSE-3 is built on a 5nm TSMC process and features 44GB of on-chip SRAM. It boasts four trillion transistors and 900,000 AI-optimized compute cores, delivering a peak AI performance of 125 petaflops – the theoretical equivalent of about 62 Nvidia H100 GPUs.
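For readers who want to see where that GPU comparison comes from, here is a rough back-of-the-envelope sketch. The 125 petaflop figure is from the article; the roughly 2-petaflop-per-H100 figure is an assumed FP16 Tensor Core peak (with sparsity), not a baseline Cerebras states here, so the result is only indicative.

```python
# Back-of-the-envelope check on the "about 62 H100s" claim (a sketch, not a benchmark).
wse3_peak_pflops = 125    # peak AI performance quoted for the WSE-3
h100_peak_pflops = 2.0    # assumed per-H100 FP16 Tensor Core peak with sparsity (~1.98 PFLOPS, rounded)

h100_equivalents = wse3_peak_pflops / h100_peak_pflops
print(f"~{h100_equivalents:.1f} H100-class GPUs")  # ~62.5, in line with the article's "about 62"
```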

The CS-3 supercomputer, powered by the WSE-3, is designed to train next-generation AI models that are 10 times larger than GPT-4 and Gemini. With a memory system of up to 1.2 petabytes, it can reportedly store 24 trillion parameter models in a single logical memory space, simplifying training workflows and boosting developer productivity.

Cerebras says its CS-3 supercomputer is optimized for both enterprise and hyperscale needs, offering superior power efficiency and software simplicity: training large language models (LLMs) on it reportedly requires 97% less code than on GPU-based systems.

Cerebras CEO and co-founder Andrew Feldman said, “WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from mixture of experts to 24 trillion parameter models. We are thrilled to bring WSE-3 and CS-3 to market to help solve today’s biggest AI challenges.”

The company says it already has a backlog of orders for the CS-3 across enterprise, government, and international clouds. The CS-3 will also play a significant role in the strategic partnership between Cerebras and G42, which has already delivered 8 exaFLOPs of AI supercomputer performance via Condor Galaxy 1 and 2. A third installation, Condor Galaxy 3, is currently under construction and will be built with 64 CS-3 systems, producing 8 exaFLOPs of AI compute.
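As a quick sanity check, the 8 exaFLOPs figure for Condor Galaxy 3 is consistent with the per-chip numbers quoted above; the sketch below uses only figures from this article.

```python
# Condor Galaxy 3: 64 CS-3 systems at the WSE-3's quoted 125 petaflops each.
cs3_systems = 64
pflops_per_cs3 = 125

total_exaflops = cs3_systems * pflops_per_cs3 / 1000  # 1 exaflop = 1,000 petaflops
print(total_exaflops)  # 8.0, matching the stated 8 exaFLOPs of AI compute
```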
