‘The fastest AI chip in the world’: Gigantic AI processor has almost one million cores — Cerebras has Nvidia firmly in its sights as it unveils the WSE-3, a chip that can train AI models with 24 trillion parameters

Cerebras Systems has unveiled its Wafer Scale Engine 3 (WSE-3), dubbed the “fastest AI chip in the world.” 

The WSE-3, which powers the Cerebras CS-3 AI supercomputer, reportedly offers twice the performance of its predecessor, the WSE-2, at the same power consumption and price.

The chip is capable of training AI models with up to 24 trillion parameters, a significant leap from previous models.

CS-3 supercomputer

The WSE-3 is built on a 5nm TSMC process and features 44GB of on-chip SRAM. It boasts four trillion transistors and 900,000 AI-optimized compute cores, delivering peak AI performance of 125 petaflops, the theoretical equivalent of about 62 Nvidia H100 GPUs.
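As a rough sanity check, the 62-GPU figure is simply a ratio of peak throughput numbers. The per-H100 peak used below is our own assumption (a round 2 petaflops, in line with FP8/sparse FP16-class figures), not a number Cerebras quotes:

```python
# Back-of-the-envelope check of the "about 62 H100s" comparison.
# Assumption (not from the article): one H100 is counted at roughly
# 2 petaflops of peak AI throughput.
wse3_peak_pflops = 125          # Cerebras-quoted peak AI performance
assumed_h100_peak_pflops = 2.0  # assumed per-GPU peak; varies by precision

h100_equivalent = wse3_peak_pflops / assumed_h100_peak_pflops
print(f"~{h100_equivalent:.0f} H100-class GPUs")  # ~62
```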

The CS-3 supercomputer, powered by the WSE-3, is designed to train next-generation AI models that are 10 times larger than GPT-4 and Gemini. With a memory system of up to 1.2 petabytes, it can reportedly store 24 trillion parameter models in a single logical memory space, simplifying training workflows and boosting developer productivity.
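A quick calculation gives a sense of what that claim implies; the per-parameter budgets below are our own rough estimates, not Cerebras figures:

```python
# Rough arithmetic behind the single-memory-space claim: how many bytes
# per parameter does 1.2 PB allow for a 24-trillion-parameter model?
memory_bytes = 1.2e15   # up to 1.2 petabytes of memory, per the article
params = 24e12          # 24 trillion parameters

bytes_per_param = memory_bytes / params
print(f"~{bytes_per_param:.0f} bytes per parameter")  # ~50

# For comparison, mixed-precision training with Adam is commonly budgeted
# at roughly 16-20 bytes per parameter for weights, gradients and optimizer
# state, so the quoted capacity leaves headroom beyond the model itself.
```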

Cerebras says its CS-3 supercomputer is optimized for both enterprise and hyperscale needs, and it offers superior power efficiency and software simplicity, requiring 97% less code than GPUs for large language models (LLMs).

Cerebras CEO and co-founder Andrew Feldman said, “WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from mixture of experts to 24 trillion parameter models. We are thrilled to bring WSE-3 and CS-3 to market to help solve today’s biggest AI challenges.”

The company says it already has a backlog of orders for the CS-3 across enterprise, government, and international clouds. The CS-3 will also play a significant role in the strategic partnership between Cerebras and G42, which has already delivered 8 exaFLOPs of AI supercomputer performance via Condor Galaxy 1 and 2. A third installation, Condor Galaxy 3, is currently under construction and will be built with 64 CS-3 systems, producing 8 exaFLOPs of AI compute.
