As the processing power required for AI training increases, so does the reliance on networks of processing chips. In response, California-based deep learning company Cerebras has developed the largest computer chip ever built – and it’s bigger than an iPad!

Artificial intelligence programmes are usually trained by running vast numbers of calculations and comparisons against real-world data, depending on their purpose. This intensity of computation demands a lot of processing power, which is usually provided by banks of Graphics Processing Units (GPUs). GPUs can be linked together to increase processing capability, but Cerebras’ new innovation, known as the Wafer Scale Engine, is like nothing that has ever been seen before.

Cerebras co-founder Sean Lie holding the Wafer Scale Engine. Image: Cerebras Systems

The chips that power phones and computers are made by patterning a full-sized silicon wafer using lithography (printing with light) and then cutting it into individual chips. However, Cerebras co-founder Sean Lie and his team decided to use an entire silicon wafer to produce a single chip.

This means the chip is 46,225 square millimetres in area – 56 times the size of the largest GPU – and contains 1.2 trillion transistors and 400,000 AI-optimised cores. By comparison, the largest GPU contains 21.1 billion transistors. With this technology, AI training runs that would otherwise have taken weeks could be completed within a day.
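Those headline figures hang together. As a quick back-of-the-envelope check – a minimal sketch using only the numbers quoted above, with the largest-GPU die area derived from the 56x claim rather than stated in the article – the ratios work out as follows:

    # Rough arithmetic check of the figures quoted above (illustrative only).
    wse_area_mm2 = 46_225        # Wafer Scale Engine die area
    wse_transistors = 1.2e12     # 1.2 trillion transistors
    gpu_transistors = 21.1e9     # largest contemporary GPU, per the article

    # If the WSE is 56x the size of the largest GPU, that GPU is roughly:
    implied_gpu_area_mm2 = wse_area_mm2 / 56
    print(f"Implied GPU die area: ~{implied_gpu_area_mm2:.0f} mm^2")       # ~825 mm^2

    # The transistor counts give a ratio in the same ballpark.
    print(f"Transistor ratio: ~{wse_transistors / gpu_transistors:.0f}x")  # ~57x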

Using such a large chip creates a quality-control problem. Because of the intricacies of lithography, manufacturing defects are common, and on a single wafer-sized chip every defect lands somewhere on the one device rather than ruining just one small chip among many. To guard against this, Cerebras have included around 1% extra cores, which sit idle as backups and are only brought into use when another core fails.
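To give a feel for how that kind of redundancy might work, here is a minimal, hypothetical sketch in Python: cores found to be defective at wafer test are mapped out, and spare cores silently take their place. This illustrates the general technique only – it is not Cerebras’ actual mechanism, and everything beyond the 400,000-core and 1% figures is an assumption.

    # Hypothetical sketch of spare-core redundancy (not Cerebras' actual scheme).
    TOTAL_CORES = 400_000
    SPARE_FRACTION = 0.01                       # roughly 1% held back as spares
    SPARES = int(TOTAL_CORES * SPARE_FRACTION)  # 4,000 spare cores

    def build_core_map(defective: set[int]) -> dict[int, int]:
        """Map each logical core to a working physical core, substituting
        spares for any core found defective during wafer test."""
        working = [c for c in range(TOTAL_CORES) if c not in defective]
        logical_cores = TOTAL_CORES - SPARES
        if len(working) < logical_cores:
            raise RuntimeError("More defects than spares - the wafer fails test")
        return dict(enumerate(working[:logical_cores]))

    # Example: three defective cores are silently replaced by spares.
    core_map = build_core_map(defective={7, 1_234, 399_999})
    print(len(core_map))  # 396,000 logical cores, all backed by working silicon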

The Wafer Scale Engine chip underwent intensive testing during development. Image: Cerebras Systems

This huge chip comes with a range of implementation challenges. Being almost the size of a laptop, it cannot simply be slotted into a datacentre and left to process data. During development, the team had to build specialist tools and connectors just to be able to use the device. Cerebras plan to ship the chip as part of a complete system, which includes a novel vertical cooling technology, though no price has yet been released. A small number of select customers have already received the product and are testing it in their AI development setups.

Although it may be some time before such enormous chips are in common use, this technology shows that the processing power required for AI workloads is increasing rapidly, and hardware needs to move quickly to keep up.

Written by Fraser S in News, Staff Articles.
