Natural Language Processing
NLP enables computers to process and analyze large amounts of natural language data, or data that has similar characteristics, such as genomes. The goal is to create a model that “understands” the contents of documents, including the contextual nuances of the language within them.
The Challenge
Advances in NLP have made two things very clear. First, the way to achieve better results is to use bigger, more complex models, trained on massive amounts of data; model sizes are growing at an exponential rate, outpacing Moore’s Law by more than an order of magnitude. Second, even with clusters of hundreds or thousands of GPUs, a single training run takes weeks, costs millions of dollars, and consumes massive amounts of energy.
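To make “outpacing Moore’s Law by more than an order of magnitude” concrete, here is a rough back-of-the-envelope sketch in Python. The 10x-per-year model-growth rate is an illustrative assumption, not a figure from this page.

```python
# Rough back-of-the-envelope comparison (assumed rates, for illustration):
# Moore's Law doubles transistor counts roughly every 2 years, while
# state-of-the-art NLP model sizes have recently grown ~10x per year.

moore_per_year = 2 ** (1 / 2)   # ~1.41x per year (doubling every 2 years)
model_per_year = 10.0           # ~10x parameters per year (assumption)

years = 2
moore_growth = moore_per_year ** years   # 2x over two years
model_growth = model_per_year ** years   # 100x over two years

print(f"Moore's Law over {years} years:  {moore_growth:.0f}x")
print(f"Model growth over {years} years: {model_growth:.0f}x")
print(f"Gap: {model_growth / moore_growth:.0f}x")  # ~50x, well over an order of magnitude
```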
The result? Only a few major companies are able to create these models, which stifles innovation, experimentation and creativity.
Cerebras Advantage
With raw speed, seamless scaling to massive models, ease of programming, and performance boosts from 50,000-token sequence lengths and automatic sparsity acceleration, Cerebras makes the benefits of massive-scale NLP available to anyone.
With 850,000 AI-optimized cores on a single wafer-scale processor, it’s no surprise that Cerebras outperforms small chips by orders of magnitude.
What’s startling is that Cerebras Wafer-Scale Clusters deliver near-linear scaling from one to hundreds of nodes.
And our architecture enables us to handle the hottest AI technologies without clumsy workarounds.
Computer Vision
Computer Vision enables systems to autonomously identify and catalogue objects in an environment. This is done by analyzing pixels within images – identifying color, shape, and texture – to create an understanding of the scene.
The Challenge
The industry wants to use very large images, at video frame rates. The problem is that you can’t do this on existing small-chip accelerators. NVIDIA and others are limited by off-chip DRAM, in both bandwidth and capacity: a modern GPU with 40GB of slow, off-chip DRAM can fit only 2 megapixel images with UNet18, and UNet28 fits even smaller images.
There’s no way to train on larger images today without software tricks that severely compromise performance.
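To see why, consider how memory demand scales with image size. The sketch below is a minimal back-of-the-envelope calculation, assuming activation memory grows linearly with pixel count and calibrating to the 2 megapixel / 40GB figure above; the exact constants for UNet18 will differ.

```python
# Illustrative only: activation memory in a convolutional network scales
# roughly linearly with input pixel count. Calibrating to the figure above
# (a 40GB GPU barely fits a 2 megapixel image with UNet18) shows why much
# larger images are out of reach.

GPU_DRAM_GB = 40   # from the text: modern GPU with 40GB of off-chip DRAM
FITS_MP = 2        # from the text: ~2 megapixels fits with UNet18

def required_memory_gb(megapixels):
    # Assumption: memory is dominated by activations, which grow linearly
    # with pixel count; calibrated to the 2 MP / 40 GB point above.
    return GPU_DRAM_GB * (megapixels / FITS_MP)

for mp in (2, 8, 25):
    need = required_memory_gb(mp)
    verdict = "fits" if need <= GPU_DRAM_GB else "does not fit"
    print(f"{mp:>2} MP -> ~{need:.0f} GB needed ({verdict} in {GPU_DRAM_GB} GB)")
```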
Cerebras Advantage
Thanks to the sheer size of the Cerebras Wafer-Scale Engine, amplified by our unique weight-streaming technology, we can perform segmentation and classification on enormous, 25 megapixel images without breaking a sweat.
High-Performance Computing
HPC applications such as simulation and modelling are traditionally performed using very large clusters of conventional servers working together. The Cerebras architecture takes a completely different approach that makes possible revolutionary use cases that are intractable on any other available hardware platform.
The Challenge
There is a growing realization in the high-performance computing (HPC) field that the traditional scale-out approach, in which hundreds or thousands of identical compute nodes are each loaded up with GPUs or other accelerators (“node-level heterogeneity”), has limitations.
The efficiency of algorithms tends to decrease as they are split, or “sharded,” across many nodes, because moving data between those nodes is slow. Writing code for massively parallel systems is also a specialized, time-consuming skill.
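A minimal sketch of that effect, assuming a fixed per-step communication cost (the constants are illustrative assumptions, not measurements):

```python
# Sketch: why efficiency falls as work is sharded across nodes. Per-node
# compute shrinks as 1/N, but communication per step stays roughly
# constant (and often grows). Constants here are illustrative only.

def parallel_efficiency(nodes, compute_time=1.0, comm_per_step=0.05):
    """Fraction of the ideal N-way speedup actually achieved."""
    actual_step_time = compute_time / nodes + comm_per_step
    ideal_step_time = compute_time / nodes
    return ideal_step_time / actual_step_time

for n in (1, 8, 64, 512):
    print(f"{n:>3} nodes: efficiency {parallel_efficiency(n):.0%}")
# Efficiency falls from ~95% on 1 node to under 4% on 512 nodes.
```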
Cerebras Advantage
The Cerebras architecture can help to accelerate sparse linear algebra and tensor workloads, stencil-based partial differential equation (PDE) solvers, N-body problems, and spectral algorithms such as FFT that are often used for signal processing.
The Cerebras Software Development Kit (SDK) allows developers to target the WSE’s microarchitecture directly, using a familiar C-like interface to create custom kernels for their own unique applications.
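For a flavor of the kind of kernel involved, here is a plain NumPy sketch of a 5-point stencil update (one Jacobi step of a 2-D heat-equation solver). This is not Cerebras SDK (CSL) code; it only illustrates the access pattern such a custom stencil kernel would implement.

```python
import numpy as np

def jacobi_step(u, alpha=0.25):
    """One relaxation step: each interior point moves toward the average
    of its four neighbors (5-point stencil for the 2-D heat equation)."""
    new = u.copy()
    new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4 * u[1:-1, 1:-1]
    )
    return new

# Toy usage: a fixed hot edge diffusing into a cold plate.
u = np.zeros((64, 64))
u[0, :] = 100.0
for _ in range(500):
    u = jacobi_step(u)
    u[0, :] = 100.0  # re-apply the boundary condition each step
print(f"Center temperature after 500 steps: {u[32, 32]:.2f}")
```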