Rupal Hollenbeck, Formerly CMO at Cerebras Systems | June 17, 2021
Data scientists and AI researchers from the private and public sectors gathered virtually on June 10th at EPCC to learn more about the Edinburgh International Data Facility, and specifically about the Cerebras Systems–EPCC partnership. They listened to Professor Mark Parsons, Director of EPCC, and Dr. Andy Hock, VP of Product at Cerebras Systems. Professor Parsons noted that the partnership is aimed squarely at “establishing Edinburgh as the data capital of Europe.”
The projects we have seen so far are promising and diverse, representing the commercial and public sectors, as well as academic research. As Andy pointed out, “Our excitement is in the ability of this partnership to unlock new capabilities for researchers across Europe.”
The Requirements and Challenges of Deep Learning Projects
AI has transformative potential, but today’s infrastructure is not built for the requirements of deep learning. As Andy put it, “Today’s GPU solutions are suitable but not optimal for this work.” Indeed, the team at Cerebras Systems saw an opportunity to provide a transformative AI computing solution that could improve deep learning performance “not by a little bit but by a lot.”
The Benefits of Cerebras Systems to AI Research Projects
A single CS system provides the wall-clock compute performance of an entire cluster of dozens to hundreds of individual GPU processors. For organizations, this means faster insights at lower cost.
For the ML researcher, this means achieving cluster-scale performance with the programming ease of a single device. Researchers can accelerate state-of-the-art models without spending days to weeks on the setup and tuning needed to run distributed training across large clusters of small devices.
Researchers can program the system using familiar ML frameworks like TensorFlow; during the event, Andy explained that support for PyTorch would be added in September. Once a model is written, the Cerebras Graph Compiler handles everything else, automatically translating the user’s neural network graph into an optimized executable for the 400,000 cores of the CS-1 at EPCC.
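To make that workflow concrete, here is a minimal sketch of what it can look like from the researcher’s side. The model code is ordinary TensorFlow; the `CerebrasEstimator` wrapper and `cs_ip` argument shown at the end are assumptions based on Cerebras’ estimator-style integration, included for illustration rather than as verbatim API.

```python
import tensorflow as tf

# Ordinary TensorFlow model code -- nothing Cerebras-specific here.
def model_fn(features, labels, mode, params):
    x = tf.keras.layers.Dense(1024, activation="relu")(features)
    logits = tf.keras.layers.Dense(10)(x)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits))
    optimizer = tf.compat.v1.train.AdamOptimizer(params["lr"])
    train_op = optimizer.minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Hypothetical Cerebras hook (names assumed for illustration): instead of
# configuring distributed training, the researcher hands the same model_fn
# to an estimator pointed at the CS system, and the Cerebras Graph Compiler
# maps the graph onto the wafer's 400,000 cores.
#
# from cerebras.tf.cs_estimator import CerebrasEstimator
# estimator = CerebrasEstimator(model_fn, cs_ip="<CS-1 address>",
#                               params={"lr": 1e-3})
# estimator.train(input_fn=train_input_fn, steps=100_000)
```

The point of the sketch is the division of labor: the researcher writes single-device model code, and the compiler owns the parallelization.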
Further, Andy shared that we are building a set of lower-level programming tools and APIs for custom kernel development, spanning both deep learning and HPC applications. These tools will allow users to extend Cerebras’ existing software stack to support custom or emerging applications.
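As a purely illustrative sketch of what such a lower-level flow might look like from the host side, the snippet below assumes a hypothetical runtime object with load/launch/memcpy-style calls. The import path, names, and signatures here are all assumptions for illustration, not Cerebras’ published SDK interface.

```python
import numpy as np

def run_custom_kernel(artifact_dir: str, host_input: np.ndarray) -> np.ndarray:
    """Hypothetical host-side flow for a custom kernel (all names assumed):
    load a precompiled kernel, move data onto the device, launch the kernel,
    and read the results back. Sketch only -- not a documented Cerebras API."""
    from cerebras_sdk import SdkRuntime      # assumed import path

    runner = SdkRuntime(artifact_dir)        # attach to the compiled kernel
    runner.load()                            # place the kernel on the fabric
    runner.memcpy_h2d("input", host_input)   # host -> device copy (simplified)
    runner.launch("compute")                 # run the kernel's entry point
    result = runner.memcpy_d2h("output")     # device -> host copy (simplified)
    runner.stop()
    return result
```

Even at this lower level, the shape of the workflow mirrors the TensorFlow flow above: the user supplies the kernel, and the tooling handles placement and execution on the wafer.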
Indeed, with Cerebras Systems you have access to cluster-scale AI compute resources with the programming simplicity of a single node. Our system’s unique combination of enormous performance and single-node simplicity not only avoids parallel-programming complexity, but also unlocks much faster time-to-solution, from research concept to model-in-production.
Conclusion
In closing, Andy described going “from research concept to model-in-production in 4 weeks vs 4 months on a GPU cluster. And this is really just beginning. With our 2nd generation system announced earlier this quarter we are delivering even more performance.”
We invite you to explore more ideas in less time and reduce the cost of curiosity at the Edinburgh International Data Facility today.
The system is available for new projects now.