SUNNYVALE, Calif. & CAMBRIDGE, Mass.–(BUSINESS WIRE)–Cerebras Systems, the pioneer in accelerating generative AI, and Neural Magic, a leader in high-performance enterprise inference servers, today announced the groundbreaking results of their collaboration for sparse training and deployment of large language models (LLMs). Achieving an unprecedented 70% parameter reduction with full accuracy recovery, training on Cerebras CS-3 systems and deploying on Neural Magic inference server solutions enables significantly faster, more efficient, and lower cost LLMs, making them accessible to a broader range of organizations and industries.
“For the first time ever, we achieved up to 70% sparsity for a foundational model, such as Llama, with full accuracy recovery on challenging downstream tasks,” said Sean Lie, CTO and co-founder of Cerebras. “This breakthrough enables scalable training and accelerated inference – our CS-3 system provides near-theoretical acceleration for training sparse LLMs, and Neural Magic’s inference server, DeepSparse, delivers up to 8.6x faster inference than dense baseline models.”
With native hardware support for unstructured sparsity, the Cerebras CS-3 system accelerates training of models at 70% sparsity and above – far beyond what GPUs such as the H100 and B100 can realize, because GPU sparsity support is limited and rigid, capped at 50% with a fixed 2:4 pattern. With the CS-3 system, purpose-built for sparse models and offering the industry’s highest memory bandwidth, AI practitioners can apply novel techniques from Neural Magic, such as sparse pretraining and sparse fine-tuning on their own datasets, to create highly sparse LLMs without sacrificing accuracy. The result is faster, smaller models that retain the full accuracy of their slower, dense counterparts.
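For readers unfamiliar with the distinction, a toy NumPy sketch (purely illustrative – it is not the Cerebras/Neural Magic training recipe) shows why unstructured sparsity is more flexible than the fixed 2:4 structured pattern that GPU sparse tensor cores accelerate: unstructured pruning can zero any weights at any ratio, while 2:4 pruning must keep exactly two of every four consecutive weights, locking the ratio at 50%.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))  # toy dense weight matrix

# Unstructured sparsity: zero the smallest-magnitude weights globally,
# at any chosen ratio -- here 70%.
k = int(round(0.70 * w.size))
thresh = np.sort(np.abs(w), axis=None)[k - 1]
unstructured = np.where(np.abs(w) > thresh, w, 0.0)

# Fixed 2:4 structured sparsity (the GPU-style pattern): in every group
# of four consecutive weights, keep only the two largest magnitudes,
# so the ratio is locked at exactly 50%.
groups = w.reshape(-1, 4)
keep = np.argsort(np.abs(groups), axis=1)[:, 2:]  # top-2 indices per group
mask = np.zeros_like(groups)
np.put_along_axis(mask, keep, 1.0, axis=1)
structured = (groups * mask).reshape(w.shape)

print(f"unstructured sparsity: {np.mean(unstructured == 0):.2f}")
print(f"2:4 structured sparsity: {np.mean(structured == 0):.2f}")
```

The unstructured matrix reaches the target 70% zeros; the 2:4 matrix cannot exceed 50% no matter how compressible the weights are, which is the constraint the press release refers to.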
“Together with Cerebras and their purpose-built AI hardware, we created sparse, foundational models that deliver lightning-fast inference through our sparsity-aware software platform,” said Mark Kurtz, CTO of Neural Magic. “This paradigm shift provides enterprises and researchers alike with much more efficient, cost-effective, and accessible deployment of LLMs across a wide range of industries and real-world applications.”
To facilitate the adoption and further development of sparse LLMs, Cerebras and Neural Magic have released the models, recipes, implementations, and documentation of this sparsity breakthrough. For more information, please visit https://neuralmagic.com/blog/unlocking-affordable-and-sustainable-ai-through-sparse-foundational-llms/.
About Cerebras Systems
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and they make placing models on those supercomputers dead simple by avoiding the complexity of distributed computing. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises. For further information, visit https://www.cerebras.net.
About Neural Magic
Neural Magic accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As a software-delivered solution, Neural Magic optimizes open-source models, such as large language models, to run efficiently on commodity hardware. Organizations can spend less to advance AI initiatives to production without sacrificing model performance or accuracy. Founded by an MIT professor and an AI research scientist who were challenged by the constraints of existing hardware, Neural Magic enables a future where developers and IT can tap into the power of state-of-the-art, open-source AI with none of the friction.
Contacts
ZM Communications
Email: pr@zmcommunications.com