weight streaming Archives - Cerebras
https://www.cerebras.net/tag/weight-streaming/

MediSwift: Efficient Sparse Pre-trained Biomedical Language Models
https://arxiv.org/abs/2403.00952 (May 20, 2024)
Large language models (LLMs) are typically trained on general source data spanning many domains, but a recent surge in domain-specific LLMs has shown their potential to outperform general-purpose models on domain-specific tasks (e.g., biomedicine).…

Breaking the Molecular Dynamics Timescale Barrier Using a Wafer-Scale System
https://arxiv.org/abs/2405.07898 (May 16, 2024)
Molecular dynamics (MD) simulations have transformed our understanding of the nanoscale, driving breakthroughs in materials science, computational chemistry, and several other fields, including biophysics and drug design.…

Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment
https://arxiv.org/abs/2405.03594 (May 16, 2024)
Large language models (LLMs) have revolutionized natural language processing (NLP), but their size creates computational bottlenecks. We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs that achieve full accuracy recovery for fine-tuning tasks at up…
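
The abstract is truncated above, but the core idea, pruning a dense model to a sparse one and continuing to train under the sparsity constraint, lends itself to a short illustration. The sketch below uses generic magnitude-based masking in PyTorch; it is not the paper's actual recipe, and the 50% sparsity level and layer size are invented for the example.

```python
# Hedged sketch: generic unstructured weight sparsity via magnitude masking.
# Not the paper's method; all parameters here are illustrative.
import torch

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask that zeroes the smallest-magnitude weights."""
    k = int(sparsity * weight.numel())          # number of weights to prune
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()   # keep only the larger weights

linear = torch.nn.Linear(1024, 1024, bias=False)
mask = magnitude_mask(linear.weight.data, sparsity=0.5)
linear.weight.data *= mask   # prune once; re-apply after each optimizer step
```

During sparse training, the mask is typically re-applied after every optimizer step so that pruned weights stay at zero.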

Efficient Algorithms for Monte Carlo Particle Transport on AI Accelerator Hardware
https://arxiv.org/abs/2311.01739 (November 13, 2023)
The recent trend toward deep learning has led to the development of a variety of highly innovative AI accelerator architectures. One such architecture, the Cerebras Wafer-Scale Engine 2 (WSE-2), features 40 GB of on-chip SRAM, making it a potentially attractive…
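
For readers unfamiliar with the workload class, the toy sketch below runs a minimal 1D Monte Carlo particle-transport simulation (exponential free flights through a slab, with absorption at collisions). It is only a generic illustration of the method, not the paper's algorithm, and every parameter is made up for the example.

```python
# Toy 1D slab transport: estimate the fraction of particles that escape.
# Generic illustration only; not the algorithm from the paper.
import math
import random

def transmitted_fraction(n_particles: int, slab_width: float,
                         sigma_t: float, absorb_prob: float,
                         seed: int = 0) -> float:
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        x = 0.0
        while True:
            # Sample an exponentially distributed free-flight distance.
            x += -math.log(1.0 - rng.random()) / sigma_t
            if x >= slab_width:     # particle escaped the slab
                transmitted += 1
                break
            if rng.random() < absorb_prob:
                break               # particle absorbed; history ends
            # Otherwise treat the collision as forward scattering and continue.
    return transmitted / n_particles

print(transmitted_fraction(100_000, slab_width=2.0, sigma_t=1.0, absorb_prob=0.3))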

Position Interpolation Improves ALiBi Extrapolation
https://arxiv.org/abs/2310.13017 (November 8, 2023)
Linear position interpolation helps pre-trained models using rotary position embeddings (RoPE) extrapolate to longer sequence lengths. We propose using linear position interpolation to extend the extrapolation range of models using Attention with Linear Biases (ALiBi). We find position interpolation…
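
The technique in the excerpt is easy to sketch: ALiBi adds a bias of -m·(i - j) to the attention logit between query position i and key position j (with a per-head slope m), and linear position interpolation rescales relative distances by train_len / seq_len so longer sequences map back into the trained range. The PyTorch sketch below is a hedged illustration under those assumptions; the paper's exact scaling scheme may differ.

```python
# Causal ALiBi bias for one head, with optional linear position interpolation.
# Hedged illustration; slope and lengths below are example values.
import torch

def alibi_bias(seq_len: int, slope: float,
               train_len: int | None = None) -> torch.Tensor:
    pos = torch.arange(seq_len)
    dist = (pos[:, None] - pos[None, :]).float()   # dist[i, j] = i - j
    if train_len is not None and seq_len > train_len:
        dist = dist * (train_len / seq_len)        # linear position interpolation
    bias = -slope * dist
    future = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    return bias.masked_fill(future, float("-inf"))  # mask future positions

bias = alibi_bias(seq_len=512, slope=0.5, train_len=128)
print(bias.shape)   # torch.Size([512, 512]); add this to attention logits
```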

Scaling the “Memory Wall” for Multi-Dimensional Seismic Processing with Algebraic Compression on Cerebras CS-2 Systems
https://www.cerebras.net/publication/scaling-the-memory-wall-for-multi-dimensional-seismic-processing-with-algebraic-compression-on-cerebras-cs-2-systems (September 26, 2023)

BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model
https://arxiv.org/abs/2309.11568 (September 22, 2023)
We introduce the Bittensor Language Model, called “BTLM-3B-8K”, a new state-of-the-art 3 billion parameter open-source language model. BTLM-3B-8K was trained on 627B tokens from the SlimPajama dataset with a mixture of 2,048 and 8,192 context lengths. BTLM-3B-8K outperforms all existing…
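
As a hedged illustration of what training on “a mixture of 2,048 and 8,192 context lengths” can look like at the data level, the sketch below packs a token stream into sequences of those two lengths. The 75/25 ratio and the drop-the-ragged-tail policy are invented for the example; the actual BTLM-3B-8K schedule is not specified in the excerpt.

```python
# Hedged sketch: pack a token stream into mixed 2,048/8,192-token sequences.
# Ratio and tail handling are invented; not the paper's actual schedule.
import random

def pack_mixed_contexts(tokens: list[int], short_frac: float = 0.75,
                        seed: int = 0) -> list[list[int]]:
    rng = random.Random(seed)
    sequences, i = [], 0
    while i < len(tokens):
        length = 2048 if rng.random() < short_frac else 8192
        seq = tokens[i:i + length]
        if len(seq) == length:        # drop the ragged tail for simplicity
            sequences.append(seq)
        i += length
    return sequences

stream = list(range(50_000))          # stand-in for a tokenized corpus
seqs = pack_mixed_contexts(stream)
print(len(seqs), [len(s) for s in seqs[:5]])
```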

Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models
https://arxiv.org/abs/2308.16149 (August 31, 2023)
We introduce Jais and Jais-chat, new state-of-the-art Arabic-centric foundation and instruction-tuned open generative large language models (LLMs). The models are based on the GPT-3 decoder-only architecture and are pretrained on a mixture of Arabic and English texts, including source code…

Cerebras Architecture Deep Dive: First Look Inside the Hardware/Software Co-Design for Deep Learning
https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/IEEE%20Micro%202023-03%20Hot%20Chips%2034%20Cerebras%20Architecture%20Deep%20Dive.pdf (May 22, 2023)
IEEE Micro Volume 43, Issue 3, focuses on papers from last year’s Hot Chips 34 conference. This article describes the Cerebras architecture and how it was designed specifically for this purpose, from the ground up, as a wafer-sized chip to…

Deep Learning Programming at Scale
https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/Whitepapers/Cerebras-Whitepaper-ProgrammingAtScale.pdf (June 6, 2022)
Deep learning has become one of the most important computational workloads of our generation, advancing applications across industries from healthcare to autonomous driving. But it is also profoundly computationally intensive. (Updated June 2022.)…
