chip Archives - Cerebras https://www.cerebras.net/tag/chip/
MediSwift: Efficient Sparse Pre-trained Biomedical Language Models https://arxiv.org/abs/2403.00952#new_tab Mon, 20 May 2024 15:07:22 +0000 https://www.cerebras.net/?p=105485 Large language models (LLMs) are typically trained on general source data for various domains, but a recent surge in domain-specific LLMs has shown their potential to outperform general-purpose models in domain-specific tasks (e.g., biomedicine).…

Breaking the Molecular Dynamics Timescale Barrier Using a Wafer-Scale System https://arxiv.org/abs/2405.07898#new_tab Thu, 16 May 2024 00:54:00 +0000 https://www.cerebras.net/?p=105479 Molecular dynamics (MD) simulations have transformed our understanding of the nanoscale, driving breakthroughs in materials science, computational chemistry, and several other fields, including biophysics and drug design.…

Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment https://arxiv.org/abs/2405.03594#new_tab Thu, 16 May 2024 00:53:03 +0000 https://www.cerebras.net/?p=105478 Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks. We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs that achieve full accuracy recovery for fine-tuning tasks at up…

Efficient Algorithms for Monte Carlo Particle Transport on AI Accelerator Hardware https://arxiv.org/abs/2311.01739#new_tab Mon, 13 Nov 2023 18:52:26 +0000 https://www.cerebras.net/?p=105021 The recent trend toward deep learning has led to the development of a variety of highly innovative AI accelerator architectures. One such architecture, the Cerebras Wafer-Scale Engine 2 (WSE-2), features 40 GB of on-chip SRAM, making it a potentially attractive…

Position Interpolation Improves ALiBi Extrapolation https://arxiv.org/abs/2310.13017##new_tab Wed, 08 Nov 2023 22:57:03 +0000 https://www.cerebras.net/?p=104980 Linear position interpolation helps pre-trained models using rotary position embeddings (RoPE) to extrapolate to longer sequence lengths. We propose using linear position interpolation to extend the extrapolation range of models using Attention with Linear Biases (ALiBi). We find position interpolation…
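
For readers who want the mechanics: ALiBi biases each attention score by a head-specific slope times the query-key distance, and linear position interpolation rescales positions so that a longer sequence is squeezed back into the range seen during training. A minimal numpy sketch of that combination (illustrative only, not the paper's code; the slope and lengths are example values, distances are shown symmetric, and causal masking is omitted):

    import numpy as np

    def alibi_bias(seq_len, slope, position_scale=1.0):
        # ALiBi penalty: -slope * |i - j|.  Linear position interpolation shrinks
        # positions by `position_scale` so longer sequences reuse the trained range.
        pos = np.arange(seq_len) * position_scale
        return -slope * np.abs(pos[:, None] - pos[None, :])

    train_len, eval_len, slope = 2048, 8192, 0.5
    bias_in_range = alibi_bias(train_len, slope)
    bias_interp = alibi_bias(eval_len, slope, position_scale=train_len / eval_len)

With the scale factor applied, the largest distance penalty at 8,192 tokens matches the one the model saw at 2,048 tokens during training.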

Scaling the “Memory Wall” for Multi-Dimensional Seismic Processing with Algebraic Compression on Cerebras CS-2 Systems https://www.cerebras.net/publication/scaling-the-memory-wall-for-multi-dimensional-seismic-processing-with-algebraic-compression-on-cerebras-cs-2-systems Tue, 26 Sep 2023 23:42:19 +0000 https://www.cerebras.net/?p=104946

BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model https://arxiv.org/abs/2309.11568#new_tab Fri, 22 Sep 2023 17:28:00 +0000 https://www.cerebras.net/?p=104941 We introduce the Bittensor Language Model, called “BTLM-3B-8K”, a new state-of-the-art 3 billion parameter open-source language model. BTLM-3B-8K was trained on 627B tokens from the SlimPajama dataset with a mixture of 2,048 and 8,192 context lengths. BTLM-3B-8K outperforms all existing…

Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models https://arxiv.org/abs/2308.16149#new_tab Thu, 31 Aug 2023 19:39:26 +0000 https://www.cerebras.net/?p=104914 We introduce Jais and Jais-chat, new state-of-the-art Arabic-centric foundation and instruction-tuned open generative large language models (LLMs). The models are based on the GPT-3 decoder-only architecture and are pretrained on a mixture of Arabic and English texts, including source code…

Cerebras Architecture Deep Dive: First Look Inside the Hardware/Software Co-Design for Deep Learning https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/IEEE%20Micro%202023-03%20Hot%20Chips%2034%20Cerebras%20Architecture%20Deep%20Dive.pdf#new_tab Mon, 22 May 2023 20:15:11 +0000 https://www.cerebras.net/?p=104721 IEEE Micro Volume 43, Issue 3, focuses on papers from last year’s Hot Chips 34 conference. This article describes the Cerebras architecture and how it is designed specifically for this purpose, from the ground up, as a wafer-sized chip to…

Cerebras-GPT: Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster https://arxiv.org/abs/2304.03208#new_tab Fri, 07 Apr 2023 17:24:10 +0000 https://www.cerebras.net/?p=104639 We introduce Cerebras-GPT, a family of open compute-optimal language models scaled from 111M to 13B parameters. We train Cerebras-GPT models on the Eleuther Pile dataset following DeepMind Chinchilla scaling rules for efficient pre-training (highest accuracy for a given compute budget).…
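
For intuition, "compute-optimal" here follows the Chinchilla-style rule of thumb of roughly 20 training tokens per parameter, with training compute commonly estimated as about 6·N·D FLOPs; a small illustrative calculation (the exact ratios used for Cerebras-GPT may differ from this heuristic):

    def chinchilla_budget(params, tokens_per_param=20):
        # Rough compute-optimal token count and training FLOPs (~6 * N * D).
        tokens = params * tokens_per_param
        flops = 6 * params * tokens
        return tokens, flops

    for n_params in (111e6, 1.3e9, 13e9):
        tokens, flops = chinchilla_budget(n_params)
        print(f"{n_params/1e9:.3f}B params -> {tokens/1e9:.0f}B tokens, {flops:.1e} FLOPs")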

Training Giant Neural Networks Using Weight Streaming on Cerebras Wafer-Scale Systems https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/Virtual%20Booth%20Docs/CS%20Weight%20Streaming%20White%20Paper.pdf#new_tab Fri, 24 Mar 2023 18:00:44 +0000 https://cerebras.net/?p=103260 In this paper, we survey existing approaches used to scale training to clusters of compute units and explore the limitations of each in the face of giant models. We present a new paradigm for giant model training, called weight streaming,…
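
The gist of weight streaming is that activations stay resident on the accelerator while each layer's weights are streamed in from external memory on demand, and weight gradients are streamed back out. A toy numpy sketch of that execution loop under those assumptions (the WeightStore class is invented purely for illustration and is not the Cerebras software stack):

    import numpy as np

    class WeightStore:
        """Stand-in for off-accelerator weight memory, one weight matrix per layer."""
        def __init__(self, shapes, rng):
            self.weights = [rng.standard_normal(s) * 0.01 for s in shapes]
            self.grads = [np.zeros(s) for s in shapes]

        def load(self, i):                  # stream a layer's weights in
            return self.weights[i]

        def accumulate_grad(self, i, dw):   # stream its weight gradients back out
            self.grads[i] += dw

    def train_step(store, x, target, n_layers):
        # Forward: activations stay "on chip"; only one layer's weights live at a time.
        acts = [x]
        for i in range(n_layers):
            acts.append(np.maximum(acts[-1] @ store.load(i), 0.0))   # linear + ReLU
        grad = acts[-1] - target                                     # d(MSE)/d(output)
        # Backward: weights are streamed in again, gradients streamed out.
        for i in reversed(range(n_layers)):
            grad = grad * (acts[i + 1] > 0)                          # ReLU derivative
            store.accumulate_grad(i, acts[i].T @ grad)
            grad = grad @ store.load(i).T

    rng = np.random.default_rng(0)
    store = WeightStore([(64, 64)] * 3, rng)
    train_step(store, rng.standard_normal((8, 64)), rng.standard_normal((8, 64)), 3)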

Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency https://arxiv.org/abs/2303.11525#new_tab Wed, 22 Mar 2023 16:53:24 +0000 https://www.cerebras.net/?p=104632 Replacing dense layers with Sparse-IFT leads to significant improvements across computer vision (CV) and natural language processing (NLP) tasks, including ResNet-18 on ImageNet (+3.5%) and GPT-3 Small on WikiText-103 (-0.4 PPL), both matching larger dense model variants with 2x or…

SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models https://arxiv.org/abs/2303.10464#new_tab Tue, 21 Mar 2023 16:41:45 +0000 https://www.cerebras.net/?p=104624 Presented at the ICLR 2023 Workshop on Sparsity in Neural Networks.
In this work, we show the benefits of using unstructured weight sparsity to train only a subset of weights during pre-training (Sparse Pre-training) and then recover the representational capacity…
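
A toy sketch of that pattern (unstructured sparsity imposed during pre-training, then the mask simply dropped for dense fine-tuning); this is an illustrative PyTorch fragment, not the authors' code, and the 75% sparsity level is an arbitrary example:

    import torch

    def sparsify(linear, sparsity=0.75):
        # Zero the smallest-magnitude weights and keep a fixed mask for pre-training.
        w = linear.weight.data
        k = int(w.numel() * sparsity)
        threshold = w.abs().flatten().kthvalue(k).values
        linear.register_buffer("mask", (w.abs() > threshold).float())
        linear.weight.data *= linear.mask
        return linear

    def reapply_mask(linear):
        # Call after each optimizer step during sparse pre-training.
        linear.weight.data *= linear.mask

    layer = sparsify(torch.nn.Linear(512, 512))
    reapply_mask(layer)
    # Dense fine-tuning: stop reapplying the mask, so all weights are free to update.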

Wafer-Scale Fast Fourier Transforms https://arxiv.org/pdf/2209.15040.pdf#new_tab Fri, 20 Jan 2023 19:33:42 +0000 https://www.cerebras.net/?p=104308 We have implemented fast Fourier transforms for one, two, and three-dimensional arrays on the Cerebras CS-2, a system whose memory and processing elements reside on a single silicon wafer. The wafer-scale engine (WSE) encompasses a two-dimensional mesh of roughly 850,000…

GenSLMs: Genome-scale language models reveal SARS-CoV-2 evolutionary dynamics https://www.biorxiv.org/content/10.1101/2022.10.10.511571v2#new_tab Thu, 24 Nov 2022 04:58:55 +0000 https://www.cerebras.net/?p=104094 Our work seeks to transform how new and emergent variants of pandemic-causing viruses, especially SARS-CoV-2, are identified and classified. By adapting large language models (LLMs) for genomic data, we build genome-scale language models (GenSLMs) which can learn the evolutionary…

Disruptive Changes in Field Equation Modeling: A Simple Interface for Wafer Scale Engines https://arxiv.org/abs/2209.13768#new_tab Thu, 29 Sep 2022 03:47:31 +0000 https://www.cerebras.net/?p=104093 We present a high-level and accessible Application Programming Interface (API) for the solution of field equations on the Cerebras Systems Wafer-Scale Engine (WSE) with over two orders of magnitude performance gain relative to traditional distributed computing approaches. The domain-specific API…

TensorFlow as a DSL for stencil-based computation on the Cerebras Wafer-Scale Engine https://arxiv.org/abs/2210.04795#new_tab Fri, 26 Aug 2022 17:10:18 +0000 https://www.cerebras.net/?p=104540 The Cerebras Wafer Scale Engine (WSE) is an accelerator that combines hundreds of thousands of AI-cores onto a single chip. Whilst this technology has been designed for machine learning workloads, the significant amount of available raw compute means that it…

NETL Researchers Work to Unlock Potential of Artificial Intelligence in Climate Modeling https://www.environmental-expert.com/news/netl-researchers-work-to-unlock-potential-of-artificial-intelligence-in-climate-modeling-1075329#new_tab Tue, 19 Jul 2022 18:57:48 +0000 https://www.cerebras.net/?p=103743 Researchers at the U.S. National Energy Technology Laboratory (NETL) are helping the National Center for Atmospheric Research (NCAR) unlock the potential of an advanced artificial intelligence (AI) computing resource to perform critical climate modeling that could lead to better climate…

Meet the nominees for the 2022 VentureBeat Women in AI Awards! https://venturebeat.com/2022/07/08/meet-the-nominees-for-the-2022-venturebeat-women-in-ai-awards/?utm_campaign=Corporate%20PR%202022&utm_content=214438304&utm_medium=social&utm_source=twitter&hss_channel=tw-751545566778171392#new_tab Fri, 08 Jul 2022 21:34:18 +0000 https://www.cerebras.net/?p=103753 Two Cerebras Systems engineers are finalists for VentureBeat’s Women in AI awards!…

Age Checks, Theft Prevention, Minecraft AI, Autism, Responsible AI https://www.cerebras.net/in-the-news/this-week-in-enterprise-tech-498-size-matters/ Wed, 06 Jul 2022 17:50:21 +0000 https://www.cerebras.net/?p=103750

RevBiFPN: The Fully Reversible Bidirectional Feature Pyramid Network https://arxiv.org/abs/2206.14098#new_tab Tue, 28 Jun 2022 21:38:33 +0000 https://www.cerebras.net/?p=103704 This work introduces the RevSilo, the first reversible module for bidirectional multi-scale feature fusion. Like other reversible methods, RevSilo eliminates the need to store hidden activations by recomputing them. Existing reversible methods, however, do not apply to multi-scale feature fusion…

Cerebras trains 20 billion parameter AI model on a single system, sets new record https://www.datacenterdynamics.com/en/news/cerebras-trains-20-billion-parameter-ai-model-on-a-single-system-sets-new-record/#new_tab Mon, 27 Jun 2022 22:01:37 +0000 https://www.cerebras.net/?p=103757

Training a 20-Billion Parameter AI Model on a Single Processor https://www.eetimes.com/training-a-20-billion-parameter-ai-model-on-a-single-processor/#new_tab Fri, 24 Jun 2022 22:12:32 +0000 https://www.cerebras.net/?p=103682 Cerebras has shown off the capabilities of its second-generation wafer-scale engine, announcing it has set the record for the largest AI model ever trained on a single device.
For the first time, a natural language processing network with 20 billion…

Cerebras breaks record for largest AI models trained on a single device https://www.siliconrepublic.com/machines/cerebras-ai-model-trained-single-device#new_tab Thu, 23 Jun 2022 23:12:47 +0000 https://www.cerebras.net/?p=103759

Why The Cerebras CS-2 Machine is a Big Deal https://www.cerebras.net/in-the-news/why-the-cerebras-cs-2-machine-is-a-big-deal/ Thu, 23 Jun 2022 22:06:07 +0000 https://www.cerebras.net/?p=103676

Cerebras Slays GPUs, Breaks Record for Largest AI Models Trained on a Single Device https://www.tomshardware.com/news/cerebras-slays-gpus-breaks-record-for-largest-ai-models-trained-on-a-single-device#new_tab Wed, 22 Jun 2022 14:25:41 +0000 https://www.cerebras.net/?p=103667 Democratizing large AI Models without HPC scaling requirements.…

Cerebras Systems Thinks Forward on AI Chips as it Claims Performance Win https://www.hpcwire.com/2022/06/22/cerebras-systems-thinks-forward-on-ai-chips-as-it-claims-performance-win/#new_tab Wed, 22 Jun 2022 14:21:45 +0000 https://www.cerebras.net/?p=103666

Cerebras Systems sets record for largest AI models ever trained on one device https://venturebeat.com/2022/06/22/cerebras-systems-sets-record-for-largest-ai-models-ever-trained-on-one-device/#new_tab Wed, 22 Jun 2022 14:19:16 +0000 https://www.cerebras.net/?p=103665

Cerebras just built a big chip that could democratize AI https://www.protocol.com/enterprise/cerebras-ai-wafer-scale-engine#new_tab Wed, 22 Jun 2022 14:11:28 +0000 https://www.cerebras.net/?p=103662 Chip startup Cerebras has developed a foot-wide piece of silicon, compared to average chips measured in millimeters, that makes training AI cheap and easy.…

#77 – VITALIY CHILEY (Cerebras) https://www.cerebras.net/in-the-news/77-vitaliy-chiley-cerebras/ Thu, 16 Jun 2022 23:58:41 +0000 https://www.cerebras.net/?p=103761

Deep Learning Programming at Scale https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/Whitepapers/Cerebras-Whitepaper-ProgrammingAtScale.pdf#new_tab Mon, 06 Jun 2022 18:27:06 +0000 https://cerebras.net/?p=103263 Deep learning has become one of the most important computational workloads of our generation, advancing applications across industries from healthcare to autonomous driving. But it is also profoundly computationally intensive. (Updated June 2022.)…

Eine neue Maschine für KI und HPC [A New Machine for AI and HPC] https://www.bigdata-insider.de/eine-neue-maschine-fuer-ki-und-hpc-a-1f37d4c50b8f524dc9f9ee84121d4ce6/#new_tab Thu, 02 Jun 2022 18:42:32 +0000 https://www.cerebras.net/?p=103593

NCSA Deploys Cerebras CS-2 in New HOLL-I Supercomputer for Large-Scale AI https://www.hpcwire.com/off-the-wire/ncsa-deploys-cerebras-cs-2-in-new-holl-i-supercomputer-for-large-scale-ai/#new_tab Wed, 01 Jun 2022 00:19:48 +0000 https://www.cerebras.net/?p=103766

Leading Supercomputer Sites Choose Cerebras for AI Acceleration https://aithority.com/computing/leading-supercomputer-sites-choose-cerebras-for-ai-acceleration/#new_tab Wed, 01 Jun 2022 00:12:23 +0000 https://www.cerebras.net/?p=103764

LRZ Adds Mega AI System as It Stacks up on Future Computing Systems https://www.hpcwire.com/2022/05/25/lrz-adds-mega-ai-aystem-as-it-stacks-up-on-future-computing-systems/#new_tab Fri, 27 May 2022 04:35:12 +0000 https://www.cerebras.net/?p=103579

HPE, Cerebras build AI supercomputer for scientific research https://www.theregister.com/2022/05/25/hpe_cerebras_lrz/#new_tab Fri, 27 May 2022 04:31:27 +0000 https://www.cerebras.net/?p=103578

HPE is building a rapid AI supercomputer powered by the world’s largest CPU https://www.techradar.com/news/hpe-is-building-a-rapid-ai-supercomputer-powered-by-the-worlds-largest-cpu#new_tab Fri, 27 May 2022 00:27:46 +0000 https://www.cerebras.net/?p=103767

Bio-IT World Judges, Community Honor Six Outstanding New Products https://www.bio-itworld.com/news/2022/05/05/bio-it-world-judges-community-honor-six-outstanding-new-products#new_tab Fri, 06 May 2022 00:20:42 +0000 https://www.cerebras.net/?p=103571

Argonne Talks AI Accelerators for COVID Research https://www.hpcwire.com/2022/04/28/argonne-talks-ai-accelerators-for-covid-research/#new_tab Fri, 29 Apr 2022 00:03:11 +0000 https://www.cerebras.net/?p=103570

Cerebras Systems’ dinner plate-sized chips are revolutionizing the field of AI https://thenextweb.com/news/cerebras-systems-dinner-plate-sized-chips-are-revolutionizing-field-of-ai#new_tab Thu, 28 Apr 2022 23:59:47 +0000 https://www.cerebras.net/?p=103568 When the chip is the size of a big pizza pie… that’s Cerebras…

Accelerating insights in large scale AI projects https://www.enterpriseai.news/2022/04/25/accelerating-insights-in-large-scale-ai-projects/#new_tab Mon, 25 Apr 2022 23:38:27 +0000 https://www.cerebras.net/?p=103566

A Templated C++ Interface for ISL https://cerebras.net/wp-content/uploads/2021/04/IMPACT_2021_paper_2.pdf#new_tab Sat, 23 Apr 2022 04:24:02 +0000 https://cerebras.net/?p=103302 Polyhedral libraries typically support only a very limited collection of types for representing objects, corresponding to broad mathematical classes such as sets, binary relations and functions.…

Cerebras, TotalEnergies Announce Stencil Algorithm Leap https://www.hpcwire.com/2022/04/21/cerebras-totalenergies-announce-stencil-algorithm-leap/#new_tab Fri, 22 Apr 2022 16:38:27 +0000 https://cerebras.net/?p=103292

How healthcare and pharmaceutical research will accelerate through AI https://www.cerebras.net/in-the-news/how-healthcare-and-pharmaceutical-research-will-accelerate-through-ai/ Wed, 20 Apr 2022 22:28:48 +0000 https://cerebras.net/?p=103249

Cerebras Expands Support for Pytorch and Tensorflow Machine Learning Frameworks on the Wafer-Scale Engine 2 Processors that Power Its CS-2 System https://www.marktechpost.com/2022/04/20/cerebras-expands-support-for-pytorch-and-tensorflow-machine-learning-frameworks-on-the-wafer-scale-engine-2-processors-that-power-its-cs-2-system/#new_tab Wed, 20 Apr 2022 22:26:50 +0000 https://cerebras.net/?p=103247 Deep learning has emerged as our generation’s most critical computing job. Tasks that were once the unique realm of humans are now regularly executed at human or superhuman levels by computers.…

Accelerating Discovery: Andrew Feldman, Co-Founder and CEO, Cerebras Systems https://eclipse.vc/blog/accelerating-discovery-andrew-feldman-co-founder-and-ceo-cerebras-systems/?utm_campaign=Corporate%20PR%202022&utm_content=205225902&utm_medium=social&utm_source=twitter&hss_channel=tw-751545566778171392#new_tab Wed, 20 Apr 2022 21:51:11 +0000 https://cerebras.net/?p=103246

The World’s Largest Chip Just Received A Major Machine Learning-Flavored Upgrade https://www.indianext.co.in/the-worlds-largest-chip-just-received-a-major-machine-learning-flavored-upgrade/#new_tab Fri, 15 Apr 2022 19:03:02 +0000 https://www.cerebras.net/?p=103780

PSC UPGRADES NEOCORTEX AI SUPERCOMPUTER WITH NEW CEREBRAS ENGINES https://www.nextplatform.com/2022/04/14/psc-upgrades-neocortex-ai-supercomputer-with-new-cerebras-engines/#new_tab Thu, 14 Apr 2022 22:35:43 +0000 https://cerebras.net/?p=103251 If you were going to build an electronic brain in 2022, it might look something like the Neocortex supercomputer at the Pittsburgh Supercomputing Center at Carnegie Mellon University. That machine, which was only installed last year, has now got a…

Massively scalable stencil algorithm https://arxiv.org/pdf/2204.03775.pdf#new_tab Thu, 07 Apr 2022 17:54:07 +0000 https://www.cerebras.net/?p=103623 Stencil computations lie at the heart of many scientific and industrial applications. Unfortunately, stencil algorithms perform poorly on machines with a cache-based memory hierarchy, due to low reuse of memory accesses. This work shows that for stencil computation a novel…
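
For context, a stencil update recomputes each grid point from a small, fixed neighborhood, which is why data reuse per memory access is low on cache-based machines; a minimal 2D five-point example (illustrative only, unrelated to the paper's wafer-scale mapping):

    import numpy as np

    def five_point_sweep(u):
        # One Jacobi-style sweep: every interior point becomes the average of its
        # four neighbors -- the classic 5-point stencil.
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
        return v

    grid = np.random.rand(256, 256)
    for _ in range(10):
        grid = five_point_sweep(grid)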

Powering Extreme-Scale HPC with Cerebras WaferScale Accelerators https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/Powering-Extreme-Scale-HPC-with-Cerebras.pdf#new_tab Wed, 06 Apr 2022 23:54:21 +0000 https://cerebras.net/?p=103256 In this paper, we will explore the challenges facing HPC developers today and show how the Cerebras architecture can help to accelerate sparse linear algebra and tensor workloads, stencil-based partial differential equation (PDE) solvers, N-body problems, and spectral algorithms such…

The Cerebras Software Development Kit: A Technical Overview https://f.hubspotusercontent30.net/hubfs/8968533/Cerebras%20SDK%20Technical%20Overview%20White%20Paper.pdf?utm_campaign=Tech%20Leadership%20PR%202022&utm_source=SDK_WP#new_tab Tue, 08 Feb 2022 01:06:52 +0000 https://cerebras.net/?p=103259 Cerebras has introduced a new software development kit (SDK) which allows anyone to take advantage of the strengths of the CS-2 system. Developers can use the Cerebras SDK to create custom kernels for their standalone applications or modify the kernel…

Epigenomic language models powered by Cerebras https://arxiv.org/abs/2112.07571#new_tab Thu, 27 Jan 2022 04:25:10 +0000 https://cerebras.net/?p=103293 Large scale self-supervised pre-training of Transformer language models has advanced the field of Natural Language Processing and shown promise in cross-application to the biological `languages’ of proteins and DNA. Learning effective representations of DNA sequences using large genomic sequence corpuses…

GlaxoSmithKline and Cerebras are Advancing the State of the Art in AI for Drug Discovery https://www.cerebras.net/blog/glaxosmithkline-and-cerebras-are-advancing-the-state-of-the-art-in-ai-for-drug-discovery/ Wed, 26 Jan 2022 15:55:45 +0000 https://cerebras.net/?p=1957 Kim Branson, SVP & Global Head of Artificial Intelligence and Machine Learning, Meredith Trotter, and Stephen Young, GSK.
Natalia Vassilieva, Director of Product, Machine Learning, and Rebecca Lewington, Technology Evangelist, Cerebras Systems.
January 26, 2022
Artificial intelligence has the potential…

A Big Chip for Big Science: Watching the COVID-19 Virus in Action https://www.cerebras.net/blog/a-big-chip-for-big-science-watching-the-covid-19-virus-in-action/ Tue, 14 Dec 2021 14:00:42 +0000 https://cerebras.net/?p=1918

Microprocessor at 50. The Path to Successful Wafer-Scale Integration: The Cerebras Story https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/IEEE%20Micro%202021-11%20Path%20to%20Wafer-Scale%20Integration.pdf#new_tab Fri, 19 Nov 2021 22:38:04 +0000 https://www.cerebras.net/?p=104723 IEEE Micro Volume 41, Issue 6, took a look back at the first 50 years of the microprocessor, and forward to what’s next. It featured this article by Gary Lauterbach, Co-Founder and Chief Technology Officer of Cerebras Systems, which…

Intelligent Resolution: Integrating Cryo-EM with AI-driven Multi-resolution Simulations to Observe the SARS-CoV-2 Replication-Transcription Machinery in Action https://www.biorxiv.org/content/10.1101/2021.10.09.463779v1.full.pdf#new_tab Thu, 18 Nov 2021 05:25:03 +0000 https://cerebras.net/?p=103303 The severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) replication transcription complex (RTC) is a multi-domain protein responsible for replicating and transcribing the viral mRNA inside a human cell. Attacking RTC function with pharmaceutical compounds is a pathway to treating COVID-19.…

Cerebras – Eye On AI https://www.eye-on.ai/podcast-086#new_tab Mon, 18 Oct 2021 14:52:08 +0000 https://cerebras.net/?p=103552 Andrew Feldman, one of the founders and CEO of Cerebras Systems, talks about the company’s wafer-scale computer chip optimized for machine learning and about the network of chips that the company has built that has as much computing power as a…

Cerebras Systems Enables Brain-scale AI https://f.hubspotusercontent30.net/hubfs/8968533/Cerebras%20Lays%20the%20Foundation%20for%20Brain-Scale%20AI.pdf#new_tab Wed, 22 Sep 2021 00:12:54 +0000 https://cerebras.net/?p=103261 This research paper explores Cerebras Systems’ approach to creating brain-scale AI and the new technologies that could enable that feat. But first, let’s put this discussion into the proper context. Just how big is a 120 trillion-parameter model?…
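
For a rough sense of scale, a back-of-the-envelope estimate of the weight memory alone (assuming 16-bit weights, i.e. 2 bytes per parameter; optimizer state would multiply this further):

    params = 120e12            # 120 trillion parameters
    bytes_per_param = 2        # 16-bit (fp16/bf16) weights, as an assumption
    print(f"~{params * bytes_per_param / 1e12:.0f} TB just to hold the weights")   # ~240 TB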

Stream-AI-MD: streaming AI-driven adaptive molecular simulations for heterogeneous computing platforms https://dl.acm.org/doi/10.1145/3468267.3470578#new_tab Tue, 06 Jul 2021 04:02:23 +0000 https://cerebras.net/?p=103299 Emerging hardware tailored for artificial intelligence (AI) and machine learning (ML) methods provide novel means to couple them with traditional high performance computing (HPC) workflows involving molecular dynamics (MD) simulations. We propose Stream-AI-MD, a novel instance of applying deep learning…

Limits to Scale-Out for Training Language Models https://f.hubspotusercontent30.net/hubfs/8968533/Cerebras-Whitepaper_ScalingNLPTraining-4.pdf#new_tab Fri, 25 Jun 2021 00:23:58 +0000 https://cerebras.net/?p=103262 Natural language processing has revolutionized how data is consumed, meaning that computational demand has skyrocketed. Companies in every industry are using graphics processing unit (GPU) clusters to keep up. But is this really the best solution?…

Train Large BERT Models Faster with Cerebras Systems https://f.hubspotusercontent30.net/hubfs/8968533/Cerebras-Whitepaper_ScalingBERT_V6.pdf#new_tab Tue, 25 May 2021 00:39:22 +0000 https://cerebras.net/?p=103266 Unstructured text is one of the largest human-generated data sources. Web data, academic publications, emails, traditional media, texts, instant messages, digital records, social media — all hold an enormous volume of unstructured text.…

Cerebras Systems: Achieving Industry Best AI Performance Through A Systems Approach https://8968533.fs1.hubspotusercontent-na1.net/hubfs/8968533/Whitepapers/Cerebras-CS-2-Whitepaper.pdf#new_tab Wed, 07 Apr 2021 00:51:34 +0000 https://cerebras.net/?p=103454 The CS-2 is a system solution that consists of innovations across three dimensions: a) the second-generation Cerebras Wafer Scale Engine (WSE-2) — the industry’s largest and only multi-trillion-transistor processor, b) the Cerebras System and c) the Cerebras software platform.…

Memory Efficient 3D U-Net with Reversible Mobile Inverted Bottlenecks for Brain Tumor Segmentation https://www.springerprofessional.de/en/memory-efficient-3d-u-net-with-reversible-mobile-inverted-bottle/19007114#new_tab Sat, 06 Mar 2021 04:40:10 +0000 https://cerebras.net/?p=103295 We propose combining memory saving techniques with traditional U-Net architectures to increase the complexity of the models on the Brain Tumor Segmentation (BraTS) challenge. The BraTS challenge consists of a 3D segmentation of a 240 × 240 × 155 × 4 input image…

Pipelined Backpropagation at Scale: Training Large Models without Batches https://proceedings.mlsys.org/paper/2021/hash/9b8619251a19057cff70779273e95aa6-Abstract.html#new_tab Mon, 01 Mar 2021 18:29:17 +0000 https://cerebras.net/?p=103242 New hardware can substantially increase the speed and efficiency of deep neural network training. To guide the development of future hardware architectures, it is pertinent to explore the hardware and machine learning properties of alternative training algorithms.…

System Integration of Neocortex, a Unique, Scalable AI Platform https://dl.acm.org/doi/abs/10.1145/3437359.3465604#new_tab Thu, 04 Feb 2021 05:21:34 +0000 https://cerebras.net/?p=103301 The Pittsburgh Supercomputing Center, in partnership with Cerebras Systems and Hewlett Packard Enterprise, has deployed Neocortex, an innovative computing platform that accelerates scientific discovery by vastly shortening the time required for deep learning training and fosters greater integration of deep…

EPCC Selects Cerebras Systems AI Supercomputer to Rapidly Accelerate AI Research https://www.businesswire.com/news/home/20210203005062/en/EPCC-Selects-Cerebras-Systems-AI-Supercomputer-to-Rapidly-Accelerate-AI-Research#new_tab Thu, 04 Feb 2021 01:22:55 +0000 https://cerebras.net/?p=103524

Fast Stencil-Code Computation on a Wafer-Scale Processor https://arxiv.org/abs/2010.03660#new_tab Fri, 23 Oct 2020 04:00:21 +0000 https://cerebras.net/?p=103298 The performance of CPU-based and GPU-based systems is often low for PDE codes, where large, sparse, and often structured systems of linear equations must be solved. Iterative solvers are limited by data movement, both between caches and memory and between…

The curious case of developmental BERTology: On sparsity, transfer learning, generalization and the brain https://arxiv.org/abs/2007.03774#new_tab Wed, 08 Jul 2020 03:57:12 +0000 https://cerebras.net/?p=103297 In this essay, we explore a point of intersection between deep learning and neuroscience, through the lens of large language models, transfer learning and network compression.…

Generating SIMD Instructions for Cerebras CS-1 using Polyhedral Compilation Techniques https://cerebras.net/wp-content/uploads/2021/04/IMPACT_2020_paper_3.pdf#new_tab Sun, 23 Feb 2020 05:03:52 +0000 https://cerebras.net/?p=103300 The Cerebras CS-1 is a computing system based on a wafer-scale processor having nearly 400,000 compute cores. It is intended for training of and inference on deep neural networks.…

Online Normalization for Training Neural Networks https://papers.nips.cc/paper/2019/hash/cb3ce9b06932da6faaa7fc70d5b5d2f4-Abstract.html#new_tab Fri, 29 Nov 2019 23:15:48 +0000 https://cerebras.net/?p=103346

Online Normalization for Training Neural Networks, NeurIPS 2019 https://papers.nips.cc/paper/2019/hash/cb3ce9b06932da6faaa7fc70d5b5d2f4-Abstract.html#new_tab Thu, 16 May 2019 03:48:47 +0000 https://cerebras.net/?p=103296 Online Normalization is a new technique for normalizing the hidden activations of a neural network. Like Batch Normalization, it normalizes the sample dimension. While Online Normalization does not use batches, it is as accurate as Batch Normalization.…
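
The core idea, roughly, is to replace batch statistics with running estimates updated one sample at a time; a simplified forward-pass sketch (the decay constant is illustrative, and the method's backward-pass corrections are omitted here):

    import numpy as np

    class OnlineNorm:
        def __init__(self, dim, decay=0.99, eps=1e-5):
            self.mu = np.zeros(dim)      # running mean
            self.var = np.ones(dim)      # running variance
            self.decay, self.eps = decay, eps

        def __call__(self, x):
            # Normalize one sample with the current running statistics, then update
            # them -- no batch dimension is ever needed.
            y = (x - self.mu) / np.sqrt(self.var + self.eps)
            self.mu = self.decay * self.mu + (1 - self.decay) * x
            self.var = self.decay * self.var + (1 - self.decay) * (x - self.mu) ** 2
            return y

    norm = OnlineNorm(64)
    out = norm(np.random.randn(64))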
