'No one knows yet': Donut design could create quadrillion-transistor compute monster, as analysts discuss unusual interconnection and Cerebras CEO
acknowledges that we don't know what happens when multiple WSEs are connected
Date:
Tue, 28 May 2024 04:25:20 +0000
Description:
Cerebras' Wafer-Scale Engine excelled at a scientific simulation, but could
do even better if multiple WSEs were lashed together.
FULL STORY ======================================================================
Tri-Labs (comprising three major US research institutions: the Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Los Alamos National Laboratory (LANL)) has been working with AI firm Cerebras on a number of scientific problems, including breaking the molecular dynamics (MD) timescale barrier.
There's a paper explaining this particular challenge, which you can read here, but essentially it refers to the problem of running molecular dynamics simulations over a longer timescale than would normally be possible.
The barriers here are twofold: computational power and communication latency between different nodes of an HPC system. Traditionally, to compensate for the lack of computational power, scientists assign more work to each node and scale up the simulation size with the node count. Unfortunately, the slow inter-node communication caused by high latency further exacerbates the timescale problem.
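To see why adding nodes doesn't buy more simulated time, consider a minimal back-of-the-envelope model in Python (a sketch only: the per-atom compute cost and the latency figure below are illustrative assumptions, not numbers from the paper). Each timestep's wall time is roughly the per-node compute time plus a fixed inter-node communication cost, so past a certain point the latency term dominates and timesteps per second stop improving:

# Toy model of the MD timescale barrier. All constants except the
# atom count are illustrative assumptions, not figures from the paper.
ATOMS = 801_792          # atoms per lattice in the Tri-Labs simulations
COMPUTE_PER_ATOM = 5e-9  # assumed seconds of compute per atom per timestep
LATENCY = 5e-6           # assumed fixed inter-node communication cost (s)
DT = 1e-15               # a typical MD timestep: one femtosecond

for nodes in (1, 64, 4_096, 262_144):
    atoms_per_node = ATOMS / nodes
    step_wall_time = atoms_per_node * COMPUTE_PER_ATOM + LATENCY
    steps_per_sec = 1.0 / step_wall_time
    sim_ns_per_day = steps_per_sec * 86_400 * DT * 1e9
    print(f"{nodes:>7} nodes: {steps_per_sec:>10,.0f} steps/s, "
          f"{sim_ns_per_day:,.0f} ns of simulated time per day")

However many nodes are thrown at the problem, the fixed communication cost caps timesteps per second (at 1/LATENCY in this toy model), which is precisely the ceiling a wafer-scale design, with every core on a single piece of silicon, is built to sidestep.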
Like a donut

MD simulations are crucial to several scientific fields, as they bridge the gap between quantum electronic methods and continuum mechanics methods. However, these simulations run into timescale limitations: they have to resolve atomic vibrations, which take place over very short timescales, while also capturing phenomena that unfold over much longer periods.
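For a rough sense of the scale involved (simple arithmetic, not figures from the paper), timesteps have to be short enough to resolve those atomic vibrations, typically around a femtosecond, so even modest stretches of simulated time demand staggering step counts:

# Why femtosecond timesteps make long timescales hard to reach.
DT = 1e-15  # ~1 femtosecond per timestep, to resolve atomic vibrations

for target_s, label in ((1e-9, "1 nanosecond"),
                        (1e-6, "1 microsecond"),
                        (1e-2, "10 milliseconds")):
    print(f"{label:>15}: {target_s / DT:.0e} timesteps")

Ten milliseconds of simulated time means on the order of 10^13 femtosecond steps, which is why throughput in timesteps per second, rather than raw floating-point performance, is the quantity that matters here.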
The authors of the paper sought to overcome the timescale barrier by
employing a more efficient computational system, specifically Cerebras' Wafer-Scale Engine.
As The Next Platform explains, "The specific simulation was to beam radiation into three different crystal lattices made of tungsten, copper, and tantalum. In these particular simulations, which were for 801,792 atoms in each lattice, the idea is to bombard the lattices with radiation and see what happens."
Running the simulations on Frontier, the world's fastest supercomputer, based at the Oak Ridge National Laboratory in Tennessee, and on Quartz at LLNL, scientists were only able to witness nanoseconds of what was happening to the lattices as they were bombarded with radiation. Using the WSE, they were able to watch tens of milliseconds of what happened.
For the tests, Tri-Labs used Cerebras' Wafer-Scale Engine 2 (WSE-2) rather than the newer and more powerful WSE-3, launched earlier this year, but as detailed above the results were impressive; with one core assigned to each simulated atom, the 801,792-atom lattices fit comfortably within the WSE-2's 850,000 cores. As the paper reports, "By dedicating a processor core for each simulated atom, we demonstrate a 179-fold improvement in timesteps per second versus the Frontier GPU-based Exascale platform, along with a large improvement in timesteps per unit energy. Reducing every year of runtime to two days unlocks currently inaccessible timescales of slow microstructure transformation processes that are critical for understanding material behavior and function."
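The "year of runtime to two days" framing follows directly from the 179-fold speedup, as a quick sanity check shows:

# Sanity check: a 179x speedup in timesteps per second turns a year
# of runtime into roughly two days.
speedup = 179
print(f"{365.25 / speedup:.2f} days")  # -> 2.04 days

A campaign that would previously have needed a year of machine time becomes a routine two-day run, which is what unlocks the slow microstructure transformation processes the authors describe.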
The Next Platform's Timothy Prickett Morgan asked Cerebras CEO and co-founder Andrew Feldman what happens when you connect multiple wafer-scale engines together and try to run the same simulation, and was told "no one knows yet."
Prickett Morgan went on to note that the proprietary interconnect in the WSE-2 systems could scale to 192 devices, and that with the WSE-3 that number was boosted by more than an order of magnitude, to 2,048 devices; he strongly suspects that the same scaling principles apply to WSEs as apply to GPUs and CPUs.
He went on to suggest, however, that there could be some way to lash WSEs together physically and make a stovepipe of squares of interconnected WSEs, potentially creating a donut design with power running on the inside and cooling on the outside. As Prickett Morgan concludes, "This kind of configuration could not be worse than using InfiniBand or Ethernet to interlink CPUs or GPUs."

More from TechRadar Pro

- 'The fastest AI chip in the world': Gigantic AI CPU has almost one million cores
- World's largest chip gets beefier: 850 thousand cores for AI
- PC with six Nvidia RTX 4090 GPUs and liquid cooling finally gets tested
======================================================================
Link to news story:
https://www.techradar.com/pro/no-one-knows-yet-donut-design-could-create-quadrillion-transitor-compute-monster-analysts-discuss-unusual-interconnection-as-cerebras-ceo-acknowledges-that-we-dont-know-what-happens-when-multiple-wses-are-connected
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)