This is what $1 billion worth of AI GPUs looks like
Elon Musk publishes video tour of Cortex, Tesla's AI training supercluster powered by Nvidia's now-obsolete H100
Date:
Fri, 30 Aug 2024 17:34:00 +0000
Description:
Elon Musk has provided a short video glimpse of Tesla's AI supercluster, Cortex
FULL STORY ======================================================================
We love a good look inside a supercomputer, with one of our recent favorites being the glimpse Nvidia gave us of Eos, the ninth-fastest supercomputer on the planet.
Now, Elon Musk has provided a peek at the massive AI supercluster, newly dubbed Cortex, being built by Tesla.
The supercluster, currently under construction at Tesla's Giga Texas plant, is set to house 70,000 AI servers, with an initial power and cooling requirement of 130 megawatts, scaling up to 500 megawatts by 2026.

Tesla's AI strategy
In the video, embedded below, Musk shows rows upon rows of server racks, potentially holding up to 2,000 GPU servers - just a fraction of the 50,000 Nvidia H100 GPUs and 20,000 Tesla hardware units expected to eventually populate Cortex. The video, although brief, offers a rare inside look at the infrastructure that will soon drive Tesla's most ambitious AI projects.

"Video of the inside of Cortex today, the giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI" - pic.twitter.com/DwJVUWUrb5, August 26, 2024
Cortex is being developed to advance Tesla's AI capabilities, particularly for training the Full Self-Driving (FSD) autopilot system used in its cars and the Optimus robot, an autonomous humanoid set for limited production in 2025. The supercluster's cooling system, featuring massive fans and Supermicro-provided liquid cooling, is designed to handle the extensive power demands, which, Tom's Hardware points out, are comparable to those of a large coal power plant.
Cortex is part of Musk's broader strategy to deploy several supercomputers, including the operational Memphis Supercluster, which is powered by 100,000 Nvidia H100 GPUs, and the upcoming $500 million Dojo supercomputer in
Buffalo, New York.
Despite some delays in upgrading to Nvidia's latest Blackwell GPUs, Musk's aggressive acquisition of AI hardware shows how keen Tesla is to be at the forefront of AI development.
The divisive billionaire said earlier this year that the company was planning to spend "over a billion dollars" on Nvidia and AMD hardware this year alone just to stay competitive in the AI space.

More from TechRadar Pro
- Could Tesla be about to make its own silicon?
- Nvidia is powering a mega Tesla supercomputer with 10,000 H100 GPUs
- Adopting generative AI to drive softwarization of automobiles
======================================================================
Link to news story:
https://www.techradar.com/pro/this-is-what-1-billion-worth-of-ai-gpus-look-like-elon-musk-publishes-video-tour-of-cortex-its-ai-training-supercluster-that-s-powered-by-nvidia-s-now-obsolete-h100
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)