• Meta's new AI card is one step to reduce its reliance on Nvidia's GPUs

    From TechnologyDaily@1337:1/100 to All on Mon Apr 15 17:45:05 2024
    Meta's new AI card is one step to reduce its reliance on Nvidia's GPUs: despite spending billions on H100 and A100, Facebook's parent firm sees a clear path to an RTX-free future

    Date:
    Mon, 15 Apr 2024 17:38:16 +0000

    Description:
    Lid lifted on its next Meta Training and Inference Accelerator (MTIA) chip
    the company hopes will reduce its reliance on Nvidia GPUs.

    FULL STORY ======================================================================

    Meta recently unveiled details of the company's AI training infrastructure, revealing that it currently relies on almost 50,000 Nvidia H100 GPUs to train its open source Llama 3 LLM.

    Like a lot of major tech firms involved in AI, Meta wants to reduce its reliance on Nvidia's hardware, and it has taken another step in that direction.

    Meta already has its own AI inference accelerator, the Meta Training and
    Inference Accelerator (MTIA), which is tailored to the social media giant's in-house AI workloads, especially those improving experiences across its various products. The company has now shared insights into its second-generation MTIA, which significantly improves on its predecessor.

    This revamped version of MTIA, which can handle inference but not training, doubles the compute and memory bandwidth of the previous solution while
    maintaining the close tie-in with Meta's workloads. It is designed to efficiently serve the ranking and recommendation models that deliver suggestions to users. The new chip architecture aims to provide a balanced mix of compute power, memory bandwidth, and memory capacity to meet the unique needs of these models, and it enhances SRAM capability, enabling high performance even at reduced batch sizes.
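    The point about SRAM and small batch sizes can be illustrated with a rough arithmetic-intensity calculation. The sketch below uses entirely hypothetical layer dimensions (not MTIA figures) to show the trend: at small batch sizes a dense layer moves almost as many bytes as it computes FLOPs, so it is memory-bound unless weights can be held in fast on-chip SRAM instead of being streamed from DRAM.

```python
# Illustrative sketch with hypothetical numbers: why on-chip SRAM matters
# for small-batch inference. Arithmetic intensity (FLOPs per byte moved)
# grows with batch size, so small batches tend to be memory-bound.

def arithmetic_intensity(batch, d_in, d_out, bytes_per_elem=2):
    """FLOPs per byte for one dense layer y = x @ W with fp16 tensors."""
    flops = 2 * batch * d_in * d_out  # one multiply + one add per MAC
    # Off-chip traffic if nothing is cached: weights read once, plus
    # input and output activations for the whole batch.
    traffic = bytes_per_elem * (d_in * d_out + batch * (d_in + d_out))
    return flops / traffic

small = arithmetic_intensity(batch=4, d_in=1024, d_out=1024)
large = arithmetic_intensity(batch=256, d_in=1024, d_out=1024)
print(f"batch=4:   {small:.1f} FLOPs/byte")   # ~4 FLOPs/byte
print(f"batch=256: {large:.1f} FLOPs/byte")   # ~171 FLOPs/byte
```

    At batch 4 the layer does only a few FLOPs per byte of DRAM traffic, far below what any accelerator can sustain from external memory, which is why keeping weights in larger, faster SRAM pays off for recommendation-style workloads served at low batch sizes.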

    The latest accelerator consists of an 8x8 grid of processing elements (PEs), offering dense compute performance 3.5 times greater, and sparse compute performance reportedly seven times better, than MTIA v1. The
    advancement stems from optimizations in the new architecture around the pipelining of sparse compute, as well as how data is fed into the PEs. Key features include triple the local storage, double the on-chip SRAM
    with a 3.5x increase in its bandwidth, and double the LPDDR5 capacity.
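    The quoted multipliers can be put together in a quick back-of-the-envelope calculation; the figures come from the article, and the inference drawn from their ratio is an illustrative reading, not a benchmark.

```python
# Quick arithmetic on the figures quoted above (illustrative only).
pes = 8 * 8               # 8x8 grid of processing elements per chip
dense_speedup = 3.5       # dense compute vs MTIA v1
sparse_speedup = 7.0      # sparse compute vs MTIA v1

# The gap between the two ratios suggests roughly how much the
# sparse-pipelining optimizations add on top of the general gains.
extra_sparse_gain = sparse_speedup / dense_speedup

print(f"PEs per chip: {pes}")                          # 64
print(f"Extra gain on sparse compute: {extra_sparse_gain:.1f}x")  # 2.0x
```

    Read this way, sparse workloads see roughly a further 2x on top of the across-the-board 3.5x, consistent with the article's note that the sparse-compute pipeline was a specific target of the redesign.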

    Software stack

    Along with the hardware, Meta is also co-designing the software stack with the silicon to arrive at an optimal overall inference solution.
    The company says it has developed a robust, rack-based system that accommodates up to 72 accelerators, designed to clock the chip at 1.35GHz and run it at 90W.
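    Those rack figures imply a modest power envelope; the per-rack total below follows directly from the article's numbers (accelerator power only, ignoring hosts, fabric, and cooling).

```python
# Rack-level arithmetic from the article's figures: 72 accelerators
# per rack, each run at 90 W (accelerator power only).
accelerators_per_rack = 72
watts_per_chip = 90
rack_power_kw = accelerators_per_rack * watts_per_chip / 1000

print(f"Accelerator power per rack: {rack_power_kw:.2f} kW")  # 6.48 kW
```

    At under 7 kW of accelerator power per 72-chip rack, the design sits well below the per-GPU budgets of Nvidia's flagship parts, which fits the chip's focus on efficiently serving inference rather than training.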

    Among other developments, Meta says it has also upgraded the fabric between accelerators, increasing the bandwidth and system scalability significantly. The Triton-MTIA, a backend compiler built to generate high-performance code for MTIA hardware, further optimizes the software stack.

    The new MTIA won't have a massive impact on Meta's roadmap towards a future less reliant on Nvidia's GPUs, but it is another step in that direction.

    More from TechRadar Pro
    - Meta has done something that will get Nvidia and AMD very, very worried
    - Meta looking to use exotic, custom CPU in its data centers for AI
    - Meta sheds more light on how it is evolving Llama 3 training



    ======================================================================
    Link to news story: https://www.techradar.com/pro/metas-new-ai-card-is-one-step-to-reduce-its-reliance-on-nvidias-gpus-despite-spending-billions-on-h100-and-a100-facebooks-parent-firm-sees-a-clear-path-to-an-rtx-free-future


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)