At its Vision 2024 event tonight, Intel unveiled its next-generation Gaudi 3 AI accelerator, slated for widespread release through OEM systems in the third quarter of 2024.
According to Intel, the new Gaudi 3 delivers a 170% improvement in training performance and a 50% boost in inference compared with NVIDIA's H100, along with 40% better power efficiency, all at a significantly lower cost. (Note that the H100 is NVIDIA's previous-generation product; Intel did not compare Gaudi 3 against the newer Blackwell series.)

Intel also introduced a new brand name for its data center CPU portfolio: the chips formerly codenamed Granite Rapids and Sierra Forest will now be marketed as the "Xeon 6" series. They are scheduled to launch later this year and will support the standardized MXFP4 data format for improved performance.
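For context on what MXFP4 means in practice: under the OCP Microscaling (MX) formats, MXFP4 stores blocks of 4-bit floating-point (E2M1) elements that share a single power-of-two scale per block. The sketch below is a rough illustration of that quantization idea in plain Python; it is not Intel's implementation, and the helper name and example values are made up for illustration.

```python
import math

# Representable magnitudes of an FP4 E2M1 element (per the OCP MX spec).
FP4_E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_mxfp4_block(block):
    """Illustrative MXFP4 quantizer for one block of floats.

    Returns (shared_scale, fp4_values): a power-of-two scale shared by
    the whole block, plus each element rounded to the nearest FP4
    magnitude. The MX spec uses blocks of 32 elements with an 8-bit
    exponent-only scale; this sketch just shows the rounding scheme.
    """
    amax = max(abs(v) for v in block)
    if amax == 0.0:
        return 1.0, [0.0] * len(block)
    # Pick the scale so the largest element lands near FP4's max (6.0,
    # whose exponent bucket is 2).
    shared_exp = math.floor(math.log2(amax)) - 2
    scale = 2.0 ** shared_exp
    fp4 = [math.copysign(min(FP4_E2M1, key=lambda m: abs(abs(v) / scale - m)), v)
           for v in block]
    return scale, fp4

# Example: quantize a small block, then dequantize to see the error.
scale, vals = quantize_mxfp4_block([0.9, -0.1, 2.4, 0.0])
approx = [v * scale for v in vals]
```

The appeal of the format is that only one scale per block must be stored at higher precision, while every element fits in 4 bits.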
Intel has also announced that it is developing an AI NIC ASIC and AI NIC chiplets for Ethernet networking. The chiplets will be used in future XPU and Gaudi 3 processors and will be offered to external customers through Intel's foundry business, though Intel has not disclosed further details about these networking products.

Compared with the previous generation, Intel says Gaudi 3 doubles FP8 performance, quadruples BF16 performance, doubles network bandwidth, and provides 1.5x the memory bandwidth. Gaudi 3 comes in two form factors; the OAM (OCP Accelerator Module) version, the HL-325L, follows the common design used in systems built around high-performance GPUs.
This accelerator integrates 128 GB of HBM2e with 3.7 TB/s of memory bandwidth, and it carries 24 × 200 Gbps Ethernet RDMA NICs.
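A quick back-of-envelope calculation from those figures (assuming the 24 NICs aggregate linearly, which is the usual way such totals are quoted):

```python
# Per-accelerator I/O figures quoted for Gaudi 3's HL-325L OAM.
hbm_capacity_gb = 128       # HBM2e capacity, GB
hbm_bandwidth_tbs = 3.7     # memory bandwidth, TB/s
nic_count = 24
nic_speed_gbps = 200        # per Ethernet RDMA NIC, Gbps

# Aggregate network bandwidth, assuming all NICs run at line rate.
total_network_gbps = nic_count * nic_speed_gbps   # 4800 Gbps
total_network_gbs = total_network_gbps / 8        # 600 GB/s

print(f"Aggregate network bandwidth: {total_network_gbps} Gbps "
      f"(~{total_network_gbs:.0f} GB/s)")
```

So each accelerator exposes roughly 600 GB/s of Ethernet bandwidth, a fraction of its 3.7 TB/s of local HBM bandwidth.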
The HL-325L OAM module has a 900 W TDP (a liquid-cooled variant apparently runs higher) and a rated FP8 performance of 1,835 TFLOPS. OAMs are deployed eight per server node, and deployments can scale up to 1,024 nodes.
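Multiplying out the quoted peak numbers gives a sense of scale (this assumes peak FP8 figures simply add across accelerators, which real workloads never achieve):

```python
# Quoted Gaudi 3 deployment figures.
fp8_tflops_per_oam = 1835   # peak FP8, TFLOPS
oams_per_node = 8
max_nodes = 1024

# Peak per node and at the maximum quoted cluster size.
node_pflops = fp8_tflops_per_oam * oams_per_node / 1000          # 14.68 PFLOPS
cluster_accelerators = oams_per_node * max_nodes                 # 8192 OAMs
cluster_exaflops = fp8_tflops_per_oam * cluster_accelerators / 1e6  # 15.03232 EFLOPS

print(f"{cluster_accelerators} accelerators, "
      f"~{cluster_exaflops:.1f} EFLOPS peak FP8")
```

At the full 1,024-node scale, that is 8,192 accelerators and roughly 15 EFLOPS of peak FP8 throughput on paper.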