Nvidia Unleashes Game-Changing AI Chip, Extends Lead

Xiao Leng Tue, Mar 26 2024 06:27 AM EST

Ahead of its GTC 2024 conference, Nvidia teased that Jensen Huang would reveal groundbreaking advances in accelerated computing, generative AI, and robotics. Amid AI's meteoric rise, Nvidia's GTC 2024 announcements amounted to nothing short of a blockbuster event, and as promised, Huang delivered his bombshell this morning.

Nvidia Unveils AI Chip with 25x Cost-Energy Improvement

In his "Accelerated Computing Unveiled" keynote, Huang announced Blackwell, Nvidia's next-generation AI graphics processing unit (GPU) architecture, calling it "very, very powerful." Based on the Blackwell architecture, Nvidia will offer the B200 GPU and the GB200 superchip.

The Blackwell platform reportedly enables building and running real-time generative AI on trillion-parameter large language models (LLMs) at 25x lower cost and energy consumption than its predecessor. Nvidia claims Blackwell represents the most powerful family of AI chips it has ever created. (Image: Blackwell architecture GPU)

  • The Blackwell-architecture GPUs are formidable: the B200 packs 208 billion transistors, versus 80 billion for the H100/H200. The chips are fabricated on TSMC's 4NP process and support AI models of up to 10 trillion parameters.
  • They also deliver outstanding performance: a single Blackwell GPU provides 20 petaflops of AI compute, versus a maximum of about 4 petaflops for a single H100.
  • Blackwell GPUs improve energy efficiency as well. Training a 1.8-trillion-parameter GPT model takes 8,000 Hopper GPUs running for 90 days on 15 megawatts of power; with Blackwell, only 2,000 GPUs are needed for the same 90 days, consuming a quarter of the power.
  • Tech giants Microsoft Azure, AWS, and Google Cloud are among the first adopters of the Blackwell architecture.
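The training comparison above implies roughly a 4x energy reduction for the same 90-day run. A quick back-of-the-envelope check (the 15 MW figure and GPU counts come from the article; the rest is simple arithmetic):

```python
HOURS_PER_DAY = 24

def training_energy_mwh(power_mw: float, days: int) -> float:
    """Total energy for a sustained training run, in megawatt-hours."""
    return power_mw * days * HOURS_PER_DAY

# Figures quoted in the article for a 1.8T-parameter GPT training run.
hopper_mwh = training_energy_mwh(15.0, 90)         # 8,000 Hopper GPUs at 15 MW
blackwell_mwh = training_energy_mwh(15.0 / 4, 90)  # 2,000 Blackwell GPUs at 1/4 the power

print(hopper_mwh)                  # 32400.0
print(blackwell_mwh)               # 8100.0
print(hopper_mwh / blackwell_mwh)  # 4.0
```

Note this 4x figure covers power draw only; the separate 25x cost-energy claim presumably folds in the 4x reduction in GPU count and other per-chip efficiencies.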

NVIDIA DGX SuperPOD

  • NVIDIA also introduced a new AI supercomputer, the NVIDIA DGX SuperPOD, powered by NVIDIA GB200 Grace Blackwell Superchips.
  • The supercomputer is designed for trillion-parameter models and can sustain superscale generative AI training and inference workloads around the clock.
  • The new DGX SuperPOD uses a modular design built from NVIDIA DGX GB200 systems, delivering 11.5 exaflops of AI supercomputing at FP4 precision and 240 TB of fast memory; performance scales further by adding racks. (Image: compared with the NVIDIA H100 Tensor Core GPU, the GB200 superchip delivers up to 30x better performance for large language model inference workloads.)

NVIDIA GB200 also brings a significant boost in performance. Each DGX GB200 system reportedly packs 36 NVIDIA GB200 Superchips, for a total of 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs per system (each superchip pairs one Grace CPU with two Blackwell GPUs). The superchips are connected by fifth-generation NVIDIA NVLink to act as a single giant computing engine.
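The headline numbers hang together. Assuming a base SuperPOD of eight DGX GB200 systems (an assumption on my part; the article gives only per-system and total figures) and the 20-petaflops-per-GPU figure quoted earlier, the totals work out to roughly the stated 11.5 exaflops:

```python
# Per-system composition, per the article: 36 GB200 superchips,
# each pairing 1 Grace CPU with 2 Blackwell GPUs.
SUPERCHIPS_PER_SYSTEM = 36
CPUS_PER_SUPERCHIP = 1
GPUS_PER_SUPERCHIP = 2

SYSTEMS_PER_SUPERPOD = 8  # assumption; not stated in the article
PFLOPS_PER_GPU = 20       # FP4 AI performance per Blackwell GPU (article figure)

cpus_per_system = SUPERCHIPS_PER_SYSTEM * CPUS_PER_SUPERCHIP  # 36 Grace CPUs
gpus_per_system = SUPERCHIPS_PER_SYSTEM * GPUS_PER_SUPERCHIP  # 72 Blackwell GPUs

total_gpus = SYSTEMS_PER_SUPERPOD * gpus_per_system           # 576 GPUs
total_exaflops = total_gpus * PFLOPS_PER_GPU / 1000           # petaflops -> exaflops

print(gpus_per_system, total_gpus, total_exaflops)  # 72 576 11.52
```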

NVIDIA DGX SuperPODs powered by DGX GB200 and DGX B200 systems will be available later this year. “NVIDIA DGX AI supercomputers are the factories for advancing the AI revolution,” said Jensen Huang. “The next generation of DGX SuperPODs combine all of NVIDIA’s latest advances in accelerated computing, networking and software to help every enterprise, industry and nation perfect and deploy their own AI.”

NVIDIA Launches Suite of Microservices

During the keynote, Huang also announced the launch of AI microservices for building and deploying custom applications on their platforms. “The future of software development may be about putting together a bunch of NIMs (NVIDIA inference microservices) to get something trained, deployed,” said Huang, as NVIDIA positions itself as a “foundry” for AI software.

The catalog of cloud-native microservices is built on NVIDIA's CUDA platform and includes NVIDIA NIM (NVIDIA Inference Microservices), optimized for inference with more than 20 of the most popular AI models from NVIDIA and its ecosystem of partners. NIM delivers pre-built containers based on NVIDIA's inference software, including the Triton Inference Server and TensorRT-LLM, enabling developers to cut deployment times from weeks to minutes.
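Once a NIM container is running, it typically exposes an OpenAI-compatible HTTP API, so calling a deployed model is a matter of posting a standard chat-completions request. A minimal sketch (the endpoint URL and model name below are placeholders of my own, not values from the article):

```python
import json

def build_nim_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, str]:
    """Build the URL and JSON body for an OpenAI-style chat-completions call
    against a deployed NIM container (hypothetical endpoint and model names)."""
    url = f"{base_url}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return url, json.dumps(body)

url, body = build_nim_chat_request(
    "http://localhost:8000",  # placeholder: wherever the container is serving
    "example/llm-model",      # placeholder model name
    "Summarize NVIDIA's GTC 2024 announcements in one sentence.",
)
print(url)
# To actually send it, POST the body with Content-Type: application/json
# (omitted here; no live server is assumed).
```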

Users can now leverage NVIDIA Accelerated Software Development Kits, libraries, and tools as NVIDIA CUDA-X microservices for retrieval-augmented generation (RAG), guardrails, data processing, HPC, and more. The CUDA-X microservices provide end-to-end building blocks for data preparation, customization, and training, enabling industries to accelerate the development of production-grade AI.
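Retrieval-augmented generation, which several of these microservices target, follows a simple pattern: embed documents, retrieve those closest to the query, and prepend them to the model prompt as context. A generic, library-free sketch of the retrieval step (this is not NVIDIA's API, and the toy hand-made embeddings are illustrative only):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_emb: list[float], docs: list[dict], k: int = 2) -> list[dict]:
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_emb, d["emb"]), reverse=True)
    return ranked[:k]

# Toy corpus with hand-made 3-dim "embeddings"; a real RAG pipeline would
# use an embedding model and a vector database instead.
docs = [
    {"text": "Blackwell GPUs target trillion-parameter LLMs.", "emb": [0.9, 0.1, 0.0]},
    {"text": "DGX SuperPOD is a modular AI supercomputer.",    "emb": [0.2, 0.8, 0.1]},
    {"text": "DRIVE Thor is a vehicle computing platform.",    "emb": [0.1, 0.2, 0.9]},
]
query_emb = [0.85, 0.2, 0.05]  # pretend embedding of "tell me about Blackwell"

context = retrieve(query_emb, docs, k=1)
prompt = "Context: " + context[0]["text"] + "\nQuestion: What is Blackwell for?"
print(prompt)
```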

NVIDIA also announced the availability of 20+ healthcare NVIDIA Medical Imaging and CUDA-X microservices. "These curated microservices add another layer to NVIDIA's full-stack computing platform, bridging the AI ecosystem of model creators, platform providers, and enterprises so they can run customized AI models optimized for the NVIDIA CUDA installed base—billions of GPUs in the cloud, data center, workstations, and PCs," said Jensen Huang.

The NVIDIA ecosystem also includes data, infrastructure, and compute platform providers that are using NVIDIA microservices to bring generative AI to enterprises, along with leading application providers.

Top data platform providers, including Box, Cloudera, Cohesity, DataStax, Dropbox, and NetApp, are using NVIDIA microservices to help customers optimize RAG pipelines and integrate proprietary data into generative AI applications.

Universal Foundation Model for Humanoid Robots Unveiled

The topic of humanoid robotics was also a highlight of Huang's keynote. "We can expect humanoids to play a large role in our world. The way that we set up workstations, manufacturing, and logistics was designed for humans. And so, these robots can be deployed much more effectively," he said.

Huang unveiled Project GR00T, a Universal Foundation Model for Humanoid Robots, and announced the Jetson Thor, a new humanoid edge computer powered by the NVIDIA Thor system-on-a-chip (SoC), and major updates to the NVIDIA Isaac robotics platform during his keynote.

NVIDIA's Latest AI and Robotics Announcements

NVIDIA's new Isaac tools, including Isaac Lab, are designed to create foundational models for robots in any environment. Isaac Lab is a GPU-accelerated, lightweight application optimized for running thousands of parallel simulations for robotics training. NVIDIA also introduced OSMO, an orchestration service that coordinates data generation, model training, and hardware-in-the-loop workflows across distributed environments.

NVIDIA's latest Jetson Thor platform can perform complex tasks, interact naturally with humans and machines, and has a modular architecture optimized for performance, power efficiency, and size.

NVIDIA is also developing an AI platform for humanoid robots in partnership with 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary.AI, Unitree Robotics, and Xiaopeng Robotics.

NVIDIA DRIVE Thor Gains Transportation Adopters

NVIDIA announced that leading companies in transportation are adopting the NVIDIA DRIVE Thor centralized vehicle computing platform, including electric vehicle (EV) makers, truck makers, robotaxi and robo-delivery companies, and autonomous bus manufacturers.

DRIVE Thor integrates cockpit functionality with safe and secure, highly automated and autonomous driving capabilities on a single centralized platform. The next-generation AV platform features the NVIDIA Blackwell architecture, designed specifically for transformer, large language model (LLM), and generative AI workloads.

BYD, GAC Aion, XPeng, Li Auto, and Zeekr have announced that they will build their next-generation vehicles on DRIVE Thor. Plus, Waabi, WeRide, and Nuro will use DRIVE Thor for innovation and validation. DRIVE Thor is expected to be in production as early as next year.

NVIDIA Introduces High-Performance Networking Switches

NVIDIA also announced the X800 series, a new family of networking switches purpose-built for massively scaled AI. The NVIDIA Quantum-X800 InfiniBand network and the NVIDIA Spectrum-X800 Ethernet network are the world's first 800Gb/s end-to-end throughput networking platforms, taking network performance for compute and AI workloads to new heights.

Taken together, these announcements show NVIDIA's rapid iteration in the computing power domain, driving the advancement of AI from the hardware side. Beyond raw compute, GTC 2024 also brought many surprises at the application and ecosystem layers.

Institutional analysts believe the unexpectedly popular 2024 NVIDIA GTC conference signals that AI commercialization may accelerate, while computing power infrastructure remains the foundation for the continued rollout of AI applications. Companies in NVIDIA's supply chain, along with other firms tied to AI computing power and applications, stand to gain new development opportunities.

The future is here: in 2024 NVIDIA launched Blackwell, its most powerful AI processor yet, and a new round of AI competition is about to begin.
