
AI demand is strong, with NVIDIA just a step away from Apple.

Mon, May 27 2024 08:01 AM EST
Nvidia's market value approaches $2.6 trillion.

By Zhou Yongliang, Edited by Zheng Xuan

Nvidia's first-quarter results far exceeded market expectations, proving that the AI wave is still strong.

According to the financial report, Nvidia's total revenue for the first quarter of fiscal year 2025 (calendar Q1 2024) was $26 billion, up 262% year over year; data center revenue was $22.563 billion, up 427% year over year. Both figures set record highs for yet another quarter and far surpassed Wall Street's expectations.

Nvidia CEO Jensen Huang stated in the financial report that "the next industrial revolution has already begun." He mentioned that countries and numerous companies are collaborating with Nvidia to transform traditional data centers into "AI factories," producing a new product called "artificial intelligence." Huang believes that AI will bring significant productivity and revenue growth to various industries, as well as significant improvements in cost and energy efficiency.

This positive news temporarily eased concerns that the past year's frenzy of data center equipment spending would soon slow. After the report was released, Nvidia's stock rose 9.32% to above $1,000, lifting its market value to $2.55 trillion, only $320 billion behind Apple ($2.87 trillion). Nvidia's stock has risen 109.6% so far this year.

01

Data Centers Still Going Strong

Nvidia's business is divided into four main segments: data centers, gaming, professional visualization, and automotive.

Among them, the data center business is the most closely watched segment and the core driver of Nvidia's growth. In the fourth quarter of fiscal year 2024, data center revenue reached $18.4 billion, more than five times the figure from the same period a year earlier and a record for a single quarter.

NVIDIA key financial indicators for Q1 FY2025 | Image Source: Financial Report Screenshot

The latest figures show the data center business still going strong. In the first quarter of FY2025, NVIDIA's total revenue was $26 billion, of which the data center business contributed $22.6 billion, up 427% year over year and 23% sequentially. Data center revenue hit a record high, lifting its share of total revenue to 86.9%.

NVIDIA's CFO, Colette Kress, attributed the growth of the data center business to increased shipments of Hopper architecture GPUs (such as the H100). Compared to last year, compute revenue increased by over 5 times, and networking revenue increased by over 3 times.

In this financial report, NVIDIA disclosed a detailed revenue breakdown of the data center business for the first time. Compute revenue reached $19.392 billion, up 478% year over year; networking revenue was $3.171 billion, up 242% year over year. Compute revenue comes mainly from the Hopper platform, while the growth in networking revenue is driven by the strong performance of end-to-end InfiniBand solutions.

It is worth noting that major cloud service providers such as Amazon, Meta, Microsoft, and Google currently account for approximately 40% of NVIDIA's data center revenue. In addition, many leading large language model (LLM) companies, such as OpenAI, Adept, Anthropic, Character AI, Cohere, Databricks, DeepMind, Meta, Mistral, and xAI, are building on NVIDIA AI in the cloud.

Of course, NVIDIA is pursuing diversification beyond these major clients. Jensen Huang pointed out in the financial report that artificial intelligence is expanding to governments, consumer internet companies, automotive manufacturers, healthcare customers, and other sectors, which could create multiple vertical markets worth billions of dollars outside of cloud service providers. By the end of the first quarter, NVIDIA had worked with more than 100 customers to build AI factories ranging in scale from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs.

Notably, the automotive and consumer internet sectors have performed particularly well. NVIDIA CFO Colette Kress revealed that Tesla purchased 35,000 NVIDIA H100 GPUs for AI training, used in Tesla's latest Full Self-Driving (FSD) V12 system. Kress said that the automotive industry will become the largest vertical market in NVIDIA's data center business this year, bringing opportunities for billions of dollars in revenue.

Tesla's Autopilot system | Image Source: Visual China

Another major highlight is Meta's release of its latest large language model, Llama 3. The model was trained on 24,000 Nvidia H100 GPUs and underpins Meta's AI assistant, Meta AI, which powers Facebook, Instagram, WhatsApp, and Messenger. Llama 3 not only enhances the AI capabilities of these platforms but has also sparked a wave of AI development across industries.

Notably, over the past year, inference for large models has accounted for about 40% of Nvidia's data center revenue, indicating that large models are already delivering real business and performance gains in many practical applications.

Previously, several cloud computing industry experts noted that last year Nvidia GPUs were purchased mainly to train large models; as large models are deployed widely in real-world scenarios, more of that computing power is now being used for inference.

Apart from the data center business, Nvidia's gaming business brought in $2.6 billion in first-quarter revenue, up 18% year over year. By comparison, the professional visualization (workstation) and automotive segments contributed relatively little, with first-quarter sales of $427 million and $329 million, respectively.

Thanks to the rapid growth of the data center business, Nvidia delivered an outstanding first quarter for fiscal year 2025, showing strong growth momentum and profitability: revenue of $26 billion, up 18% from the previous quarter and a staggering 262% year over year, significantly surpassing analysts' expectations of $24.65 billion.

Nvidia's GAAP net profit was $14.881 billion, up 628% year over year and 21% from the previous quarter; non-GAAP net profit reached $15.238 billion, up 462% year over year and 19% sequentially, with earnings per share of $6.12. Gross margin rose from 76.7% in the previous quarter to 78.9%. These figures show that Nvidia's growth and competitiveness in the data center field have strengthened markedly and that its financial position is very healthy.

02

Significant Decline in China Business

Nvidia not only beat revenue expectations in the current quarter but also gave guidance that points to continued growth. For the second quarter, Nvidia expects revenue of $28 billion with a GAAP gross margin of 74.8%, and it expects full-year gross margins to hold in the mid-70% range.

This outlook exceeds general market expectations and temporarily alleviates earlier concerns about insufficient demand for artificial intelligence. In their first-quarter reports, major tech companies including Microsoft, Google, Amazon, and Meta indicated that their capital spending on cloud computing this year will reach a staggering $177 billion, far exceeding last year's $119 billion, and is projected to climb to $195 billion in 2025. These substantial investments are expected to keep driving growth in Nvidia's data center revenue and profits.

At the 2024 GTC Summit, Jensen Huang unveiled the new Blackwell GPU chip.

Beyond the guidance, Nvidia's latest progress on the Blackwell series of chips has drawn significant attention. The transition from Grace Hopper to Blackwell has raised concerns in the market about demand shifting away from Hopper and H100 products. Amazon Web Services (AWS) has reportedly paused orders for the NVIDIA Grace Hopper solution while it awaits the launch of the more powerful Grace Blackwell superchip. AWS noted that it has not entirely halted orders for NVIDIA's cutting-edge chips but is adjusting them for specific projects, such as Project Ceiba, the supercomputer it is developing jointly with NVIDIA.

Jensen Huang said the Blackwell chip has entered production, with shipments expected to begin in the second quarter and accelerate in the third. Customers are expected to have Blackwell-based data centers deployed in the fourth quarter, bringing Nvidia significant revenue growth that could exceed outside expectations.

He emphasized that even as the market transitions toward the H200 and Blackwell, demand for Hopper and H100 products remains strong. Customers are eager to deploy new infrastructure quickly to improve efficiency and grow revenue, sustaining demand for the hardware used to train flagship AI models.

During the earnings call, Jensen Huang described Nvidia's ten-year technology roadmap for responding to intense competition from rival GPUs and custom ASICs. Nvidia is advancing the NVLink, InfiniBand, and Ethernet computing architectures, and after Blackwell it plans to release a new product every year. The company has also laid out a roadmap for Ethernet, with its new Spectrum-X networking technology about to reach the market through partners such as Dell, alongside continued updates to the InfiniBand computing architecture.

All three architectures run CUDA and Nvidia's entire software stack, giving users faster processing and more choices for cloud and data center deployments. Nvidia's innovations not only improve performance but also lower total cost of ownership (TCO), positioning the company to lead the next wave of the technology revolution.

Meanwhile, Nvidia's supply situation in the Chinese market has been closely watched. The Chinese market previously accounted for 20% to 25% of Nvidia's data center revenue, but since the latest U.S. chip export restrictions took effect last October, Nvidia's business in China has been significantly constrained. In fiscal year 2024, China's share of Nvidia's revenue fell to the single digits (around 5%).

Earlier this year, Nvidia began offering Chinese customers the H20, a special version of its AI chip designed to comply with U.S. export controls. Even so, Nvidia executives acknowledged a "significant" decline in China sales in the first quarter of this year. Jensen Huang expects competition in the Chinese market to intensify, as the technology restrictions spur stronger local rivals. He said Nvidia will continue to do everything it can to serve Chinese customers and the market, emphasizing "we will do our best."

Despite the record results, Nvidia also faces challenges. In AI chips, it competes for orders against rivals including Google, Microsoft, AMD, Intel, and Broadcom. Google has reportedly worked with Broadcom for years to produce its AI chips; Amazon announced new AI chips in November, and Microsoft said the same month that it plans to start producing custom AI chips.

On the other hand, as more general-purpose large models are trained, enterprises are shifting their focus to AI inference. For top internet and large-model companies, this year's challenge lies in deploying and monetizing large models; for other enterprises, the more pressing question is how to select suitable large models and integrate them into production or business processes to create value. Nvidia needs to ensure that its "moat" continues to provide a competitive advantage in the face of these challenges.