
South Koreans Stalling Nvidia's Progress

Ba Jie Wed, Apr 03 2024 08:47 AM EST

Earlier this morning, Xiaofa sent me a major news update: Jensen Huang has made a move and acquired Xbox from Microsoft. The next generation of Xbox will feature dual graphics cards and AI enhancements, aiming to outperform the neighboring PS6.

Unfortunately, that was just a bad April Fools' joke! As I always say, old Huang, the AI arms dealer, has long treated gamers as cash cows. There's no way he'd pour a fortune into gaming again.

Remember the recent GTC conference? Old Huang looked far more energized unveiling the B200 than he ever did hyping up some lousy RTX 4090.

The queue of people lining up to buy Nvidia GPUs here stretches all the way to France!

But from the looks of it, old Huang might not be as relaxed as he seems. According to what I've heard from the tech world, he's at least got the Koreans breathing down his neck.

The B200 that old Huang just unveiled carries multiple stacks of the latest generation of HBM (High Bandwidth Memory): HBM3e. At the moment, essentially only the Koreans can mass-produce this technology.

According to TrendForce data, South Korea's SK Hynix and Samsung together controlled about 90% of global HBM production capacity in 2023. And practically none of the AI chips on the market over the past two years, the A100, A800, and the rest, could function without HBM.

So while NVIDIA's market cap soared past a trillion dollars with Huang at the helm, SK Hynix's stock quietly doubled last year and its market cap has now surpassed a hundred billion dollars. Even Samsung, which had made some wrong bets on its technology roadmap before, saw its stock price surge to nearly a two-year high.

Perhaps the significance of HBM still hasn't fully sunk in. Put it this way: South Korea designated the technology a national strategic asset earlier this year, aiming to preserve its lead through tax incentives for SK Hynix and Samsung, among other measures.

Even before the B200 was announced, old Huang had already paid deposits worth several billion RMB to SK Hynix and others, locking up their entire production capacity for the year. Keep in mind that SK Hynix only began mass production and delivery of HBM3e at the end of last month. That's an even heftier pre-order than the one Xiaomi's SU7 pulled in next door, isn't it?

Samsung couldn't resist such lucrative business either, and hastily released its own HBM3e samples in February, rushing them out to clients worldwide for evaluation.

So why is HBM so sought after? It all comes down to a long-standing bottleneck in memory technology. In the information age, whether for gaming or for work, the speed of a computer system depends heavily on how well the processor and the memory cooperate. In theory, when the two run at similar speeds, they make the perfect team.

Processors have seen exponential growth in performance, but the transfer speed between memory and the processor has lagged far behind. Over the last 20 years, peak computing power has increased roughly 90,000-fold, while the interconnect bandwidth between memory and hardware has grown only about 30-fold. To use an analogy: the processor is the head chef, while memory is the assistant doing the prep work. No matter how skilled the chef is, if the assistant can't keep up with the prep, dishes still come out slowly.
As a result, memory has become the bottleneck of computer performance in recent years, a phenomenon some call the "memory wall". For us gamers, this bottleneck is somewhat tolerable; plenty of people are still running AAA titles on a GTX 1060 6GB (the good old "1066") to this day. What has really slammed into the wall is the AI boom of the past couple of years. Because large AI models are built on massive data and massive compute, breaking through the memory wall is essential.
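The memory wall can be made concrete with a back-of-the-envelope "roofline" check. A minimal sketch, using illustrative numbers of my own rather than figures from the article:

```python
# Back-of-the-envelope "memory wall" check: a kernel is memory-bound
# when its arithmetic intensity (FLOPs per byte moved) falls below the
# hardware's ratio of peak compute to memory bandwidth.
# All hardware numbers below are illustrative assumptions, not exact specs.

def bound_by(peak_tflops: float, bandwidth_tb_s: float,
             flops_per_byte: float) -> str:
    """Return whether a kernel is limited by compute or by memory."""
    # FLOPs per byte needed to keep the compute units fully busy
    ridge_point = peak_tflops / bandwidth_tb_s
    return "compute-bound" if flops_per_byte >= ridge_point else "memory-bound"

# A GPU with ~1000 TFLOPS of peak compute but ~5 TB/s of memory bandwidth
# needs ~200 FLOPs per byte loaded before compute becomes the limit.
print(bound_by(1000, 5, 2))    # e.g. a vector add: memory-bound
print(bound_by(1000, 5, 300))  # e.g. a large matmul: compute-bound
```

The gap between those two ratios is exactly what the 90,000x-vs-30x comparison above describes: compute keeps raising the ridge point, so ever more workloads end up stuck waiting on memory.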

In other words, gaming is like a kid ordering a simple set meal, while next door AI casually orders a lavish banquet every time it trains. For the AI giants, the memory wall effectively caps how smart their models can get, and HBM is the wall-breaker.

Unlike the "flat layout" of traditional DDR memory, the wall-breaker HBM goes vertical, a "high-rise design" that attacks the problem from another dimension.

Building skyscrapers inside a chip relies on Through-Silicon Via (TSV) technology, an advanced packaging technique. In essence, multiple dies are stacked on top of one another, tiny holes are etched through them, and the holes are filled with a conductive material such as copper to wire the layers together.

Shrinking the data transmission distance and footprint has real physical payoffs: lower signal latency, lower chip power consumption, and higher bandwidth, among others. As process technology advances, TSVs can be made smaller and denser, allowing more die layers to be stacked, which further raises bandwidth, transfer speed, and maximum capacity. Samsung's latest HBM3e product, for instance, stacks a staggering 12 layers for a capacity of up to 36 GB.

Of course, HBM is nowhere near as simple as it sounds. From materials and design to packaging and cooling, the challenges are numerous, and most of these technical hurdles were first cleared by South Korea's SK Hynix. That's why each advance, HBM3 and then HBM3e, was pioneered and mass-produced by SK Hynix first. HBM3 already reached a 1024-bit data path running at 6.4 Gb/s per pin, delivering up to 819 GB/s of bandwidth per stack.

The latest HBM3e is actually the fifth-generation product in the HBM family. Its peak data rate has reached 1.18 terabytes per second, equivalent to processing more than 200 full-HD (FHD) movies in a single second.
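Those bandwidth figures follow directly from the bus width and the per-pin data rate. A quick sanity check (the 1024-bit interface and the 6.4 / 9.2 Gb/s per-pin rates are the published HBM3 and SK Hynix HBM3e figures; the rest is plain arithmetic):

```python
# HBM per-stack bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8.
# A 1024-bit interface is standard for HBM; the per-pin rates below are
# the quoted figures for HBM3 (6.4 Gb/s) and SK Hynix's HBM3e (9.2 Gb/s).

def stack_bandwidth_gb_s(bus_bits: int, pin_gbps: float) -> float:
    return bus_bits * pin_gbps / 8  # divide by 8: bits -> bytes

print(stack_bandwidth_gb_s(1024, 6.4))  # HBM3:  819.2 GB/s per stack
print(stack_bandwidth_gb_s(1024, 9.2))  # HBM3e: 1177.6 GB/s, i.e. ~1.18 TB/s
```

For comparison, a single channel of plain DDR5 is 64 bits wide, which is exactly why the "high-rise" 1024-bit HBM interface can deliver an order of magnitude more bandwidth at similar pin speeds.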

Although SK Hynix, Samsung, and Micron are all in the HBM race, SK Hynix has consistently led the pack in development timing, technological breakthroughs, and production ramp-up, holding about half the market share on its own.

Following behind is Samsung, also a South Korean company, which has formed two HBM teams this year in an effort to catch up with the progress.

As for Micron, the American company skipped HBM3 entirely and jumped straight to developing HBM3e. It talks a big game and has even become one of NVIDIA's suppliers, but the market has yet to validate those bold claims.

So, to sum it up: NVIDIA's AI arms-dealer business ultimately depends on these HBM suppliers. Everyone says the hard currency of the AI world is the H100 or the B200, but who would have thought that, in the end, it's South Korea holding the cards?

Author: Bajie  Editor: Jiang Jiang & Noodle  Cover: Huan Yan

Image and data sources:
Pacific Securities: AI Server Catalyzes HBM Demand Surge, Core Process Changes Bring Supply-Side Increment (Computing Power Series Report, Part 1)
Oriental Fortune Securities: Through-Glass Via (TGV) Effectively Supplements Through-Silicon Via (TSV), AI + Chiplet Trend Holds Great Potential
China Post Securities: Advanced Packaging Key Technology: TSV Research Framework
Micron: Micron Commences Volume Production of Industry-Leading HBM3E Solution to Accelerate the Growth of AI
SK hynix: SK hynix Develops World's Best Performing HBM3E, Provides Samples to Customer for Performance Evaluation
Samsung: Samsung Develops Industry-First 36GB HBM3E 12H DRAM
NVIDIA: NVIDIA DGX B200
Semiconductor Industry Chronicles: HBM3E Takes Off, the Charge of the Vanguard Has Already Sounded
Wall Street News: Goldman Sachs on HBM: Market Growth Tenfold in Four Years, Samsung Expected to Occupy Over Half of the Market Share, Micron May Catch Up Later