The affected product is Samsung's HBM3 chip. HBM3 is the fourth generation of the High Bandwidth Memory standard and is currently the most widely used memory in artificial intelligence graphics processing units.
HBM3 offers higher bandwidth and lower latency than earlier generations. By stacking DRAM dies vertically and transferring data through short, high-density interconnect channels, a single stack can deliver bandwidth of several hundred gigabytes per second.
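The "several hundred gigabytes per second" figure follows directly from the interface parameters. As a rough sketch, assuming the baseline JEDEC HBM3 numbers (a 1024-bit interface running at up to 6.4 Gb/s per pin), the per-stack bandwidth works out as:

```python
# Back-of-envelope HBM3 per-stack bandwidth estimate.
# Assumed figures (baseline JEDEC HBM3): 1024-bit interface, 6.4 Gb/s per pin.
interface_width_bits = 1024
pin_rate_gbps = 6.4  # gigabits per second, per pin

bandwidth_gbits = interface_width_bits * pin_rate_gbps  # total gigabits/s
bandwidth_gbytes = bandwidth_gbits / 8                  # convert to gigabytes/s

print(f"Per-stack bandwidth: {bandwidth_gbytes:.1f} GB/s")  # 819.2 GB/s
```

Multiple stacks placed around a GPU multiply this figure, which is how accelerators reach multi-terabyte-per-second memory bandwidth.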
HBM3 thus enables faster data movement between memory and processors, reduces the power spent sending and receiving signals, and improves performance in workloads that demand high data throughput.
However, achieving all this is not easy. Both manufacturing the technology and fully exploiting it pose significant challenges, chief among them heat control. The stacked HBM structure concentrates heat, and co-packaging the DRAM with the GPU makes dissipation harder still. This forces manufacturers to trade off latency against cooling, and either choice drives up overall cost.