HBM is a stack of up to twelve DRAM chips that are interconnected using over one thousand TSVs – Through-Silicon Vias. These are metal-filled holes etched right through the DRAM die to allow signals to move vertically through the chip. It’s an alternative to more conventional wire bonding.
HBM sells for significantly more than standard DDR DRAM.
Because of their expense, HBMs are rarely used except in GPUs (Graphics Processing Units) like the one in this post's photo (click to enlarge). That photo shows an NVIDIA Ampere GPU with six silver HBMs surrounding it – three along the top and three along the bottom. That silver is the silicon on the back of the top DRAM chip in each HBM stack.
The expense mainly comes from the use of TSVs. It costs a lot to etch those holes through the wafer. Deep holes take a long time to etch, and, when it comes to semiconductor processing tools, time is money. If you spend a few million dollars on a tool, you will want it to process as many wafers per hour as possible, so you can defray its cost over the largest possible number of chips. Etching holes through a wafer is slow, and that undermines this effort.
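The amortization argument is easy to see with some back-of-the-envelope numbers. In this sketch the "few million dollars" tool price comes from the text, but the tool lifetime and the wafers-per-hour figures are purely illustrative assumptions:

```python
# Illustrative only: the tool price echoes the "few million dollars" in the
# text, but the lifetime and throughput figures below are assumptions.
TOOL_COST = 5_000_000            # assumed etch-tool price, USD
TOOL_LIFE_HOURS = 5 * 365 * 24   # assumed 5-year life, running around the clock

def tool_cost_per_wafer(wafers_per_hour: float) -> float:
    """Tool cost allocated to each wafer processed over the tool's life."""
    return TOOL_COST / (TOOL_LIFE_HOURS * wafers_per_hour)

# A fast process step versus a slow deep etch like a TSV:
print(f"30 wafers/hour: ${tool_cost_per_wafer(30):.2f} of tool cost per wafer")
print(f" 2 wafers/hour: ${tool_cost_per_wafer(2):.2f} of tool cost per wafer")
```

The allocated cost scales inversely with throughput, so a step that runs fifteen times slower puts fifteen times as much of the tool's price onto every wafer.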
Fortunately, HBMs use back-ground wafers that have been thinned so that the HBM stack doesn’t get too thick. Wafer thinning is inexpensive and is widely used, especially in NAND flash, where chips are stacked as many as sixteen high. It takes less time to etch a hole through a thin wafer than through a thick one, but it’s still a slow process.
The Memory Guy understands that it costs about $500 to add TSVs to a DRAM wafer. Given that a processed wafer only costs about three times that much to begin with, that’s a pretty costly adder.
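Those two figures from the text (a roughly $500 TSV adder on a processed wafer costing about three times that) are enough to size the cost penalty:

```python
# Figures from the text: ~$500 to add TSVs, on a processed wafer that
# costs roughly three times that (~$1,500) to begin with.
TSV_ADDER = 500
BASE_WAFER_COST = 3 * TSV_ADDER   # ~$1,500

total = BASE_WAFER_COST + TSV_ADDER
increase = TSV_ADDER / BASE_WAFER_COST
print(f"Wafer cost rises from ${BASE_WAFER_COST} to ${total}, "
      f"a {increase:.0%} increase")
```

In other words, the TSV step alone inflates the wafer cost by about a third before the stacking and assembly costs are even counted.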
Add to this the fact that HBMs are used in low volumes, and you have a recipe for a pretty pricey product. There's a little bit of the "Chicken and Egg" story in this: if HBMs were cheaper they would be used in higher volumes, but the cost of an HBM is high because it doesn't ship in high volumes.
So what’s all this have to do with Intel? Well, on July 27 Intel announced that the company will be using a new power distribution technology on its processors. This technology, called PowerVia, will use the standard metal interconnect layers on the top of the chip only for signals, and the power will all be brought to the chip from a new set of metal layers on the back of the wafer. TSVs will be used to bring the power from the new metal layers to the active circuit elements.
It’s described in more detail in an Intel YouTube video. (Heaven knows why, but Intel decided to sell advertising on its YouTube channel! You will have to put up with that if you watch the video.)
Intel’s use of TSVs will increase the number of wafers that are processed with TSVs, and that will help to lower TSV costs, in turn helping to reduce the cost of HBM. If HBM sells at a lower price, then it will be accepted into more applications and the cost will drop even further. It’s a snowball effect.
If you want to gain a better understanding of how memory market dynamics work, or learn how to tell which technologies will succeed and which will languish, Objective Analysis would like to help you. Please contact us to explore ways that we can work to benefit your company’s strategy.