Intel recently announced a technology it calls PowerVia that could inadvertently help reduce the cost of HBM – high-bandwidth memory.
HBM is a stack of up to twelve DRAM chips that are interconnected using over one thousand TSVs – Through-Silicon Vias. These are metal-filled holes etched right through the DRAM die to allow signals to move vertically through the chip. It’s an alternative to more conventional wire bonding.
HBM sells for significantly more than standard DDR DRAM.
Because of their expense, HBMs are rarely used except in GPUs – Graphics Processing Units – like the one in this post's photo (click to enlarge). The photo shows an NVIDIA Ampere GPU with six silver HBMs surrounding it – three along the top and three along the bottom. That silver is the silicon on the back of the top DRAM chip in each HBM stack.
The expense mainly comes from the use of TSVs. It costs a lot to etch those holes through the wafer. Deep holes take a long time to etch, and, when it comes to semiconductor processing tools, time is money. If you spend a few million dollars on a tool, then you will want it to process as many wafers per hour as possible, so that you can defray its cost over the largest possible number of chips. Etching holes through a wafer is slow, and that undermines this effort.
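The "time is money" point can be sketched with some simple amortization arithmetic. All of the numbers below (tool price, throughput, lifetime) are invented for illustration – the article gives none of them – but they show how a slow etch step inflates per-wafer cost:

```python
# Hedged sketch: a tool's purchase price is amortized over every wafer it
# processes during its useful life, so slower steps cost more per wafer.
# All figures here are illustrative assumptions, not real industry data.
def depreciation_per_wafer(tool_cost, wafers_per_hour, lifetime_hours=5 * 365 * 24):
    """Tool depreciation allocated to each wafer processed over its lifetime."""
    return tool_cost / (wafers_per_hour * lifetime_hours)

# A hypothetical $3M tool, run at two different throughputs:
fast = depreciation_per_wafer(3_000_000, wafers_per_hour=60)  # quick process step
slow = depreciation_per_wafer(3_000_000, wafers_per_hour=5)   # slow deep-etch step

print(f"Fast step: ${fast:.2f}/wafer; slow step: ${slow:.2f}/wafer")
```

With these made-up numbers, cutting throughput from 60 to 5 wafers per hour makes the depreciation charged to each wafer twelve times higher, which is the mechanism the paragraph above describes.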
Fortunately, HBMs use back-ground wafers – wafers thinned by backgrinding – so that the HBM stack doesn't get too thick. Wafer thinning is inexpensive and widely used, especially in NAND flash, where chips are stacked as many as sixteen high. It takes less time to etch a hole through a thin wafer than through a thick one, but it's still a slow process.
The Memory Guy understands that it costs about $500 to add TSVs to a DRAM wafer. Given that a processed wafer only costs about three times that much to begin with, that’s a pretty costly adder.
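The two figures above imply how large the TSV adder is as a share of total wafer cost. A quick check of the arithmetic, using only the article's own approximations:

```python
# Working through the article's approximate figures.
tsv_adder = 500                    # ~$500 to add TSVs to a DRAM wafer
base_wafer_cost = 3 * tsv_adder    # a processed wafer costs ~3x that, i.e. ~$1,500

total = base_wafer_cost + tsv_adder
adder_share = tsv_adder / total

print(f"Wafer cost with TSVs: ${total}")           # $2000
print(f"TSV share of total:   {adder_share:.0%}")  # 25%
```

In other words, TSVs add roughly a third to the base wafer cost, ending up as about a quarter of the finished TSV wafer's cost – a pretty costly adder indeed.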
Add to this the fact that HBMs are used in low volumes, and you have a recipe for a pretty pricey product. There's a little bit of the "Chicken and Egg" story in this: If HBMs were cheaper they would be used in higher volumes, but the cost of an HBM is high because it doesn't ship in high volumes.
So what’s all this have to do with Intel? Well, on July 27 Intel announced that the company will be using a new power distribution technology on its processors. This technology, called PowerVia, will use the standard metal interconnect layers on the top of the chip only for signals, and the power will all be brought to the chip from a new set of metal layers on the back of the wafer. TSVs will be used to bring the power from the new metal layers to the active circuit elements.
It’s described in more detail in an Intel YouTube video. (Heaven knows why, but Intel decided to sell advertising on its YouTube channel! You will have to put up with that if you watch the video.)
Intel’s use of TSVs will increase the number of wafers that are processed with TSVs, and that will help to lower TSV costs, in turn helping to reduce the cost of HBM. If HBM sells at a lower price, then it will be accepted into more applications and the cost will drop even further. It’s a snowball effect.
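The snowball effect can be sketched with a classic learning curve, in which unit cost falls by a fixed fraction every time cumulative volume doubles. The 70% learning rate and starting cost below are illustrative assumptions, not figures from the article:

```python
import math

# Classic learning-curve model: cost of the Nth unit falls by a fixed
# fraction (here 30%) each time cumulative volume doubles.
# first_unit_cost and learning_rate are illustrative assumptions only.
def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.7):
    """Cost of the Nth unit under a 70% learning curve."""
    b = -math.log2(learning_rate)          # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

for volume in (1, 2, 4, 8):
    print(volume, round(unit_cost(volume), 1))   # 100.0, 70.0, 49.0, 34.3
```

Under this model, every doubling of cumulative TSV wafer volume – whether the wafers are HBM or PowerVia processors – cuts the cost of the next unit by 30%, which is exactly why Intel's added volume would help HBM even though the two products are different.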
If you want to gain a better understanding of how memory market dynamics work, or learn how to tell which technologies will succeed and which will languish, Objective Analysis would like to help you. Please contact us to explore ways that we can work to benefit your company’s strategy.
4 thoughts on “Could Intel’s PowerVia Lower HBM Costs?”
TSVs of various kinds are used in different products, and so is thinning. The sensors in your cellphone camera have two wafers face-bonded (not TSV) with extreme thinning for the optical wells. NAND packaging commonly has extreme thinning as well, though the chips are not bonded together in any way. Samsung has recently introduced higher-density, thinner forms of TSV for HBM3. None of these actually use the same equipment and process as Intel PowerVia or the equivalent TSMC process. Nor is it the same as TSMC's high-density via scheme used by AMD in the Milan processor; that scheme does include power, but it uses much finer vias than HBM, and far more of them (about 24,000 per cache chip, which is about one third the size of an HBM chip with around 11,000).
So the upshot is that there is general progress in etching deep holes (look up the Bosch process – very clever) and this is now routinely available. It is slow, but the machines are not that expensive. It is not nearly as slow for vias as when used for 3D NAND, where billions of holes are being etched, not just thousands, and the walls need to be about 89.8 degrees vertical, while about 85 degrees will work for vias.
The thinning is also becoming widespread, cheaper, and more controlled. It is true that HBM3 generation of vias should be more cost effective than today but the impetus is industry-wide on many different kinds of products and resulting in better tools now widely available. Intel is just part of the wave. The work they do will be much more complex due to chemistry, depth control, and alignment challenges which are far more stringent than for TSVs. A good discussion of the reasons for Intel, TSMC, and Samsung all to be planning backside power can be found here:
Done right, it will reduce the losses in power distribution, allow lower voltages to be used, and free up the top side for better wiring paths.
Thanks for a very thorough and well-informed comment.
I guess that, in the end, costs will come down as a function of the number of wafers processed, so with AMD, and Intel, and Samsung all providing backside power the learning should increase and tool cost should decrease. Even though, as you point out, the tools for backside power are different from those for TSVs, they should be sufficiently similar that TSV processing will benefit from backside power techniques, and vice versa.
You are a font of great information!
Agreed. TSV and backside power are convergent.