Micron Technology has introduced a 32GB NVDIMM-N. Perhaps the most important thing about this device is not so much its high density as the fact that it runs at higher bus speeds than competing NVDIMMs: 2,933 megatransfers per second (MT/s), a speed that Micron representatives tell us is required to support Intel's Skylake processors.
Up to this point NVDIMM-Ns have been limited to 2,400 MT/s, which is fast enough for Broadwell but misses the mark for Skylake. Design is tricky even at this slower speed, requiring a number of expensive high-speed multiplexers in the DRAM's critical speed path.
“Multiplexers?” Yes, NVDIMMs use them, even though no other kind of DIMM does. The Memory Guy can explain why, having just finished a report covering the NVDIMM market and technology.
Here’s a little refresher for those who either don’t remember or never knew that the NVDIMM-N requires multiplexers. The NVDIMM-N looks to the system like a standard RDIMM, operating at DRAM speeds through a standard DDR4 interface. If the power fails, a NAND flash chip grabs a copy of the DRAM’s contents while the rest of the system fades away. This takes some extra hardware that’s roughly illustrated in the image below.
An external supercapacitor provides the energy that allows the NVDIMM to perform its backup sequence while the rest of the system is shutting down. A controller has the job of moving the DRAM’s data into the NAND flash after a power failure, and of restoring the data from the NAND back into the DRAM once power is regained.
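For readers who think better in code than in block diagrams, the backup-and-restore sequence can be sketched, purely illustratively, in Python. The class and method names here are my own invention, not Micron's, and a real controller of course moves data in hardware, not software:

```python
class NvdimmController:
    """Toy model of an NVDIMM-N controller's backup/restore duties."""

    def __init__(self, dram_size_bytes):
        self.dram = bytearray(dram_size_bytes)  # volatile working memory
        self.nand = bytes(dram_size_bytes)      # persistent NAND image

    def on_power_fail(self):
        # Running on supercapacitor energy, copy the entire DRAM
        # image into NAND flash before the charge runs out.
        self.nand = bytes(self.dram)

    def on_power_restore(self):
        # When power returns, write the saved image back into the
        # DRAM before the host resumes using the module.
        self.dram = bytearray(self.nand)
```

The point of the sketch is simply that the controller, not the host, owns the DRAM during these two operations, which is exactly why the bus-isolation problem described next arises.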
But the DRAM is connected to the server’s memory bus, and that connection must be removed before the NAND and controller chips can talk to the DRAM. Micron calls this “Bus Isolation” and it’s the multiplexers’ job. The multiplexer is actually a number of chips that will connect the DRAM either to the memory bus or to the NAND flash chip, but never to both. It’s a lot like a switch track on a train (thus the first image on this blog post), and it has to be really fast, because any delay in the multiplexer will slow down the DRAM. The multiplexer is right in the middle of the server’s most critical speed path.
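The switch-track analogy can also be put in code. This is a hedged, illustrative model of bus isolation, with names of my own choosing; the real multiplexer is of course a set of analog switches, not software:

```python
class BusIsolationMux:
    """Toy model of bus isolation: the DRAM is connected to exactly
    one master at a time, never to both."""

    HOST = "host"              # the server's memory bus
    CONTROLLER = "controller"  # the NVDIMM's backup controller

    def __init__(self):
        # Normal operation: the DRAM sits on the server's memory bus.
        self.owner = self.HOST

    def select(self, owner):
        # Throw the switch track one way or the other.
        assert owner in (self.HOST, self.CONTROLLER)
        self.owner = owner

    def dram_access(self, requester, transaction):
        # Only the currently selected master's transactions reach
        # the DRAM; the other side is isolated.
        if requester != self.owner:
            raise RuntimeError(f"{requester} is isolated from the DRAM")
        return transaction
```

In real hardware the cost of this arrangement is latency: every host access passes through the switch, which is why the multiplexer sits squarely in the critical speed path.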
Micron has taken a novel approach to this problem by putting the multiplexers inside the DRAM chips themselves. Since the multiplexers are on-chip, they add almost no delay to the DRAM’s access time, and this is how Micron can support 2,933 MT/s. Micron’s representatives told us that the addition of the multiplexers has a negligible impact, if any, on the die size. From the perspective of the logic requirements this is understandable (multiplexers are really tiny), but these multiplexers require the DRAM to have twice as many address and data bonding pads, and this could cause the die size to increase, driving costs higher. With this in mind, The Memory Guy called a chip layout expert, who informed me that the die is probably center-bonded, and that most DRAMs have so few I/Os that the number of bonding pads in this center strip could easily double without impacting the die size at all.
Micron told me that this special DRAM chip is the same one that the company will use for all of its standard DRAM products, including discrete DRAM chips and DIMMs, so its unit volume will be high, driving costs down through economies of scale. Custom DRAMs that are produced in lower volumes are never cost-effective.
One thing that I am forced to wonder, though, is whether the multiplexers were added to the chip solely in support of the NVDIMM-N. Our forecast for NVDIMM-Ns is not all that high, and the effort to add multiplexers onto a DRAM chip would have been costly enough that it probably wouldn’t have been undertaken only to support fast NVDIMMs. Instead I would guess that this DRAM may have been designed to support a higher-volume application that also requires a multiplexed DRAM, for example the Intel Optane DIMM and Micron’s QuantX counterpart.
I am sure that time will tell.