With the release of its Cascade Lake family of processors today (formally called the “2nd Generation Intel Xeon Scalable processor”) Intel disclosed more details about its Optane DIMM, which has been officially named the “Intel Optane DC Persistent Memory.” This DIMM’s architecture is surprisingly similar to an SSD, even to the point of its having error correction and encryption!
The Memory Guy doesn’t generally cover SSDs, but I do cover DIMMs, so this is one of those posts that I could have put into either of my blogs: The Memory Guy or The SSD Guy. I have decided to put it here with the hopes that it will be easier for members of the memory community to find.
The internal error correction, the encryption, and the fact that 3D XPoint Memory wears out and must use wear leveling all cause the Optane DIMM’s critical timing path to be slower than the critical path in a DRAM DIMM, rendering the Optane DIMM unsuitable for code execution. This, plus the fact that XPoint writes are slower than its reads, helps to explain why an Optane DIMM is never used as the only memory in a system: there is always DRAM alongside the Optane DIMM to provide faster access to a subset of the Optane DIMM’s data, which is cached within the DRAM.
Intel supports two usage modes for the Optane DIMM, called “Memory Mode” and “App Direct Mode,” to manage the two forms of memory, and they are explained in a series of posts in The SSD Guy blog.
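In App Direct Mode the persistent memory is exposed to software directly, typically as a DAX-aware filesystem that an application maps into its own address space. As a rough illustration, here is a minimal sketch using plain POSIX calls; the mount point and file name are hypothetical, and production code would normally use Intel’s PMDK libraries rather than raw mmap():

```c
/* Minimal App Direct-style sketch: map a file on a DAX-mounted
 * persistent-memory filesystem and write to it in place.
 * The path /mnt/pmem0/example is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;

    int fd = open("/mnt/pmem0/example", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

    /* Map the persistent region straight into the address space. */
    char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary loads and stores now reach the persistent region. */
    strcpy(pmem, "this string survives a power cycle");

    /* Flush the writes so they are durable on the media. */
    msync(pmem, len, MS_SYNC);

    munmap(pmem, len);
    close(fd);
    return 0;
}
```

In Memory Mode, by contrast, none of this is visible to software: the DRAM acts as a cache in front of the Optane capacity, which simply appears to the operating system as a large pool of ordinary (and volatile) memory.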
Here’s what’s on the module:
- 3D XPoint Memory (of course!) This comes in the form of 11 chips: 8 for data and the other 3 for ECC and “Spare” (most likely overprovisioning). Intel is calling this “Optane Media.” It appears that the company has stopped using the “3D XPoint Memory” name.
- Controller. The controller on the Optane DIMM performs all of the housekeeping on the module, similar to the functions of an SSD controller, including:
- Wear leveling
- Managing data into and out of the 3D XPoint Memory
- DDR interface protocol
- Error correction
- Data encryption
- Power-fail backup
- Power & thermal control
- AIT DRAM (Address Indirection Table). Wear leveling requires addresses to be translated through a table. In an SSD this is done either with an SRAM internal to the controller or with an external DRAM; a minimal sketch of such a table appears after this list.
- Backup Capacitors. The contents of the AIT DRAM above need to be saved into some of the spare 3D XPoint memory so that data can be found after a power failure. The same is true in an SSD. These capacitors keep the Optane DIMM running while it performs its backup routine. Since 3D XPoint writes are very fast compared to NAND flash writes and consume significantly less power, these capacitors are probably smaller than those required on a NAND SSD.
- Data buffers. The Optane DIMM has buffers on the data lines but not the address lines.
- PMIC (Power Management IC). This chip manages the power for the media and the controller. Intel says that it “generates the rails,” which implies that there may be special voltages.
- SPI flash (Serial Peripheral Interface). This stores the firmware for the controller.
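To make the AIT’s role more concrete, here is a toy sketch of a logical-to-physical indirection table: purely my own simplification, not Intel’s actual design. The host always addresses logical blocks, the controller looks up the physical media block behind each one, and a worn block can be retired by pointing its logical entry at a spare. A table like this is what lives in the AIT DRAM and must be flushed to spare 3D XPoint media when power fails.

```c
/* Toy address indirection table (AIT) for wear leveling.
 * This is an illustration only, not Intel's design: block count,
 * wear limit, and remapping policy are all made up. */
#include <stdio.h>
#include <stdint.h>

#define NUM_BLOCKS 8u

static uint32_t ait[NUM_BLOCKS];          /* logical -> physical map      */
static uint32_t write_count[NUM_BLOCKS];  /* writes seen by each physical */

static void ait_init(void)
{
    for (uint32_t i = 0; i < NUM_BLOCKS; i++)
        ait[i] = i;                        /* identity mapping at start   */
}

/* Every host access goes through the table, never straight to media. */
static uint32_t ait_translate(uint32_t logical)
{
    return ait[logical % NUM_BLOCKS];
}

/* On a write, count the wear; once a physical block hits the limit,
 * remap its logical address to a fresh spare block. */
static void ait_write(uint32_t logical, uint32_t wear_limit, uint32_t spare)
{
    uint32_t phys = ait_translate(logical);
    if (++write_count[phys] >= wear_limit) {
        ait[logical % NUM_BLOCKS] = spare;   /* retire the worn block */
        printf("logical %u remapped: physical %u -> %u\n",
               (unsigned)logical, (unsigned)phys, (unsigned)spare);
    }
}

int main(void)
{
    ait_init();
    for (int i = 0; i < 5; i++)
        ait_write(3, 4, 7);                  /* remaps after 4 writes */
    printf("logical 3 now maps to physical %u\n",
           (unsigned)ait_translate(3));
    return 0;
}
```

A real controller also has to move the data when it remaps a block and to spread writes across all of the media, but even this toy version shows why the lookup sits on the critical timing path: every access pays for the translation.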
There are a few differences between the Optane DIMM and other devices we might compare it to. For example, the Optane DIMM has an SPI NOR flash chip. This is a low-pin-count chip that communicates serially with the controller. Why would Optane have this while a NAND SSD might not?
In most SSDs the controller’s firmware is stored in the NAND flash. The Optane DIMM stores its firmware in this NOR flash chip instead of the 3D XPoint Memory. This could have something to do with the soldering process, which has always been tricky for the phase change memory (PCM) technology that is the basis for the 3D XPoint Memory. PCM can lose its programming if temperatures stay too high for too long. While this is also true for NOR flash, it takes more extreme conditions for NOR to lose its bits.
Another difference between the Optane DIMM and an NVDIMM-N is the number of high-speed dedicated buffer chips each uses. In an NVDIMM-N both data and address are buffered to isolate the on-module DRAM and the nonvolatile memory from the chaos that happens on the processor’s memory channel during a power failure. These buffers are extremely fast so that they won’t add significant latency to the critical timing path. The Optane DIMM only has buffers on the data pins, and Intel tells us that they are required for “High bit rate signal integrity.” The address signals on the Optane DIMM don’t go through high-speed buffers; since they must pass through the controller anyway for wear leveling, ECC, and encryption, the controller chip does the buffering. The controller is naturally much slower than dedicated buffers would be, but the signals have to pass through it regardless, so there’s no reason to add these expensive buffers.
Architecturally, though, this module is very similar to an SSD. I have not been briefed yet on Intel’s new “DDR-T” communication protocol for the Optane DIMM, but once I have learned how it works I plan to share that in a follow-up blog post.
Do we know the sizes of the DQ buffers? The look of the chip suggests it might be on the order of a few MB.
Vikram,
I believe that these buffers are just used to strengthen the current on the DQ Lines. Something like IDT’s 4DB0124K:
https://www.idt.com/products/memory-logic/memory-interface-products/ddr4-solutions/4db0124k-ddr4-data-buffer
If you visit this page and then click on the link on the upper right that says “DOWNLOAD OVERVIEW” you get a PDF that shows a picture of a DIMM with four of these buffers in a similar position to the ones on the Optane DIMM.
It’s unfortunate that the term “Buffer” can mean either a current amplifier like this, or some storage. That’s confusing!
Thanks for the comment,
Jim
I think that’s just an FPGA as the controller, and that’s what the NOR is for. Just look at the substrate below the “controller.” A real controller ASIC would not need that kind of I/O with so much routing area; it would sit directly on the DIMM PCB. Intel doesn’t want to invest too much in these early versions of Optane.
And yes, this is just a faster SSD. That’s just too much marketing.