After a big 3D XPoint launch one year ago, almost anyone would have expected Intel to have a lot of exciting news to share about the technology at last week’s Intel Developer Forum (IDF). Those who were watching for that, though, were in for a disappointment.
For readers who don’t remember, Intel and its partner, chipmaker Micron Technology, announced a new memory layer in July 2015 that would enable in-memory databases to expand well beyond the constraints posed by standard DRAM. The pair also touted an additional benefit: the technology is nonvolatile, or persistent, so data would not be lost if the power failed. This technology promised to open new horizons in the world of computing.
Intel devoted a lot of effort to promotion and education during the following month’s IDF, and even demonstrated a prototype 3D XPoint SSD that performed seven to eight times as fast as Intel’s highest-performance existing NAND flash SSD – the DC S3700. Although a DIMM form factor was disclosed, no prototypes were on hand. Both were given the brand name “Optane”.
Moving forward one year to the 2016 IDF (the source of this post’s odd graphic), The Memory Guy was shown two 3D XPoint SSDs and still no DIMMs. Intel personnel demonstrating the SSDs were only allowed to give the vaguest details about their performance and availability, and nothing at all about the anticipated price.
During the show Intel was willing to share a wider variety of benchmarks on those 3D XPoint SSDs, but not much more. The results were impressive: the Optane SSD’s latency was about seven microseconds, compared to 85 microseconds for the flash SSD. Compare that to the 10 milliseconds typical of networked storage access and it’s over one thousand times as fast! Technicians were even able to use some software tuning to drop the Optane SSD’s latency further, to 1.47 microseconds.
Various benchmarks showed Optane improvements over SSD performance that ranged from three times to as much as 15 times.
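For readers who want to check the arithmetic behind those ratios, here is a quick back-of-the-envelope sketch using the round numbers quoted above, so the results are only approximate:

```python
# Quick check of the latency ratios quoted above (approximate figures
# from the IDF demo, not formal spec numbers).
optane_latency_us  = 7.0       # Optane SSD latency, microseconds
flash_latency_us   = 85.0      # NAND flash SSD latency, microseconds
network_latency_us = 10_000.0  # typical networked storage access, 10 ms

print(f"Optane vs. flash SSD:       {flash_latency_us / optane_latency_us:.0f}x")
print(f"Optane vs. network storage: {network_latency_us / optane_latency_us:.0f}x")
# Roughly 12x and over 1,400x, consistent with "over one thousand times as fast."
```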
There’s good news and bad news relating to Intel’s 3D XPoint presentations at IDF. Let’s look at the good news first.
The Optane SSD proves beyond doubt that Intel’s new memory technology can provide a significant performance boost over today’s NAND flash SSDs. This alone should attract a lot of attention from the user community.
Now for the bad news: This new technology will not be a factor in the market if Intel and Micron can’t make it, and last week’s IDF certainly gave little reason for optimism.
The SSDs Intel used for the demonstrations had a capacity of 140 gigabytes, but Intel was careful to state that this was unlikely to be the capacity point of its commercial products, indicating that the shipping SSDs would probably have higher capacities. The 3D XPoint Memory chip that Intel and Micron announced in July 2015 has a capacity of 128 gigabits, or 16 gigabytes, so only nine 3D XPoint chips are needed to make a 140-gigabyte SSD.
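For anyone who wants to verify that chip count, here is the simple arithmetic. This is only a sketch: it ignores controller overhead, spare area, and ECC, which real SSDs would add on top.

```python
import math

# 128 gigabits per 3D XPoint die = 16 gigabytes per die
die_capacity_gb = 128 / 8      # 16 GB per die
ssd_capacity_gb = 140          # capacity of the demo SSDs

dies_per_ssd = math.ceil(ssd_capacity_gb / die_capacity_gb)
print(dies_per_ssd)            # 9 dies per SSD
print(dies_per_ssd * 2)        # 18 dies across the two demo units
```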
Since Intel was only able to provide two demo SSDs (which were shuttled from the exhibit hall to the meeting rooms for the technical session that promoted them), you could argue that fewer than 20 working 3D XPoint chips (two SSDs at nine chips apiece) have been manufactured to date.
This does not bode well for the product.
Why would Intel and Micron fall so far behind, especially since Micron projected in last year’s Summer Analyst Conference that its 3D XPoint gigabyte shipments by the end of 2017 would be within an order of magnitude of its DRAM gigabyte shipments? The simple answer is that 3D XPoint memory uses new non-silicon materials that are not well understood. Whenever some new material is added to a semiconductor manufacturing process all sorts of unforeseen issues can appear and cause significant delays.
Intel really needs 3D XPoint Memory to work. Without it, the performance of future computing platforms won’t scale with processor upgrades. In other words, when a higher-performance processor is plugged into a system, that system’s performance won’t improve because the rest of the system will bog the processor down. The new 3D XPoint Memory is the key to preventing this. Without it, Intel will be unable to migrate customers to increasingly powerful processors that sell for higher prices and reap higher margins for Intel.
This is a tough spot for both companies, and there are no indications of any pending breakthrough that will improve the situation. About all we can do is watch from the sidelines with the hopes that Intel and Micron will overcome their technical problems and get back on track.
I have asked around about this and haven’t gotten many meaningful answers. We already have Samsung’s 128GB DDR4 DIMMs. With DDR5 and TSV DRAM we could move to 512GB per DIMM. So when that happens, where does XPoint fit in?
Sure, DDR5 and TSV DRAM are still a few years away, but so is XPoint. And I don’t buy adding another tier/layer of storage in between SSD and DRAM when both have a clear roadmap of improvement.
So, Memory Guy, any thoughts on this?
P.S. This is the server scenario, which is what XPoint is aiming at initially; I don’t see XPoint getting to a consumer price point any time soon either.
Ed,
You’re a good critical thinker!
You’re right that high-capacity DRAM modules reduce the appeal of 3D XPoint. Since TSV DRAMs can make a server’s memory huge without loading the processor bus, they achieve one of 3D XPoint’s big advantages.
I disagree with you, though, about adding another layer/tier between DRAM and NAND flash SSDs. If 3D XPoint (or some other technology) can meet the very specific criteria this layer requires (faster than flash, cheaper than DRAM), then it will quite naturally create a new layer between DRAM and SSDs. You are right, however, about DRAM’s and SSDs’ roadmaps: the only way 3D XPoint will satisfy this need is if it remains cheaper than DRAM and faster than NAND flash, and that’s going to be a big challenge.
Another benefit programmers want from this layer is persistence. They really want this feature in DRAM, but not enough to pay a higher price for it.
I agree with your “PS”. It’s probably going to be a long time before PCs use 3D XPoint.
Thanks for the comment,
Jim
Regarding DRAM’s scaling future: Mutlu and others have documented severe scaling issues with DRAM, not the least of which is refresh. According to his figures, a 64Gb DRAM die would spend 46% of system time just refreshing in order to maintain memory integrity. And refresh is not the only problem; there is also the infamous row-hammer issue. Here’s the link in which Mutlu discusses all of this. Net net, while DRAM will be around for a long time, the era of DRAM scaling is rapidly coming to an end.
https://users.ece.cmu.edu/~omutlu/pub/mutlu_memory-scaling_memcon13_talk.pdf
William,
Thanks for the link to Onur Mutlu’s exhaustive 159-slide presentation from MemCon 2013.
Note that while he does predict that a 64Gb DRAM could spend 46% of its time refreshing, that’s only if we continue to refresh the way we always have. His suggestion is to use a smarter approach to refresh, which makes sense, since Moore’s Law will not only make the DRAM’s transistors cheaper, it will also make the transistors for the refresh circuitry cheaper. Slide 24 makes a very strong argument in favor of doing this: only the WORST cells need to be refreshed as often as the normal 64ms refresh cycle, and the vast majority can go more than 256ms without being refreshed.
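To see roughly where a number like 46% comes from, here is an illustrative model using my own assumed parameters rather than Mutlu’s exact figures: a standard DDRx device issues 8,192 auto-refresh commands in every 64ms retention window, and each command (tRFC) takes longer as density grows. The tRFC projections for 32Gb and 64Gb parts below are hypothetical, since no such devices exist today.

```python
# Illustrative DRAM refresh-overhead model. The refresh-command durations
# (tRFC) below are assumed/projected values for illustration only.
refresh_commands_per_window = 8192   # auto-refresh commands per retention window
window_ms = 64.0                     # standard 64 ms retention window

tRFC_ns = {"4Gb": 260, "8Gb": 350, "32Gb": 1800, "64Gb": 3600}  # assumed values

for density, trfc in tRFC_ns.items():
    busy_ms = refresh_commands_per_window * trfc / 1e6   # ns -> ms
    print(f"{density}: {busy_ms / window_ms:.0%} of the window spent refreshing")
# The 64Gb case lands in the mid-40% range, in line with the figure above.
```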
I wouldn’t write off DRAM scaling quite so soon. There are many very high-k materials that haven’t yet been put into play. These may make processing more difficult, but they pave the way to further capacitor shrinks.
This industry has a lot of amazingly brilliant inventors, and these people always astound me with their ability to take processes much farther than anyone thought possible only a few years ago.
Jim
Hi Jim:
Have you looked at the HybriDIMM from Netlist? It is competitive with 3D XPoint memory and comes in a DIMM package.
Any thoughts on this?
http://www.netlist.com
http://www.netlist.com/products/Storage-Class-Memory/HybriDIMM/default.aspx
Ash Charles,
Thanks for pointing that out. It’s a cool technology!
I wrote a white paper for Netlist about it in August. You can find it here:
http://Objective-Analysis.com/uploads/2016-08-15_Objective_Anlaysis_Tech_Brief_for_Netlist.pdf
Jim
Question: Nantero is discussed as the holy grail of speed/technology, as well as being easy to manufacture without changes to existing fabs and their processes. That being said, what is the holdup on getting this into a commercially available product? Or am I falling for false marketing again? Thanks for your input and knowledge in general on memory technologies!
Mark, Thanks for the compliment! Sorry I was so slow to reply.
All new technologies look far easier to put into production than they turn out to be. Even 3D NAND, which doesn’t use any new materials, is already three years behind its original schedule, and that’s why the NAND market’s in an undersupply right now.
Add a new material and things just get harder.
I really like the Nantero technology, but many other technologies have held the same promise yet have proven challenging to bring to market. I would expect any technology that requires a new material to take longer to put into production than its creators ever anticipated.