This week’s HotChips conference featured a concept called “Processing in Memory” (PIM) that has been around for a long time but that hasn’t yet found its way into mainstream computing. One presenter said that his firm, a French company called UPMEM, hopes to change that.
What is PIM all about? It’s an approach to improving processing speed by taking advantage of the extraordinary amount of bandwidth available within any memory chip.
The arrays inside a memory chip are pretty square: A word line selects a large number of bits (tens or hundreds of thousands) which all become active at once, each on its own bit line. Then these myriad bits slowly take turns getting onto the I/O pins.
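The scale of that internal bandwidth is easy to sketch with a back-of-the-envelope calculation. All of the figures below are illustrative assumptions, not specifications for any particular part:

```python
# Back-of-the-envelope comparison of a DRAM chip's internal row
# bandwidth versus its external pin bandwidth. All figures below are
# illustrative assumptions, not specs for any particular device.

ROW_BITS = 65_536      # bits activated at once by one word line (assumed ~64K)
ROW_CYCLE_NS = 50      # time to open and close one row, tRC (assumed)
IO_PINS = 8            # x8 external data bus (assumed)
IO_RATE_GBPS = 3.2     # per-pin transfer rate in Gbit/s (assumed)

internal_gbps = ROW_BITS / ROW_CYCLE_NS    # bits per ns = Gbit/s
external_gbps = IO_PINS * IO_RATE_GBPS

print(f"internal row bandwidth : {internal_gbps:,.0f} Gbit/s")
print(f"external pin bandwidth : {external_gbps:,.1f} Gbit/s")
print(f"ratio                  : {internal_gbps / external_gbps:,.0f}x")
```

Even with these rough numbers the array's internal bandwidth comes out around fifty times the pin bandwidth, which is the gap PIM architectures aim to exploit by putting processing elements next to the array.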
High-Bandwidth Memory (HBM) and the Hybrid Memory Cube (HMC) try to get past this bottleneck by stacking special DRAM chips and running Continue reading “UPMEM Processor-in-Memory at HotChips Conference”
The Memory Guy recently received a question asking where to find Gordon Moore’s famous paper on Moore’s Law. It seems that Moore’s seminal 1965 article is not very easy to find on the web.
I did a little digging myself and found a copy available for download. It’s still good reading. The Computer History Museum provides access to the original 1965 article. That page also features a follow-up article written ten years later, in 1975, and a 1995 thirty-year review of the phenomenon.
All are worth reading.
Back in 2010 I was able to attend the International Solid State Circuits Conference (ISSCC), at which Moore presented a keynote speech that looked back from an even more distant perspective. A little digging found this presentation on The Engineering and Technology History Wiki in the form of a script and downloadable slides. The presentation is titled “No Exponential is Forever”. Although I know that Continue reading “Gordon Moore’s Original 1965 Article”
Intel’s Cascade Lake rollout last month came with a co-introduction of 3D XPoint Memory in a DIMM form factor: the Optane DIMM that had been promised since 3D XPoint Memory was first introduced in mid-2015. A lot of benchmarks were provided to make the case for using Optane DIMMs (formally known as the Intel Optane DC Persistent Memory), but not much was said about pricing, except for assertions that significant savings were possible when Optane was used to replace some of the DRAM in a large computing system.
So… how much does it cost? Certain technical reports in resources like AnandTech probed sales channels to see what they could find, but The Memory Guy learned that the presentations Intel made to the press in advance of the Cascade Lake rollout contained not only prices for the three Optane DIMM densities (128, 256, & 512GB), but also the prices of the DRAM DIMMs they were being compared against. I’ll get to that in a moment, but first let’s wade through the fundamentals of Intel’s Optane pricing strategy to understand why Intel needs to price it the way that it has.
In Objective Analysis’ report on 3D XPoint Memory, and in several presentations I have Continue reading “Intel’s Optane DIMM Price Model”
With the release of its Cascade Lake family of processors today (formally called the “2nd Generation Intel Xeon Scalable processor”) Intel disclosed more details about its Optane DIMM, which has been officially named the “Intel Optane DC Persistent Memory.” This DIMM’s architecture is surprisingly similar to an SSD, even to the point of its having error correction and encryption!
The Memory Guy doesn’t generally cover SSDs, but I do cover DIMMs, so this is one of those posts that I could have put into either of my blogs: The Memory Guy or The SSD Guy. I have decided to put it here with the hopes that it will be easier for members of the memory community to find.
The internal error correction, the encryption, and the fact that 3D XPoint Memory wears out and must use wear leveling, all cause the Optane DIMM’s critical timing path to be slower than the critical path in a DRAM DIMM, rendering the Optane DIMM unsuitable for code execution. This, and the fact that XPoint writes are slower than its reads, all help to explain why an Optane DIMM is never used as the only memory in a system: there is always a DRAM alongside the Optane DIMM to provide faster Continue reading “What’s Inside an Optane DIMM?”
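The extra stages in that critical path can be sketched with placeholder latencies. The step names follow the post above, but every number here is an assumed figure for illustration, not an Intel specification:

```python
# A minimal sketch of why an Optane DIMM's read path is longer than a
# DRAM DIMM's: each access passes through extra steps (address remap for
# wear leveling, decryption, ECC check) that plain DRAM skips. All
# latencies below are made-up placeholders, purely for illustration.

DRAM_PATH = {"row access": 50}  # ns (assumed)
XPOINT_PATH = {
    "wear-level remap": 10,     # indirection-table lookup (assumed)
    "media read":       100,    # 3D XPoint cell access (assumed)
    "decrypt":          15,     # on-DIMM decryption (assumed)
    "ECC check":        15,     # internal error correction (assumed)
}

dram_ns = sum(DRAM_PATH.values())
xpoint_ns = sum(XPOINT_PATH.values())
print(f"DRAM critical path   : {dram_ns} ns")
print(f"Optane critical path : {xpoint_ns} ns ({xpoint_ns / dram_ns:.1f}x)")
```

Since every one of the extra steps sits in series on the read path, their latencies add, which is why code execution stays on the DRAM that sits alongside the Optane DIMM.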
In early February the Samsung Strategy & Innovation Center asked The Memory Guy to present an outlook for semiconductors as a part of the company’s Samsung Forum series.
Samsung kindly posted a video of this presentation on-line for anyone to watch.
Naturally the presentation is memory-focused, since it consists of the Memory Guy presenting to the world’s leading memory chip supplier. Still, it also covers total semiconductor revenues and the demand drivers behind future technologies, non-memory as well as memory.
During the presentation I explained that the next few years will bring semiconductors into new applications while chips will maintain their strength in existing markets. I showed how semiconductor demand doesn’t change much over time, but that the real swing factor in chip revenues is Continue reading “Video: What’s Driving Tomorrow’s Semiconductors?”
Every year the folks at VLSI Research provide The Memory Guy with an opportunity to share the latest Objective Analysis forecast with the world. They record a 20-minute video highlighting the forecast in a conversation between me and VLSI’s chairman, Dan Hutcheson.
There are now twelve videos on the site, one for each year from 2008 to 2019. That’s quite a collection!
Over the course of each video I not only present the forecast, but also give an overview of the thinking behind it. Typically I explain the impact of high or low capital spending in prior years, but in some forecasts I explain how other issues (in particular NAND flash’s excruciating conversion from planar to 3D) can create a shortage independent of capital spending patterns.
We also go over what went right or wrong with the prior year’s forecast. Things that go wrong are generally macroeconomic issues like the Continue reading “Forecast Videos Prove A History of Accuracy”
This week the International Solid State Circuits Conference (ISSCC) was held in San Francisco. What was there? The Memory Guy will tell you!
There were three NAND flash papers, one each from Toshiba, Samsung, and Western Digital Corp. (WDC).
Toshiba described a 96-layer QLC 1.33 terabit chip. Like the chip that Toshiba presented last year, this one uses CUA, which Toshiba calls “Circuit Under Array” although Micron, who originated the technology, says that CUA stands for “CMOS Under Array.” Toshiba improved the margins between the cells by extending the gate threshold ranges below zero, a move that forced them to re-think the sense amplifiers. They also implemented a newer, faster, lower-error way to Continue reading “Memory Sightings at ISSCC”
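The margin arithmetic behind that below-zero extension is straightforward to sketch. A QLC cell stores four bits, so its threshold-voltage window must be carved into sixteen distinguishable levels; widening the usable window widens the spacing between adjacent levels. The voltages below are assumed placeholders, not Toshiba’s actual numbers:

```python
# Sketch of the margin math behind extending QLC threshold-voltage
# ranges below zero. A QLC cell stores 4 bits, so its Vt window holds
# 16 levels; a wider window means wider spacing between levels.
# Voltage endpoints are assumed placeholders for illustration.

LEVELS = 2 ** 4  # QLC: 4 bits/cell -> 16 Vt levels

def level_spacing(v_min: float, v_max: float, levels: int = LEVELS) -> float:
    """Average spacing between adjacent threshold levels, in volts."""
    return (v_max - v_min) / (levels - 1)

positive_only = level_spacing(0.0, 6.0)   # window restricted to >= 0 V
below_zero    = level_spacing(-2.0, 6.0)  # window extended below 0 V

print(f"spacing, 0..6 V window : {positive_only:.3f} V")
print(f"spacing, -2..6 V window: {below_zero:.3f} V")
```

The wider spacing is the margin improvement; the price, as the post notes, is that sensing negative thresholds forced a re-think of the sense amplifiers.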
For more than a year The Memory Guy has been fielding questions about Micron’s QuantX products.
First announced at the 2016 Flash Memory Summit, this brand name has been assigned to Micron SSDs and DIMMs that use the Intel/Micron 3D XPoint Memory. Originally QuantX products were scheduled to ship in 2017, but Micron is currently projecting availability in 2019. My clients wonder why there have been these delays, and why Micron is not more actively marketing this product.
The simple answer is that it doesn’t make financial sense for Micron to ship these products at this time.
Within two weeks of 3D XPoint Memory’s first announcement at the 2015 Flash Memory Summit, I explained that the technology would take two years or more to reach manufacturing cost parity with DRAM, even though Intel and Micron loudly proclaimed that it was ten times denser than DRAM. That density advantage should eventually allow XPoint manufacturing costs to drop below DRAM costs, but any new technology, and even an old technology in low-volume production, suffers a decided scale disadvantage against DRAM, which sells close Continue reading “Where is Micron’s QuantX?”
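The scale argument can be sketched with simple arithmetic: cost per bit depends on both die density and the volume over which fixed costs are spread. Every number below is an assumed placeholder chosen only to show the shape of the trade-off, not real Micron or Intel figures:

```python
# Rough sketch of the scale economics: a denser die lowers the variable
# cost per gigabyte, but fixed costs (fab, R&D) amortized over a small
# wafer volume can swamp that advantage. All numbers are assumed
# placeholders for illustration.

def cost_per_gb(wafer_cost: float, gb_per_wafer: float,
                fixed_cost: float, wafers_per_year: float) -> float:
    """Variable wafer cost plus amortized fixed cost, per gigabyte."""
    return (wafer_cost + fixed_cost / wafers_per_year) / gb_per_wafer

# DRAM: mature process, huge volume (assumed figures)
dram = cost_per_gb(wafer_cost=1500, gb_per_wafer=5_000,
                   fixed_cost=5e9, wafers_per_year=10_000_000)

# XPoint: denser die (more GB per wafer) but far lower volume (assumed)
xpoint = cost_per_gb(wafer_cost=1500, gb_per_wafer=20_000,
                     fixed_cost=5e9, wafers_per_year=200_000)

print(f"DRAM   : ${dram:.2f}/GB")
print(f"XPoint : ${xpoint:.2f}/GB")
```

With these placeholder numbers the denser technology still costs more per gigabyte, because amortized fixed costs dominate at low volume: that is the gap that only high-volume production can close.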
Since moving to Silicon Valley The Memory Guy has worked with a number of impressively talented engineers from India. Some were educated in the US, while others were educated in India. One university that produces excellent engineers is the Indian Institute of Technology, or IIT.
It comes as no surprise, then, to find a valuable resource produced by an IIT faculty member. Dr. Sparsh Mittal, an assistant professor at IIT Hyderabad, reached out to me to share some papers that he thought might be of interest to Memory Guy readers. They were a few of roughly 40 papers that he has posted on his publications page. He explained that he previously worked at Oak Ridge National Lab, in the US.
Dr. Sparsh has published several very comprehensive surveys on memory systems, both conventional and emerging, covering topics like DRAM reliability, NVM/Flash, ReRAM-based processing-in-memory, and the architecture of neural networks. The web page lists 34 surveys, eight of them Continue reading “Valuable Memory Technical Resources”
It’s earnings call season, and we have heard of a slowing DRAM market and NAND flash price declines from Micron, SK hynix, Intel, and now Samsung. DRAM prices have stopped increasing, and that can be viewed as a precursor to a price decline.
Samsung’s 31 October, 2018 3Q18 earnings call vindicated Objective Analysis‘ forecast for a 2H18 downturn in memories that will take the rest of the semiconductor market with it.
Those familiar with our forecast know that for a few years we have been predicting a downturn in the second half of this year as NAND flash prices fall, followed by a DRAM price collapse. After the DRAM collapse the rest of the semiconductor market will undergo a downturn.
We’ve been calling for this downturn for some time. Dan Hutcheson at VLSI Research has been videotaping our forecast every December for the past Continue reading “Memory Market Falling, as Predicted”