Putting the Brakes on Added Memory Layers

For some time two sides of the computing community have been at odds.  One side aims to add layers to the memory/storage hierarchy while the other side is trying to halt this growth.

This has been embodied by recent attempts to stop using objective nomenclature for cache layers (L1, L2, L3) and to move to more subjective names that aim to discourage any attempt to add yet another layer.

This is a matter close to my heart, since Continue reading “Putting the Brakes on Added Memory Layers”

Samsung Admits to Needing EUV for Sub-20nm Nodes

About a year ago a rumor was circulating that Samsung was unable to yield its sub-20nm products without using EUV for the finer processes.  Since The Memory Guy doesn’t traffic in rumors, I did not publish anything about it at the time.

On March 25 the company verified the rumor, though, by issuing a statement: “Samsung is the first to adopt EUV in DRAM production.”  I found it interesting that the company turned something that was Continue reading “Samsung Admits to Needing EUV for Sub-20nm Nodes”

UPMEM Releases Processor-In-Memory Benchmark Results

On January 22 Processor-In-Memory (PIM) maker UPMEM announced what the company claims are “the first silicon-based PIM benchmarks.”  These benchmarks indicate that a Xeon server that has been equipped with UPMEM’s PIM DIMM can perform eleven times as many five-word string searches through 128GB of DRAM in a given amount of time as the Xeon processor can perform on its own.  The company tells us that this provides significant energy savings: the server consumes only one sixth the energy of a standard system.  By using algorithms that have been optimized for parallel processing, UPMEM claims to be able to process these searches up to 35 times as quickly as a conventional system.

Furthermore, the same system with an UPMEM PIM is said to Continue reading “UPMEM Releases Processor-In-Memory Benchmark Results”
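The workload UPMEM chose is a natural fit for PIM because each processor only ever scans the DRAM that sits beside it.  Below is a minimal sketch of that data-parallel structure in plain Python; it is not UPMEM’s SDK, and the partition count, needle, and sample data are arbitrary illustrative values.  Each partition is searched independently, and only the small per-partition counts are combined at the end.

```python
# Minimal sketch of a data-parallel string search of the kind PIM accelerates.
# Each "unit" scans only its own slice of the data, so no data moves between
# partitions -- only the per-partition hit counts are gathered at the end.
# (Illustrative only; this is not UPMEM's programming model.)

def count_in_partition(buf: bytes, needle: bytes) -> int:
    """Count occurrences of needle within one partition."""
    count = start = 0
    while True:
        pos = buf.find(needle, start)
        if pos < 0:
            return count
        count += 1
        start = pos + 1

def parallel_search(data: bytes, needle: bytes, units: int = 128) -> int:
    """Split the data across `units` partitions and sum the partial counts."""
    size = len(data) // units
    overlap = len(needle) - 1   # catch matches that straddle a partition boundary
    partitions = [
        data[i * size : (len(data) if i == units - 1 else (i + 1) * size + overlap)]
        for i in range(units)
    ]
    return sum(count_in_partition(p, needle) for p in partitions)

if __name__ == "__main__":
    sample = b"lorem ipsum dolor sit amet " * 10_000
    print(parallel_search(sample, b"dolor sit"))   # prints 10000
```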

Micron Debuts 16Gb 1Znm DDR5 DRAM Chip

(Photo: Micron 1Znm LPDDR4 DRAM)

At CES last week Micron Technology introduced a new DRAM.  The company’s second 1Znm production part, this 16Gb chip is an early supporter of the new DDR5 interface, opening the door to higher speeds and lower power consumption.

The company’s first 1Znm DRAM is the LPDDR4 part pictured on the left, which started Continue reading “Micron Debuts 16Gb 1Znm DDR5 DRAM Chip”

DRAM Prices Hit Historic Low

(Figure: weekly DRAM spot prices over the course of 2019, falling from $6.10 to $2.59)

Everyone knows that DRAM prices have been in a collapse since early this year, but last week DRAM prices hit a historic low point on the spot market.  Based on data The Memory Guy collected from spot-price source InSpectrum, the lowest spot price per gigabyte for branded DRAM reached $2.59 last week.  This is lower than the prior low of $2.62 last July, which equaled an earlier $2.62 record set in June 2016.  See the figure Continue reading “DRAM Prices Hit Historic Low”

UPMEM Processor-in-Memory at HotChips Conference

This week’s HotChips conference featured a concept called “Processing in Memory” (PIM) that has been around for a long time but that hasn’t yet found its way into mainstream computing.  One presenter said that his firm, a French company called UPMEM, hopes to change that.

What is PIM all about?  It’s an approach to improving processing speed by taking advantage of the extraordinary amount of bandwidth available within any memory chip.

The arrays inside a memory chip are pretty square: A word line selects a large number of bits (tens or hundreds of thousands) which all become active at once, each on its own bit line.  Then these myriad bits slowly take turns getting onto the I/O pins.
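A quick back-of-the-envelope comparison shows how large that mismatch is.  The figures below are illustrative round numbers chosen to match the “tens or hundreds of thousands” of bits mentioned above, not any particular DRAM datasheet.

```python
# Back-of-the-envelope illustration of the gap between the bandwidth available
# inside a DRAM array and what the chip's I/O pins can deliver.
# All figures are illustrative round numbers, not a specific datasheet.

bits_per_row_activation = 65_536     # bits latched by one word line into the sense amps
row_cycle_time_ns       = 50         # rough activate/restore/precharge cycle time

io_width_bits           = 8          # external data pins on a single x8 DRAM
io_rate_gbps_per_pin    = 3.2        # e.g. a DDR4-3200-class interface

internal_gbps = bits_per_row_activation / row_cycle_time_ns   # bits per ns = Gb/s
external_gbps = io_width_bits * io_rate_gbps_per_pin

print(f"internal array bandwidth ~ {internal_gbps:,.0f} Gb/s")   # ~1,311 Gb/s
print(f"external pin bandwidth   ~ {external_gbps:,.1f} Gb/s")   # ~25.6 Gb/s
print(f"ratio ~ {internal_gbps / external_gbps:.0f}x")           # ~51x
```

With these assumed figures only a few percent of what the array produces in one row cycle ever makes it off the chip, and that stranded internal bandwidth is the opportunity PIM tries to exploit.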

High-Bandwidth Memory (HBM) and the Hybrid Memory Cube (HMC) try to get past this bottleneck by stacking special DRAM chips and running Continue reading “UPMEM Processor-in-Memory at HotChips Conference”

Gordon Moore’s Original 1965 Article

The Memory Guy recently received a question asking where to find Gordon Moore’s famous paper on Moore’s Law.  It seems that Moore’s seminal 1965 article is not very easy to find on the web.

I did a little digging myself and found a copy for ready download.  It’s still good reading.  The Computer History Museum gives access to the original 1965 article.  This page also features a follow-up article written ten years later in 1975, and a 1995 thirty-year review of the phenomenon.

All are worth reading.

Back in 2010 I was able to attend the International Solid State Circuits Conference (ISSCC) at which Moore presented a keynote speech that looked back from an even more distant perspective.  A little digging found this presentation on The Engineering and Technology History Wiki in the form of a script and downloadable slides.  The presentation is titled “No Exponential is Forever”.  Although I know that Continue reading “Gordon Moore’s Original 1965 Article”

Intel’s Optane DIMM Price Model

With Intel’s Cascade Lake rollout last month came with a co-introduction of 3D XPoint Memory in a DIMM form factor, the Optane DIMM that had been promised since the first introduction of 3D XPoint Memory in mid-2015.  A lot of benchmarks were provided to make the case for using Optane DIMMs (formally known as the Intel Optane DC Persistent Memory), but not much was said about the pricing, except for assertions that significant savings were possible when Optane was used to replace some of the DRAM in a large computing system.

So…  How much does it cost?  Well, certain technical reports in resources like AnandTech probed sales channels to see what they could find, but The Memory Guy learned that the presentations Intel made to the press in advance of the Cascade Lake rollout contained not only the prices for the three Optane DIMM densities (128, 256, & 512GB), but also the prices of the DRAM DIMMs that they were being compared against.  I’ll get to that in a moment, but first let’s wade through the fundamentals of Intel’s Optane pricing strategy to understand why Intel needs to price it the way that it has.

In Objective Analysis’ report on 3D XPoint Memory, and in several presentations I have Continue reading “Intel’s Optane DIMM Price Model”

What’s Inside an Optane DIMM?

With the release of its Cascade Lake family of processors today (formally called the “2nd Generation Intel Xeon Scalable processor”) Intel disclosed more details about its Optane DIMM, which has been officially named the “Intel Optane DC Persistent Memory.”  This DIMM’s architecture is surprisingly similar to an SSD, even to the point of its having error correction and encryption!

The Memory Guy doesn’t generally cover SSDs, but I do cover DIMMs, so this is one of those posts that I could have put into either of my blogs: The Memory Guy or The SSD Guy.  I have decided to put it here with the hopes that it will be easier for members of the memory community to find.

The internal error correction, the encryption, and the fact that 3D XPoint Memory wears out and must use wear leveling all cause the Optane DIMM’s critical timing path to be slower than the critical path in a DRAM DIMM, rendering the Optane DIMM unsuitable for code execution.  This, and the fact that XPoint writes are slower than its reads, helps to explain why an Optane DIMM is never used as the only memory in a system: there is always a DRAM alongside the Optane DIMM to provide faster Continue reading “What’s Inside an Optane DIMM?”
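To picture the split this implies for software, here is a minimal sketch of how an application might keep its hot, latency-critical state in ordinary DRAM while mapping bulk data from a persistent-memory-backed file.  The file path is hypothetical; on a real system it would live on a DAX-mounted filesystem backed by the Optane DIMMs, and Intel’s PMDK libraries offer a far more complete programming model than raw mmap.

```python
# Minimal sketch of the two-tier arrangement described above: latency-critical
# structures stay in ordinary DRAM, while large persistent data is memory-mapped
# from a file on a persistent-memory-backed (DAX) filesystem.
# The path below is hypothetical, and real deployments would normally use
# Intel's PMDK rather than raw mmap.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/table.bin"   # hypothetical path on a pmem (DAX) mount
SIZE = 64 * 1024 * 1024              # 64 MiB of "persistent" data for the sketch

hot_index = {}                       # hot, frequently rewritten state lives in DRAM

fd = os.open(PMEM_FILE, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)
table = mmap.mmap(fd, SIZE)          # loads/stores now go to the persistent tier

record = b"key0001" + b"\x00" * 57   # one fixed 64-byte record, for illustration
table[0:64] = record                 # store into the mapped (persistent) region
hot_index["key0001"] = 0             # but the lookup structure stays in fast DRAM

table.flush(0, mmap.PAGESIZE)        # push the write toward persistence
table.close()
os.close(fd)
```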

Video: What’s Driving Tomorrow’s Semiconductors?

In early February the Samsung Strategy & Innovation Center asked The Memory Guy to present an outlook for semiconductors as a part of the company’s Samsung Forum series.

Samsung kindly posted a video of this presentation on-line for anyone to watch.

Naturally, the presentation is memory-focused since it consists of the Memory Guy presenting to the world’s leading memory chip supplier.  Still, it also covers total semiconductor revenues and demand drivers for future non-memory technologies, as well as memory chips.

During the presentation I explained that the next few years will bring semiconductors into new applications while chips will maintain their strength in existing markets. I showed how semiconductor demand doesn’t change much over time, but that the real swing factor in chip revenues is Continue reading “Video: What’s Driving Tomorrow’s Semiconductors?”