Intel has recently announced a technology, which the company calls PowerVia, that could inadvertently help reduce the cost of HBM – high-bandwidth memory.
HBM is a stack of up to twelve DRAM chips interconnected by more than one thousand TSVs – Through-Silicon Vias. These are metal-filled holes etched straight through the DRAM die that allow signals to travel vertically through the chip. TSVs are an alternative to more conventional wire bonding.
HBM sells for significantly more than Continue reading “Could Intel’s PowerVia Lower HBM Costs?”
Tom Coughlin and I have just published a new white paper that is now available on the Objective Analysis website. It examines the way that processors communicate with DRAM, and how problems that stem from loading get in the way of increasing speed.
We compare DDR against HBM (High Bandwidth Memory) and a newer Continue reading “White Paper: The Future of Low-Latency Memory”
For some time two sides of the computing community have been at odds: one side aims to add layers to the memory/storage hierarchy, while the other side is trying to halt this growth.
This conflict has been embodied by recent attempts to move away from the objective nomenclature for cache layers (L1, L2, L3) toward more subjective names intended to discourage the addition of any new layer.
This is a matter close to my heart, since Continue reading “Putting the Brakes on Added Memory Layers”
About a year ago a rumor was circulating that Samsung was unable to yield its sub-20nm products without using EUV for the finer processes. Since The Memory Guy doesn’t traffic in rumors, I published nothing about it at the time.
On March 25, though, the company verified the rumor by issuing a statement: “Samsung is the first to adopt EUV in DRAM production.” I found it interesting that the company turned something that was Continue reading “Samsung Admits to Needing EUV for Sub-20nm Nodes”
On January 22 Processor-In-Memory (PIM) maker UPMEM announced what the company claims are “the first silicon-based PIM benchmarks.” These benchmarks indicate that a Xeon server equipped with UPMEM’s PIM DIMM can perform eleven times as many five-word string searches through 128GB of DRAM in a given amount of time as the Xeon processor can on its own. The company tells us that this provides significant energy savings: the server consumes only one sixth the energy of a standard system. With algorithms optimized for parallel processing, UPMEM claims these searches can run up to 35 times as fast as on a conventional system.
Furthermore, the same system with an UPMEM PIM is said to Continue reading “UPMEM Releases Processor-In-Memory Benchmark Results”
At CES last week Micron Technology introduced a new DRAM. The company’s second 1Znm production part, this 16Gb chip is one of the first to support the new DDR5 interface, opening the door to higher speeds and lower power consumption.
The company’s first 1Znm DRAM is the LPDDR4 part pictured on the left, which started Continue reading “Micron Debuts 16Gb 1Znm DDR5 DRAM Chip”
Everyone knows that DRAM prices have been in a collapse since early this year, but last week they hit a historic low on the spot market. Based on data The Memory Guy collected from spot-price source InSpectrum, the lowest spot price per gigabyte for branded DRAM reached $2.59 last week. This is lower than the prior low of $2.62 last July, which equaled an earlier $2.62 record set in June, 2016. See the figure Continue reading “DRAM Prices Hit Historic Low”
This week’s HotChips conference featured a concept called “Processing in Memory” (PIM) that has been around for a long time but that hasn’t yet found its way into mainstream computing. One presenter said that his firm, a French company called UPMEM, hopes to change that.
What is PIM all about? It’s an approach to improving processing speed by taking advantage of the extraordinary amount of bandwidth available within any memory chip.
The arrays inside a memory chip are pretty square: A word line selects a large number of bits (tens or hundreds of thousands) which all become active at once, each on its own bit line. Then these myriad bits slowly take turns getting onto the I/O pins.
High-Bandwidth Memory (HBM) and the Hybrid Memory Cube (HMC) try to get past this bottleneck by stacking special DRAM chips and running Continue reading “UPMEM Processor-in-Memory at HotChips Conference”
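The bandwidth mismatch inside a memory chip can be illustrated with a rough back-of-the-envelope calculation. All of the figures below (row width, row cycle time, pin count, transfer rate) are round illustrative assumptions, not measurements of any particular chip:

```python
# Rough, illustrative comparison of a DRAM chip's internal array
# bandwidth vs. the bandwidth available at its I/O pins.
# All numbers are assumed round figures for illustration only.

bits_per_row_activation = 65_536       # bits sensed when one word line fires (assumption)
row_activations_per_sec = 20_000_000   # ~50 ns row cycle time (assumption)

# Everything on the open row is available at once inside the chip.
internal_bw_bits = bits_per_row_activation * row_activations_per_sec

io_pins = 8                            # a x8 DRAM part (assumption)
io_transfer_rate = 3_200_000_000       # 3200 MT/s per pin (assumption)

# But only a few pins' worth of data can leave the chip per second.
external_bw_bits = io_pins * io_transfer_rate

print(f"Internal array bandwidth: {internal_bw_bits / 8 / 1e9:.0f} GB/s")
print(f"External pin bandwidth:   {external_bw_bits / 8 / 1e9:.1f} GB/s")
print(f"Ratio: {internal_bw_bits / external_bw_bits:.0f}x")
```

Even with these conservative assumptions, the array delivers on the order of fifty times more bandwidth than the pins can carry off-chip, which is the headroom that PIM designs such as UPMEM's aim to exploit by computing where the data already is.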
The Memory Guy recently received a question asking where to find Gordon Moore’s famous paper on Moore’s Law. It seems that Moore’s seminal 1965 article is not very easy to find on the web.
I did a little digging myself and found a copy for ready download. It’s still good reading. The Computer History Museum gives access to the original 1965 article. This page also features a follow-up article written ten years later in 1975, and a 1995 thirty-year review of the phenomenon.
All are worth reading.
Back in 2010 I was able to attend the International Solid State Circuits Conference (ISSCC), at which Moore presented a keynote speech that looked back from an even more distant perspective. A little digging found this presentation on The Engineering and Technology History Wiki in the form of a script and downloadable slides. The presentation is titled “No Exponential is Forever”. Although I know that Continue reading “Gordon Moore’s Original 1965 Article”
Intel’s Cascade Lake rollout last month came with the co-introduction of 3D XPoint Memory in a DIMM form factor: the Optane DIMM that had been promised since 3D XPoint Memory was first introduced in mid-2015. A lot of benchmarks were provided to make the case for using Optane DIMMs (formally known as the Intel Optane DC Persistent Memory), but not much was said about pricing, aside from assertions that significant savings were possible when Optane replaced some of the DRAM in a large computing system.
So… How much does it cost? Well, certain technical reports in resources like AnandTech probed sales channels to see what they could find, but The Memory Guy learned that the presentations Intel made to the press in advance of the Cascade Lake rollout contained not only prices for the three Optane DIMM densities (128, 256, & 512GB), but also the prices of the DRAM DIMMs they were being compared against. I’ll get to that in a moment, but first let’s wade through the fundamentals of Intel’s Optane pricing strategy to understand why Intel needs to price it the way that it has.
In Objective Analysis’ report on 3D XPoint Memory, and in several presentations I have Continue reading “Intel’s Optane DIMM Price Model”