Samsung has been strongly promoting its “Aquabolt-XL” Processor-In-Memory (PIM) devices for the past year. In this second post of a two-part series The Memory Guy will present other companies’ similar PIM devices, and will discuss the PIM approach’s outlook for commercial success.
Part 1 of this series explains the concept of Processing in Memory (PIM), details Samsung’s Aquabolt-XL design, and shares some performance data. It can be found HERE.
Samsung’s Not the First PIM Maker
This is not at all the first Continue reading “Samsung’s Aquabolt-XL Processor-In-Memory (Part 2)”
This week five more Objective Analysis Briefs became available. This handful covers commonly held myths and the basic underpinnings of semiconductor market cycles. All are drawn from the most interesting and timeless of the Insights that we have published on the membership website Smartkarma. Now Objective Analysis is providing them to our friends for a reasonable price.
The Brief is a very Continue reading “More New Objective Analysis Briefs Available”
For the past year, since ISSCC in February 2021, Samsung has been strongly promoting its “Aquabolt-XL” Processor-In-Memory (PIM) devices. In this two-part post The Memory Guy will explain the Aquabolt-XL architecture, its performance, other companies’ similar devices, and discuss the PIM approach’s outlook for commercial success.
Processing in memory is not a Continue reading “Samsung’s Aquabolt-XL Processor-In-Memory (Part 1)”
Although Objective Analysis has published its “Brief” format white papers for some time, this line has never received the focus that it deserves. To remedy that, we are taking the most interesting and timeless of the Insights that we have published on membership website Smartkarma and providing them to our friends for a reasonable price.
The Brief is a very short report format used to make a succinct Continue reading “Introducing New Objective Analysis Briefs”
Intel has recently announced a technology, called PowerVia, that could inadvertently help reduce the cost of HBM (high-bandwidth memory).
HBM is a stack of up to twelve DRAM chips that are interconnected using over one thousand TSVs – Through-Silicon Vias. These are metal-filled holes etched right through the DRAM die to allow signals to move vertically through the chip. It’s an alternative to more conventional wire bonding.
HBM sells for significantly more than Continue reading “Could Intel’s PowerVia Lower HBM Costs?”
Tom Coughlin and I have just published a new white paper that is now available on the Objective Analysis website. It examines the way that processors communicate with DRAM, and how problems that stem from loading get in the way of increasing speed.
We compare DDR against HBM (High Bandwidth Memory) and a newer Continue reading “White Paper: The Future of Low-Latency Memory”
For some time two sides of the computing community have been at odds: one side aims to add layers to the memory/storage hierarchy, while the other is trying to halt this growth.
This has been embodied by recent attempts to stop using objective nomenclature for cache layers (L1, L2, L3) and to move to more subjective names intended to head off any attempt to add yet another layer.
This is a matter close to my heart, since Continue reading “Putting the Brakes on Added Memory Layers”
About a year ago a rumor was circulating that Samsung was unable to yield its sub-20nm products without using EUV for the finer processes. Since The Memory Guy doesn’t traffic in rumors, I did not publish anything about it at the time.
On March 25 the company verified the rumor, though, by issuing a statement that: “Samsung is the first to adopt EUV in DRAM production.” I found it interesting that the company turned something that was Continue reading “Samsung Admits to Needing EUV for Sub-20nm Nodes”
On January 22 Processor-In-Memory (PIM) maker UPMEM announced what the company claims are: “The first silicon-based PIM benchmarks.” These benchmarks indicate that a Xeon server equipped with UPMEM’s PIM DIMM can perform eleven times as many five-word string searches through 128GB of DRAM in a given amount of time as the Xeon processor can on its own. The company tells us that this provides significant energy savings: the server consumes only one sixth the energy of a standard system. By using algorithms that have been optimized for parallel processing, UPMEM claims to be able to process these searches up to 35 times as quickly as a conventional system.
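Taking UPMEM’s announced figures at face value, the throughput and energy claims can be combined into a single efficiency number. A minimal sketch of that arithmetic (the numbers are the company’s, the derivation is ours):

```python
# UPMEM's announced figures, taken at face value:
# 11x the string-search throughput of the Xeon alone,
# at 1/6 the energy of a standard system.
speedup = 11          # searches per unit time vs. Xeon alone
energy_ratio = 1 / 6  # system energy vs. a standard server

# Energy cost per search, relative to the baseline system:
# doing 11x the work on 1/6 the energy.
energy_per_search = energy_ratio / speedup  # roughly 1/66 of baseline

# Combined performance-per-watt improvement.
perf_per_watt_gain = speedup / energy_ratio  # roughly 66x

print(f"Relative energy per search: {energy_per_search:.4f}")
print(f"Performance-per-watt gain:  {perf_per_watt_gain:.0f}x")
```

In other words, if both claims hold simultaneously, the PIM system delivers on the order of 66 times the search performance per watt of the baseline server.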
Furthermore, the same system with an UPMEM PIM is said to Continue reading “UPMEM Releases Processor-In-Memory Benchmark Results”
At CES last week Micron Technology introduced a new DRAM. The company’s second 1Znm production part, this 16Gb chip is among the first to support the new DDR5 interface, opening the door to higher speeds and lower power consumption.
The company’s first 1Znm DRAM is the LPDDR4 part pictured on the left, which started Continue reading “Micron Debuts 16Gb 1Znm DDR5 DRAM Chip”