Since I am the Memory Guy, I hate learning that I missed something new and cool in the world of memories, but somehow I was unaware of last week’s Memsys conference in Washington, DC, until a participant notified me on Saturday that his paper, “Reverse Engineering of DRAMs: Row Hammer with Crosshair,” had been given the best paper award.
A look at the Memsys website reveals a very intriguing academic conference. About sixty papers were presented in eight interesting sessions:
- Issues in High Performance Computing
- Nonvolatile Main Memories and DRAM Caches, Parts I & II
- Hybrid Memory Cube and Alternative DRAM Channels
- Thinking Outside the Box
- Improving the DRAM Device Architecture
- Issues and Interconnects for 2.5D and 3D Packaging
- Some Amazingly Cool Physical Experiments
in addition to a few apparently fascinating keynotes.
Fortunately, all of the papers are… Continue reading “Memsys: A New Memory Conference”
In a November 25 press release Samsung introduced a 128GB DDR4 DIMM. This is eight times the density of the largest broadly-available DIMM and rivals the full capacity of mainstream SSDs.
Naturally, the first question is: “How do they do that?”
To get all the chips into the DIMM format, Samsung uses TSV interconnects on the DRAMs. The module’s 36 DRAM packages each contain four 8Gb (1GB) chips, resulting in 144 DRAM chips squeezed into a standard DIMM format. Each package also includes a data buffer chip, making the stack closely resemble High Bandwidth Memory (HBM) or the Hybrid Memory Cube (HMC).
Since these 36 packages (or worse, 144 DRAM chips) would overload the processor’s address bus, the DIMM uses the RDIMM protocol – the address and control pins are buffered on the DIMM before they reach the DRAM chips, cutting the processor bus loading by an order of magnitude or more. RDIMMs are supported by certain server platforms.
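The capacity arithmetic is easy to sanity-check. A minimal sketch, assuming a standard ECC RDIMM organization in which the equivalent of 4 of the 36 packages carry ECC bits (the press release doesn’t spell this out, so the ECC split is my assumption):

```python
# Back-of-the-envelope check of Samsung's 128GB TSV RDIMM capacity.
# Assumption (not from the press release): 36 packages, with the
# equivalent of 4 packages holding ECC bits, as in a typical ECC DIMM.

GBIT_PER_CHIP = 8        # each DRAM die is 8Gb (1GB)
CHIPS_PER_PACKAGE = 4    # 4-high TSV stack per package
PACKAGES = 36

total_chips = PACKAGES * CHIPS_PER_PACKAGE              # 144 dies
raw_gbytes = total_chips * GBIT_PER_CHIP // 8           # 144GB raw
data_gbytes = raw_gbytes * 32 // 36                     # minus ECC share

print(total_chips)   # 144
print(data_gbytes)   # 128
```

So the 144 stacked dies supply 144GB of raw DRAM, of which 128GB is user-visible data — consistent with the announced density.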
The Memory Guy asked Samsung whether… Continue reading “Samsung’s Colossal 128GB DIMM”
Intel and Micron today announced that the new version of Intel’s Xeon Phi, a highly parallel coprocessor for research applications, will be built using a custom version of Micron’s Hybrid Memory Cube, or HMC.
This is only the second announced application for this new memory product – the first was a Fujitsu supercomputer back in November.
For those who, like me, were unfamiliar with the Xeon Phi, it’s a module that uses high core-count processors for problems that can be solved with high degrees of parallelism. My friend and processor guru Nathan Brookwood tells me… Continue reading “Intel to Use Micron Hybrid Memory Cube”