White Paper: The Future of Low-Latency Memory

[Chart showing where DDR4, DDR5, HBM2E, and OMI fit in the capacity-bandwidth space]

Tom Coughlin and I have just published a new white paper that is now available on the Objective Analysis website.  It examines the way that processors communicate with DRAM, and how loading problems get in the way of increasing speed.

We compare DDR against HBM (High Bandwidth Memory) and a newer standard called OMI, the Open Memory Interface, which is used by IBM’s POWER processors.  DDR suffers from loading issues that limit its speed.  HBM provides significantly greater speed, but has capacity limitations.  OMI provides near-HBM speeds while supporting larger capacities than DDR.  This post’s graphic, taken from the white paper, depicts that visually.  Click on it to see a larger rendition.

OMI supports high memory bandwidths by moving the DRAM’s I/O drivers out of the processor chip and onto the DIMM.  The processor communicates with the DIMM via a serial bus that is based on the same PHY as is used for PCIe, but with a latency-optimized protocol.  As a result, each DRAM channel consumes far less area on the processor chip.  The serial interface carries a minor latency penalty, but this can be more than offset by the significantly larger memory capacities that OMI offers.
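To see why a serial interface saves so much processor real estate, a rough back-of-the-envelope comparison of signal counts helps. The sketch below compares bandwidth delivered per processor signal pin for a parallel DDR4 channel versus a serial OMI channel. All of the figures are illustrative ballpark numbers of my own choosing, not taken from the white paper: roughly 112 signals for a DDR4-3200 channel at 25.6 GB/s, and 32 wires (8 differential lanes in each direction) for an OMI channel at about 25.6 GB/s per direction.

```python
# Illustrative comparison of processor-side signal counts for a parallel
# DDR4 channel vs. a serial OMI channel.  All numbers are rough,
# assumed ballpark figures, not from the white paper.

def gb_per_s_per_pin(bandwidth_gb_s: float, signal_pins: int) -> float:
    """Bandwidth delivered per processor signal pin, in GB/s."""
    return bandwidth_gb_s / signal_pins

# Assumed DDR4-3200 channel: 64 data + 8 ECC + ~40 command/address/clock
ddr4_pins = 64 + 8 + 40
ddr4_bw = 25.6  # GB/s per channel

# Assumed OMI channel: 8 lanes x 2 wires (differential) x 2 directions
omi_pins = 8 * 2 * 2
omi_bw = 25.6  # GB/s per direction

print(f"DDR4: {gb_per_s_per_pin(ddr4_bw, ddr4_pins):.2f} GB/s per pin")
print(f"OMI:  {gb_per_s_per_pin(omi_bw, omi_pins):.2f} GB/s per pin")
```

Under these assumptions the serial link delivers several times the bandwidth per pin, which is why each OMI channel needs so much less beachfront on the processor die.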

The 7-page white paper can be downloaded for free from the White Papers page of the Objective Analysis website.  The Memory Guy finds that it’s a quick and compelling read, even if he does say so himself!

Those of you who will be “Attending” SNIA’s virtual Persistent Memory + Computational Storage Summit (formerly just the Persistent Memory Summit) can hear Tom and me briefly discuss OMI during our presentation on emerging persistent memory technologies: “Dynamic Trends in Nonvolatile Memory Technologies.”