Using ECC to Reduce Power

A couple of papers at last week’s ISSCC (the IEEE International Solid-State Circuits Conference) caught The Memory Guy’s attention.  Both SK hynix and Samsung showed low-power DRAM designs in which the refresh rate of the DRAM was reduced in order to cut power consumption, with ECC applied to correct the resulting bit errors.

Although I had not heard of this approach before, I have recently learned that researchers at Carnegie Mellon University and my alma mater Georgia Tech presented the idea in a paper delivered at another IEEE conference in 2015: The International Conference on Dependable Systems and Networks.

Here’s the basic concept: DRAM consumes most of its power performing refresh cycles, the behavior that earned it the “Dynamic” part of its name: Dynamic Random-Access Memory.  This use of the word “Dynamic” is a euphemism.  In reality the bits are constantly decaying, but that doesn’t sound as nice.
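The ISSCC papers don’t spell out the exact codes used, but the principle is standard error correction: store a few extra check bits with each word so that a bit which decays between the now less frequent refreshes can be corrected on read.  Below is a minimal, illustrative Python sketch using a classic Hamming(7,4) single-error-correcting code; real on-die ECC would use much wider codewords, and none of the names here come from the papers.

```python
# Minimal Hamming(7,4) sketch: encode 4 data bits into 7 bits so that any
# single bit that decays between (less frequent) refreshes can be corrected.
# Illustrative only -- the ISSCC designs use their own, much wider codes.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """c: 7-bit codeword, possibly with one flipped bit -> corrected codeword."""
    c = list(c)
    # Recompute the parity checks; the syndrome is the 1-based error position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:                      # non-zero syndrome -> flip the bad bit
        c[syndrome - 1] ^= 1
    return c

data = [1, 0, 1, 1]
stored = encode(data)
stored[5] ^= 1                        # one cell decays because a refresh was skipped
repaired = correct(stored)
print(repaired == encode(data))       # True: the decayed bit was restored
```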

When the technology was developed in the early 1970s DRAM manufacturers offered to …

Continue reading “Using ECC to Reduce Power”

Is Intel Adding Yet Another Memory Layer?

At the International Solid-State Circuits Conference (ISSCC) last week a new “Last Level Cache” was introduced by a DRAM company called “Piecemakers Technology,” along with Taiwan’s ITRI and Intel.

The chip was designed with a focus on latency, rather than bandwidth.  This is unusual for a DRAM.

Presenter Tah-Kang Joseph Ting explained that, although successive generations of DDR interfaces have increased DRAM sequential bandwidth by a couple of orders of magnitude, latency has been stuck at 30ns, and it hasn’t improved with the WideIO interface, the new TSV-based High Bandwidth Memory (HBM), or the Hybrid Memory Cube (HMC).  Furthermore, there’s a much larger latency gap between the processor’s internal Level 3 cache and the system DRAM than there is between any adjacent cache levels.  The researchers decided to design a product to fill this gap.
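To get a feel for the gap the researchers are targeting, here is a quick back-of-the-envelope comparison in Python.  The cache latencies are round numbers I have assumed purely for illustration; only the roughly 30ns DRAM figure comes from the presentation.

```python
# Back-of-envelope look at the latency gap the Piecemakers part targets.
# The cache figures are typical round numbers assumed for illustration;
# only the ~30ns DRAM latency comes from the presentation.
levels = [
    ("L1 cache",  1.0),   # ns
    ("L2 cache",  4.0),
    ("L3 cache", 12.0),
    ("DRAM",     30.0),
]

for (faster, t_f), (slower, t_s) in zip(levels, levels[1:]):
    print(f"{faster:>8} -> {slower:<8}  gap: {t_s - t_f:4.0f} ns")

# The L3 -> DRAM step (~18 ns with these numbers) is wider than any
# cache-to-cache step, which is the hole a dedicated low-latency DRAM could fill.
```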

Many readers may be familiar with my bandwidth vs. cost chart that the Memory Guy has used to introduce SSDs and 3D XPoint memory.  The gap that needs filling is …

Continue reading “Is Intel Adding Yet Another Memory Layer?”

64-Layer 3D NAND Chips Revealed at ISSCC

This week both the Toshiba-Western Digital team and Samsung disclosed details of their 64-layer 3D NAND designs at the IEEE’s International Solid-State Circuits Conference (ISSCC).  The Memory Guy thought that it would be interesting to compare these two companies’ 64-layer chips against each other and against the one that Micron presented at last year’s ISSCC.

Allow me to point out that it’s no easy feat to get to 64 layers.  Not only must the process build all 64 layers (or actually pairs of layers plus some additional ones for control) across the entire 300mm wafer with high uniformity and no defects, but then holes must be etched through varying materials from the top to the bottom with absolutely parallel sides at aspect ratios of about 60:1; that is, each hole is 60 times as deep as it is wide.  After this the fab must deposit uniform layers of material onto the sides of these skinny holes without any variation in thickness.
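To put that 60:1 figure in perspective, here is a trivial bit of arithmetic.  The 100nm channel-hole diameter is an assumed, illustrative number; the papers don’t disclose the actual hole dimensions.

```python
# Rough sense of scale for a 60:1 aspect-ratio channel-hole etch.
# The 100nm hole diameter is an assumed figure for illustration only.
aspect_ratio  = 60      # depth : width, from the article
hole_width_nm = 100     # assumed channel-hole diameter

depth_nm = aspect_ratio * hole_width_nm
print(f"Etch depth: {depth_nm} nm (~{depth_nm / 1000:.0f} µm) "
      "through the full 64-layer stack")
```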

None of these processes have ever been used to build any other semiconductor — it’s all brand new.  This is what makes 3D NAND so challenging, and it’s why the technology is already 3 years behind its original schedule.

It’s not easy to tell from the conference papers whether or not …

Continue reading “64-Layer 3D NAND Chips Revealed at ISSCC”