A couple of papers at last week’s ISSCC (the IEEE International Solid-State Circuits Conference) caught The Memory Guy’s attention. Both SK hynix and Samsung showed low-power DRAM designs in which the refresh rate of the DRAM was reduced in order to cut power consumption, with ECC applied to correct the resulting bit errors.
Although I had not heard of this approach before, I have recently learned that researchers at Carnegie Mellon University and my alma mater Georgia Tech presented the idea in a paper delivered at another IEEE conference in 2015: The International Conference on Dependable Systems and Networks.
Here’s the basic concept: DRAM consumes most of its power performing refresh cycles, the mechanism behind the “Dynamic” part of its name: Dynamic Random-Access Memory. This use of the word “Dynamic” is a euphemism: in reality the bits are constantly decaying, but that doesn’t sound as nice.
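The trade-off these papers exploit can be sketched with a toy simulation. All the constants below are invented for illustration (neither paper's actual figures are used): stretching the refresh interval lets more cells decay before they are recharged, and a single-error-correcting ECC masks the resulting flips as long as no codeword collects two of them.

```python
import random

random.seed(42)

WORDS = 10_000        # number of ECC codewords in our toy DRAM
BITS_PER_WORD = 64    # data bits protected per SECDED codeword

def decayed_bits(refresh_interval_ms, per_bit_fail_rate_per_ms=1e-7):
    """Return, per codeword, how many bits flipped before the next refresh.

    The per-bit failure rate is a made-up illustrative constant: the
    longer a cell goes without refresh, the more likely its charge leaks.
    """
    p = per_bit_fail_rate_per_ms * refresh_interval_ms
    errors = []
    for _ in range(WORDS):
        flips = sum(1 for _ in range(BITS_PER_WORD) if random.random() < p)
        errors.append(flips)
    return errors

for interval in (64, 256, 1024):   # ms; 64 ms is the usual JEDEC refresh norm
    errs = decayed_bits(interval)
    single = sum(1 for e in errs if e == 1)   # correctable by SECDED
    multi = sum(1 for e in errs if e >= 2)    # uncorrectable
    print(f"{interval:5d} ms: {single} correctable, {multi} uncorrectable words")
```

The sketch shows the design pressure: refresh power falls as the interval grows, but the ECC must be strong enough that multi-bit words stay vanishingly rare.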
When the technology was developed in the early 1970s DRAM manufacturers offered to Continue reading
At the International Solid-State Circuits Conference (ISSCC) last week a new “Last Level Cache” chip was introduced by a DRAM company called “Piecemakers Technology,” along with Taiwan’s ITRI and Intel.
The chip was designed with a focus on latency, rather than bandwidth. This is unusual for a DRAM.
Presenter Tah-Kang Joseph Ting explained that, although successive generations of DDR interfaces have increased DRAM sequential bandwidth by a couple of orders of magnitude, latency has been stuck at 30ns, and it hasn’t improved with the WideIO interface or with the new TSV-based High Bandwidth Memory (HBM) or Hybrid Memory Cube (HMC). Furthermore, there’s a much larger latency gap between the processor’s internal Level 3 cache and the system DRAM than there is between any two adjacent cache levels. The researchers decided to design a product to fill this gap.
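The gap-filling argument can be illustrated with the standard average-memory-access-time formula. The last-level-cache numbers below (10ns hit time, 80% hit rate) are hypothetical placeholders; only the 30ns DRAM latency comes from the talk.

```python
def amat(hit_time, hit_rate, miss_penalty):
    """Average memory access time: hit time plus the weighted miss penalty."""
    return hit_time + (1 - hit_rate) * miss_penalty

DRAM_LATENCY = 30.0   # ns, the figure quoted in the presentation

# Without a last-level cache, every L3 miss pays the full DRAM latency.
baseline = DRAM_LATENCY

# With a hypothetical low-latency DRAM cache between L3 and main memory
# (hit rate and latency here are invented illustrative values).
LLC_LATENCY = 10.0    # ns
LLC_HIT_RATE = 0.8
with_llc = amat(LLC_LATENCY, LLC_HIT_RATE, DRAM_LATENCY)

print(f"L3 miss cost without LLC: {baseline:.1f} ns")
print(f"L3 miss cost with LLC:    {with_llc:.1f} ns")
```

Under these assumed numbers the average cost of an L3 miss drops from 30ns to 16ns, which is the kind of improvement a latency-focused DRAM between L3 and main memory is meant to deliver.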
On its way out the door the Obama Administration put together a proposed response to China’s plans to invest $150 billion in the semiconductor market over the next five years. It seems that the US semiconductor industry views China’s investment as a threat to its position in the market.
Last week the President’s Council of Advisors on Science and Technology (PCAST) delivered a 25-page Report to the President entitled: “Ensuring Long-Term U.S. Leadership in Semiconductors.”
You might ask: “Who is PCAST?” The organization states its mission in this paragraph: “The President’s Council of Advisors on Science and Technology (PCAST) is an advisory group of the Nation’s leading scientists and engineers, appointed by the President to augment the science and technology advice available to him from inside the White House and from cabinet departments and other Federal agencies. PCAST is consulted about, and often makes policy recommendations concerning, the full range of issues where understandings from the domains of science, technology, and innovation bear potentially on the policy choices before the President.”
PCAST has a small Semiconductors Working Group whose elite members include Continue reading
Since I am the Memory Guy, I hate learning that I missed something new and cool in the world of memories, but somehow I was unaware of last week’s Memsys conference in Washington DC until a participant notified me on Saturday that his paper, “Reverse Engineering of DRAMs: Row Hammer with Crosshair,” had received the Best Paper award.
A look at the Memsys website reveals a very intriguing academic conference. About sixty papers were presented in eight interesting sessions:
- Issues in High Performance Computing
- Nonvolatile Main Memories and DRAM Caches, Parts I & II
- Hybrid Memory Cube and Alternative DRAM Channels
- Thinking Outside the Box
- Improving the DRAM Device Architecture
- Issues and Interconnects for 2.5D and 3D Packaging
- Some Amazingly Cool Physical Experiments
in addition to a few apparently fascinating keynotes.
Fortunately, all of the papers are Continue reading
The Memory Guy has been getting calls lately asking how to tell that a shortage is developing. My answer is always the same: It’s hard to tell.
One indicator appears when spot prices that had been below contract prices rise above them. This doesn’t happen for all components or densities of DRAM or NAND flash at the same time, and some of these transitions are temporary. It takes patience to see whether a change was momentary or the onset of a shortage.
DRAM spot prices have generally been below contract prices since August 2014, but this month they rose above contract prices. NAND flash spot prices also fell below contract prices in mid-2014, but today NAND’s spot price remains lower than contract prices.
Lead times are another indicator. If the lead times for a number of components increase, then those chips are moving into a shortage. Lead times have recently been rising for both NAND flash and DRAM.
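The two indicators just described can be combined into a simple screen. The weekly series below are invented for illustration; the crossover-plus-lengthening logic is the point, not the numbers.

```python
def shortage_signals(spot, contract, lead_times):
    """Flag weeks where both classic shortage indicators fire at once:
    spot price crossing above contract, and lead times lengthening.

    All three inputs are parallel weekly series.
    """
    signals = []
    for week in range(1, len(spot)):
        crossed = (spot[week - 1] <= contract[week - 1]
                   and spot[week] > contract[week])
        lengthening = lead_times[week] > lead_times[week - 1]
        if crossed and lengthening:
            signals.append(week)
    return signals

spot      = [2.40, 2.45, 2.55, 2.60]   # $/GB, made-up values
contract  = [2.50, 2.50, 2.50, 2.50]
lead_time = [6, 6, 8, 9]               # weeks, made-up values

print(shortage_signals(spot, contract, lead_time))  # weeks where both fire
```

As the post cautions, a single firing of this screen may still be a momentary blip; only persistence over several weeks suggests a real shortage.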
A third indication occurs when suppliers start to Continue reading
According to a Business Korea article Samsung announced, during a June 14 investor event, plans to reduce its DRAM capital spending and shift its focus to 3D NAND.
The Memory Guy sees this as an unsurprising move. This post’s chart is an estimate of DRAM wafer production from 1991 through 2014. There is a definite downtrend over the past few years. The peak was reached in 2008 at an annual production of slightly below 15 million wafers, with a subsequent dip in 2009 thanks to the global financial collapse at the end of 2008. After a slight recovery in 2010 the industry entered a period of steady decline.
The industry already has more than enough DRAM wafer capacity for the foreseeable future.
Why is this happening? The answer is relatively simple: the gigabytes per wafer on a DRAM wafer are growing faster than the market’s demand for gigabytes.
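A quick back-of-the-envelope calculation shows why supply growth outpacing demand growth shrinks wafer demand. The 30% and 20% growth rates below are hypothetical placeholders, not figures from the post:

```python
# Hypothetical annual growth rates, for illustration only:
gb_per_wafer_growth = 0.30   # 30% more gigabytes per wafer each year
gb_demand_growth = 0.20      # 20% more gigabytes demanded each year

# Wafers needed = demanded gigabytes / gigabytes per wafer, so the
# year-over-year change in wafer demand is the ratio of the two growths.
wafer_demand_change = (1 + gb_demand_growth) / (1 + gb_per_wafer_growth) - 1
print(f"Annual change in wafers needed: {wafer_demand_change:+.1%}")
```

Under these assumed rates the industry would need about 7.7% fewer wafers each year, which is the dynamic behind the declining wafer production in the chart.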
Let’s dive into that in more detail. The number of gigabytes on a DRAM wafer increases according Continue reading
For almost two years there has been a lot of worry about DRAM spot prices. This post’s graphic plots the lowest weekly spot price per gigabyte for the cheapest DRAM, regardless of density, on a semi-logarithmic scale. (Remember that on a semi-logarithmic scale constant growth appears as a straight line.)
The downward-sloping red line on the right side of the chart shows that DRAM prices have been sliding at a 45% annual rate since October 2014. This has a lot of people worried about the health of the industry.
What most fail to remember, though, is that DRAM spot prices hit their lowest points twice in 2011: $2.40 in August, then $2.20 in November. Today’s lowest DRAM spot prices have only recently dipped below the $2.52 point hit in October 2014.
The black dotted line in the chart is intended to focus readers’ attention on DRAM costs, which decrease at a 30% average Continue reading
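The semi-logarithmic point is worth a quick check: a constant percentage decline multiplies the price by the same factor each year, so the logarithm of the price falls by the same amount each year, tracing a straight line. A sketch using the post's $2.52 starting point and 45% rate:

```python
import math

p0 = 2.52   # $/GB, the October 2014 low cited above
r = 0.45    # 45% annual price decline

# Price after t years of constant-rate decline:
prices = [p0 * (1 - r) ** t for t in range(4)]

# On a semi-log chart the log of the price falls by the same amount
# each year, which is what makes the trend a straight line.
log_steps = [math.log(prices[t + 1]) - math.log(prices[t]) for t in range(3)]
print([round(p, 3) for p in prices])
print([round(s, 4) for s in log_steps])   # identical steps: log(0.55)
```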
One of those nasty little secrets about DRAM is that bits may be corrupted simply by reading bits in a different part of the chip. This has been given the name “Row Hammer” (or Rowhammer) because repeated accesses to a single one of the DRAM’s internal “rows” of bits can bleed charge off of the adjacent rows, causing bits to flip. These repeated accesses are referred to as “hammering”.
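A toy model makes the disturbance mechanism concrete. The constants below are invented for illustration and bear no relation to real device physics; the point is only that repeated activations of one row cumulatively drain its neighbors until their bits flip.

```python
# Toy model of Row Hammer disturbance (all constants are made up):
# each activation of a row bleeds a small amount of charge from the two
# physically adjacent rows.  If a neighbor's charge drops below the
# sense threshold before its next refresh recharges it, its bit flips.

ROWS = 8
FULL_CHARGE = 1.0
THRESHOLD = 0.5
LEAK_PER_ACTIVATION = 0.0001   # invented per-activation disturbance

def hammer(initial_charge, target_row, activations):
    """Repeatedly activate target_row, draining its adjacent rows."""
    charge = list(initial_charge)
    for _ in range(activations):
        for neighbor in (target_row - 1, target_row + 1):
            if 0 <= neighbor < ROWS:
                charge[neighbor] -= LEAK_PER_ACTIVATION
    return charge

charge = hammer([FULL_CHARGE] * ROWS, target_row=3, activations=6000)
flipped = [row for row, c in enumerate(charge) if c < THRESHOLD]
print(flipped)  # rows 2 and 4, the neighbors of the hammered row 3
```

This also hints at why the attack must run fast: the hammering has to drain a neighbor below threshold within one refresh interval, before the periodic refresh restores its charge.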
Although this was once thought to be an issue only with DDR3 DRAMs, recent papers (listed on the DDR Detective) show that DDR4 also suffers from Row Hammer issues, even though DRAM makers took pains to prevent it.
One big champion of publicizing this phenomenon is Barbara Aichinger of FuturePlus Systems, a test equipment maker that specializes in detecting Row Hammer issues. The Memory Guy has had the pleasure of talking with her about this issue and learning first-hand the kinds of difficulties it creates.
How does Row Hammer work? It stems from the fact Continue reading
The post in the Hackaday blog, written by Al Williams, covers drum memories, the Williams Tube and its competitor the Selectron (both briefly discussed in my earlier 3D XPoint post), mercury delay lines, dekatrons, core memory (the original Storage Class Memory), plated wire memory, twistor memory, thin-film memory, and bubble memory.
It also links to interesting videos about these devices.
Think of this as a companion piece to the EE Times memory history slideshow I covered in an earlier post. It’s a fun and educational read!
Naturally, the first question is: “How do they do that?”
To get all the chips into the DIMM format Samsung uses TSV interconnects on the DRAMs. The module’s 36 DRAM packages each contain four 8Gb (1GB) chips, for a total of 144 DRAM chips squeezed into a standard DIMM format. Each package also includes a data buffer chip, making each stack closely resemble High Bandwidth Memory (HBM) or the Hybrid Memory Cube (HMC).
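The chip count and raw capacity are easy to verify from the figures above:

```python
# Checking the arithmetic from the description above:
packages_per_dimm = 36
chips_per_package = 4    # TSV-stacked dies per package
gb_per_chip = 1          # each die is 8 Gb = 1 GB

total_chips = packages_per_dimm * chips_per_package
raw_capacity_gb = total_chips * gb_per_chip
print(total_chips, raw_capacity_gb)  # 144 chips, 144 GB raw
```

On a conventional 72-bit-wide ECC module, one ninth of that raw capacity would carry ECC check bits rather than data; that split is my assumption about a typical ECC RDIMM layout, not something stated in the post.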
Since these 36 packages (or worse, 144 DRAM chips) would overload the processor’s address bus, the DIMM uses the RDIMM protocol: the address and control pins are buffered on the DIMM before they reach the DRAM chips, cutting the processor bus loading by an order of magnitude or more. RDIMMs are supported by certain server platforms.