IBM Jumps on the “New Memory” Bandwagon

IBM's 3-Bit PCM Read Algorithm

At a technical conference hosted by the IEEE this week, IBM announced the results of nearly a decade of research in which its scientists have been investigating the emerging technology known as “Phase Change Memory” (PCM).  The scientists presented a means of successfully storing three bits per cell for the first time, while also addressing all of PCM’s challenging idiosyncrasies, including resistance drift and temperature drift.

Commonly referred to by the erroneous nickname “TLC”, for Triple Level Cell, this technology squeezes three bits of data into the space of a single bit, essentially cutting the cost per gigabyte to about one third of that of a standard memory chip and making it closer in cost to flash.
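
To put the idea in concrete terms, here is a minimal sketch with placeholder numbers that are not IBM's figures: each additional bit per cell doubles the number of resistance levels the read circuitry must distinguish, while the cost per gigabyte falls roughly in proportion to the bits stored per cell.

```python
# Illustrative sketch only -- placeholder numbers, not IBM's data.
BITS_PER_CELL = 3
levels = 2 ** BITS_PER_CELL          # a 3-bit cell must resolve 8 resistance levels
print(f"{BITS_PER_CELL} bits/cell requires {levels} distinguishable resistance levels")

# If a 1-bit-per-cell chip costs X per gigabyte, the same die storing
# 3 bits per cell costs roughly X/3 per gigabyte.
single_bit_cost_per_gb = 3.00        # assumed placeholder price, $/GB
tlc_cost_per_gb = single_bit_cost_per_gb / BITS_PER_CELL
print(f"~${tlc_cost_per_gb:.2f}/GB vs ${single_bit_cost_per_gb:.2f}/GB at 1 bit/cell")
```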

With this step IBM expects to help drive a new memory layer into existence, one that will fit between the cheap and slow NAND flash used in SSDs and the fast but expensive DRAM used for main memory.  Such a layer would improve the cost/performance of all types of Continue reading

Putting DRAM Prices in Perspective

DRAM Low Spot Pricing 2011-2016

For almost two years there has been a lot of worry about DRAM spot prices.  This post’s graphic plots the lowest weekly spot price per gigabyte for the cheapest DRAM, regardless of density, on a semi-logarithmic scale.  (Remember that on a semi-logarithmic scale constant growth appears as a straight line.)

The downward-sloping red line on the right side of the chart shows that DRAM prices have been sliding at a 45% annual rate since October 2014.  This has a lot of people worried about the health of the industry.
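
As a quick illustration of why a constant percentage decline traces a straight line on a semi-logarithmic chart, here is a minimal sketch; the 45% rate comes from the chart, but the starting price is an arbitrary placeholder rather than an actual spot quote.

```python
import math

# Minimal sketch: a constant 45% annual decline falls by the same amount of
# log10(price) every year, which is a straight line on a semi-log chart.
p0 = 10.0                # assumed starting price, $/GB (placeholder)
annual_decline = 0.45

for year in range(5):
    price = p0 * (1 - annual_decline) ** year
    print(f"year {year}: ${price:5.2f}/GB   log10(price) = {math.log10(price):+.3f}")
# log10(price) drops by a constant ~0.26 per year -- a straight line on a semi-log scale.
```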

What most fail to remember, though, is that DRAM spot prices bottomed out twice in 2011, at $2.40 in August, and then $2.20 in November.  Today’s lowest DRAM spot prices have only recently dipped below the $2.52 point hit in October of 2014.

The black dotted line in the chart is intended to focus readers’ attention on DRAM costs, which decrease at a 30% average Continue reading

What is DRAM “Row Hammer”?

Barbara Aichinger, FuturePlus Systems

One of those nasty little secrets about DRAM is that bits may get corrupted by simply reading the bits in a different part of the chip.  This has been given the name “Row Hammer” (or Rowhammer) because repeated accesses to a single one of the DRAM’s internal “rows” of bits can bleed charge off of the adjacent rows, causing bits to flip.  These repeated accesses are referred to as “hammering”.
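
As a rough illustration of the mechanism, here is a toy model, not a description of real DRAM circuits: it simply assumes that every activation of an “aggressor” row adds a little disturbance to its physical neighbors, and that a victim bit flips if enough disturbance accumulates before the next refresh.  The threshold and refresh figures are invented placeholders.

```python
# Toy model of the mechanism -- not real DRAM behavior.
ROWS = 8
FLIP_THRESHOLD = 50_000      # assumed activations needed to flip a neighboring bit
REFRESH_EVERY = 64_000       # assumed activations between refreshes

disturbance = [0] * ROWS

def hammer(row, count):
    """Repeatedly activate one row and accumulate disturbance on its neighbors."""
    for i in range(count):
        for victim in (row - 1, row + 1):
            if 0 <= victim < ROWS:
                disturbance[victim] += 1
        if (i + 1) % REFRESH_EVERY == 0:     # periodic refresh restores the charge
            for r in range(ROWS):
                disturbance[r] = 0

hammer(row=3, count=60_000)
flipped = [r for r, d in enumerate(disturbance) if d >= FLIP_THRESHOLD]
print("rows with flipped bits:", flipped)    # [2, 4] -- the hammered row's neighbors
```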

Although this was once thought to be an issue only with DDR3 DRAMs, recent papers (listed on the DDR Detective) show that DDR4 also suffers from Row Hammer issues, even though DRAM makers took pains to prevent it.

One of the most prominent voices calling attention to this phenomenon is Barbara Aichinger (pictured) of FuturePlus Systems, a test equipment maker that specializes in detecting Row Hammer issues.  The Memory Guy has had the pleasure of talking with her about this issue and learning first-hand about the kinds of difficulties it creates.

How does Row Hammer work?  It stems from the fact Continue reading

XMC Breaks Ground for 3D NAND Fab

2015 XMC campus

China foundry XMC has broken ground for its new 3D NAND flash fab, the country’s first China-owned 3D NAND flash facility.  Plans for this fab were publicly disclosed over a year ago.  Simon Yang, XMC’s CEO, gave a presentation at SEMI’s Industry Strategy Symposium (ISS) on January 11, 2015, in which he detailed the need for China to produce a larger proportion of the chips it consumes, explaining how his company would help make that happen.

Yang used the map in this post’s graphic to show that XMC has enough land on its campus for six 300mm wafer fabs.  Two shells (yellow), each capable of processing 30,000 wafers per month, had been constructed by that time: Fab A (left) was already fully utilized, and Fab B (right) was ready for tooling.  The gray boxes show that the site has enough space to build two additional two-line megafabs, each with a capacity of up to 100,000 wafers per month.  According to DRAMeXchange, XMC currently produces 20,000 wafers of NOR flash per month.  A March 30 China Daily article reports that monthly wafer production will reach 300,000 in 2020 and 1 million in 2030.

XMC’s formal name is Wuhan Xinxin Semiconductor Manufacturing, and it is located Continue reading

Toshiba Restructuring: New 3D Fab Coming

Toshiba Yokkaichi Fab Complex

Beleaguered Toshiba finally unveiled its restructuring plan on Friday.  The plan aims to return the company to profitability and growth through management accountability.

A lot of the presentation focused on the memory business, a shining star of the Toshiba conglomerate, which has so far included appliances, nuclear power plants, and medical electronics.

Toshiba has big plans for its Semiconductor & Storage Products Company, calling it “A pillar of income with Memories as a core business”.  The company plans to enhance its NAND flash cost competitiveness by accelerating development of BiCS (Toshiba’s 3D NAND technology) and by expanding its SSD business.   There are three parts to this effort:

  1. Grow 3D NAND production capacity
  2. Speed up 3D NAND development
  3. Increase SSD development resources

This post’s graphic is an Continue reading

Goodbye, Andy Grove

Andy Grove - Only the Paranoid Survive

It was sad to hear today of the passing of Andy Grove, Intel co-founder and former president.

Although I did not know him well, Andy was a part of my brief 1½-year stint at Intel in the early 1980s.  He played a key role in my “IOPEC” new employee training, and he and I were in cubicles on the same floor of the same Intel office building, so we would run into each other from time to time during the business day.

Plenty has been said about this man’s competence as a manager, and plenty more will be said.  He drove the creation of the world’s leading semiconductor manufacturer.

I think I was most impressed, though, when he agreed to be interviewed for a PBS television special on the history of the semiconductor industry: “Silicon Valley: American Experience” despite the fact that his battle with Parkinson’s Disease had already rendered it difficult for him to speak.

I always meant to write to him to tell him how impressed I was that he would do that.  I guess I won’t have the chance now.

Early Computer Memories

Ryszard Milewicz' photo of Core Memory

My colleague Lane Mason found an interesting blog post on the history of memories that answers the question: “What did early computers use for fast read/write storage?”

The post in the Hackaday blog, written by Al Williams, covers drum memories, the Williams Tube and its competitor the Selectron (both briefly discussed in my earlier 3D XPoint post), mercury delay lines, dekatrons, core memory (the original Storage Class Memory), plated wire memory, twistor memory, thin-film memory, and bubble memory.

It also links to interesting videos about these devices.

Think of this as a companion piece to the EE Times memory history slideshow I covered in an earlier post.  It’s a fun and educational read!

A 1T SRAM? Sounds Too Good to be True!

Zeno 1T SRAM

At the IEEE’s International Electron Device Meeting (IEDM) in December a start-up named Zeno Semiconductors introduced a 1-transistor (1T) SRAM.  Given that today’s SRAMs generally use between six and eight transistors per bit, this alternative promises to squeeze the same amount of SRAM into a space 1/6th to 1/8th the size of current SRAM designs, leading to significant cost savings.
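
As a back-of-the-envelope sketch of the density argument (my own arithmetic, not Zeno's figures), assume cell area scales roughly with transistor count:

```python
# Back-of-the-envelope arithmetic (mine, not Zeno's figures): if cell area
# scales roughly with transistor count, a 1T cell needs about 1/6 to 1/8 the
# area of a conventional 6T or 8T SRAM cell.
for transistors in (6, 8):
    relative_area = 1 / transistors
    print(f"{transistors}T -> 1T: ~{relative_area:.0%} of the cell area, "
          f"or ~{transistors}x the bits in the same silicon")
```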

The device is really a single standard NMOS transistor that behaves as if it were two bipolar transistors connected into something like a flip-flop, although the transistors’ bases are left open rather than cross-coupled to the opposite transistor’s collector, as they would be in a standard flip-flop.

The cell is selected by activating the gate, and the bit is set or sensed via the source and drain to provide a differential signal.

This is a decidedly clever departure from standard SRAM configurations, and it reflects a careful observation of the actual Continue reading

Crossbar or Crosspoint?

Computing Crossbar Switch

The Memory Guy has recently run across a point of confusion between two very similar terms: Crossbar and Crosspoint.

A crosspoint memory is a memory where a bit cell resides at every intersection of a wordline and a bitline.  This is the smallest that a memory array can be made.  Think of the wordlines and bitlines as the wires in a window screen.  If there’s a bit everywhere they cross, then it’s a crosspoint memory.

In most cases a crossbar is a communication path in a computing system.  (Of course, there are exceptions, the main one being a company, Crossbar Inc., that is developing a crosspoint memory technology!) A crossbar communication path is topologically similar to a crosspoint, but its function is to connect a number of memory arrays to a number of processors.  Visualize a vertical column of memory arrays named A, B, C… and a horizontal row of processors named 1, 2, 3… as is illustrated in this post’s graphic.  The crossbar can connect Processor 1 to Memory A, or to any other memory that is not already connected to another processor.  These connections are represented by the circles in the diagram.  You can see that this is an efficient way to allow processors to share a memory space to achieve very high speed inter-processor communications.
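
To make the connection rule concrete, here is a minimal sketch of that behavior; the class and method names are purely illustrative, not any product's API.

```python
# Minimal sketch of the connection rule described above.
class Crossbar:
    def __init__(self, processors, memories):
        self.processors = list(processors)
        self.memories = list(memories)
        self.connections = {}            # memory -> processor currently using it

    def connect(self, processor, memory):
        """Close the switch at this (processor, memory) intersection -- one of
        the circles in the diagram -- if the memory is not already taken."""
        if memory in self.connections:
            return False                 # memory already connected to another processor
        self.connections[memory] = processor
        return True

    def release(self, memory):
        self.connections.pop(memory, None)

xbar = Crossbar(processors=[1, 2, 3], memories=["A", "B", "C"])
print(xbar.connect(1, "A"))   # True  -- Processor 1 gets Memory A
print(xbar.connect(2, "A"))   # False -- Memory A is busy
print(xbar.connect(2, "B"))   # True  -- Processor 2 gets Memory B
```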

Crossbars are quite likely to Continue reading

Samsung’s Colossal 128GB DIMM

Samsung 128GB TSV RDIMM

In a November 25 press release Samsung introduced a 128GB DDR4 DIMM.  This is eight times the density of the largest broadly-available DIMM and rivals the full capacity of mainstream SSDs.

Naturally, the first question is: “How do they do that?”

To get all the chips into the DIMM format Samsung uses TSV interconnects on the DRAMs.  The module’s 36 DRAM packages each contain four 8Gb (1GB) chips, resulting in 144 DRAM chips squeezed into a standard DIMM format.  Each package also includes a data buffer chip, making the stack very closely resemble either the High-Bandwidth Memory (HBM) or the Hybrid Memory Cube (HMC).
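
The capacity arithmetic behind those figures can be checked with a quick sketch; the package and chip counts come from the post, while the interpretation of the extra 16GB as ECC storage is my assumption about a standard ECC RDIMM layout.

```python
# Capacity check: package and chip counts are from the post; treating the
# extra 16GB as ECC storage is an assumption about a standard ECC RDIMM layout.
packages = 36
chips_per_package = 4
gb_per_chip = 1                                   # an 8Gb DRAM die holds 1GB

total_chips = packages * chips_per_package        # 144 DRAM chips
raw_capacity_gb = total_chips * gb_per_chip       # 144GB of raw DRAM

usable_gb = 128                                   # advertised module capacity
assumed_ecc_gb = raw_capacity_gb - usable_gb      # 16GB, consistent with 8+1 ECC

print(f"{total_chips} chips, {raw_capacity_gb}GB raw = "
      f"{usable_gb}GB usable + {assumed_ecc_gb}GB assumed ECC")
```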

Since these 36 packages (or worse, 144 DRAM chips) would overload the processor’s address bus, the DIMM uses an RDIMM protocol – the address and control pins are buffered on the DIMM before they reach the DRAM chips, cutting the processor bus loading by an order of magnitude or more.  RDIMMs are supported by certain server platforms.

The Memory Guy asked Samsung whether Continue reading

Contact

Jim Handy
Objective Analysis
Memory Market Research
+1 (408) 356-2549
Jim.Handy (at) Objective-Analysis.com
