Memory Sightings at ISSCC

This week the International Solid-State Circuits Conference (ISSCC) was held in San Francisco.  What was there?  The Memory Guy will tell you!

NAND Flash

There were three NAND flash papers, one each from Toshiba, Samsung, and Western Digital Corp. (WDC).

Toshiba described a 96-layer QLC 1.33 terabit chip.  Like the chip that Toshiba presented last year, this one uses CUA, which Toshiba calls “Circuit Under Array” although Micron, who originated the technology, says that CUA stands for “CMOS Under Array.”  Toshiba improved the margins between the cells by extending the gate threshold ranges below zero, a move that forced them to re-think the sense amplifiers.  They also implemented a newer, faster, lower-error way to program the cells.  The developers made a number of speed-related enhancements, including the option of allowing pages to be configured as SLC, TLC, or QLC.  Toshiba compared this chip to a similar chip later presented by its partner Western Digital, pointing out that this one had a higher density despite its lower layer count (96 vs. 128 layers), largely due to its use of QLC rather than the TLC approach used by the chip that WDC presented.  The WDC chip was designed for speed.
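
To see why the move to QLC squeezes margins in the first place, it helps to count threshold states: each extra bit per cell doubles the number of threshold-voltage levels that must fit into roughly the same window.  The sketch below is just that back-of-the-envelope arithmetic; the voltage window is an illustrative placeholder, not a figure from Toshiba’s paper.

```python
# Back-of-the-envelope: threshold-voltage states per cell for SLC/TLC/QLC.
# The usable voltage window below is an illustrative assumption, not a
# number from the Toshiba paper.
V_WINDOW = 6.0  # assumed usable threshold window, in volts

for name, bits in [("SLC", 1), ("TLC", 3), ("QLC", 4)]:
    levels = 2 ** bits            # distinct Vt states the cell must hold
    spacing = V_WINDOW / levels   # average room per state, before margins
    print(f"{name}: {bits} bit(s)/cell -> {levels:2d} states, "
          f"~{spacing:.2f} V per state")
```

Moving from TLC’s 8 states to QLC’s 16 halves the room available to each state, which is why steps like pushing thresholds below zero (and the sense-amplifier rework that entails) are worth the trouble.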

Samsung presented a 512Gb TLC part that had “more than 110 layers,” but the company’s management forbade the speaker from disclosing the actual layer count.  This is something peculiar to Samsung, which originated the practice of not disclosing process geometries, choosing instead to use terms like “20nm-class” to denote its 27nm planar NAND.  Samsung has been working on thinning the chip’s layers to avoid having to adopt the string stacking approach already embraced by its competition.  This might have prevented Samsung from implementing a full 128 layers in this chip, which would explain the company’s reluctance to specify its layer count.  The key focus of this design was to operate at the very high speeds required by the Toggle 4.0 interface, and it did a good job of that, with a 1.2Gb/s I/O bandwidth, an 83MB/s program throughput, and a 45 microsecond read time.  This was achieved through a number of optimizations: improved bitline precharge techniques, reduced wordline capacitive coupling, new Toggle Mode interface techniques, and even modified erase sequences to compensate for the fact that the string column’s diameter shrinks from the top of the column to its bottom.  Interestingly, this chip has the least efficient layout of any that The Memory Guy has seen to date, with only 13 million columns per square millimeter, half that of the densest chips.
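
Those last figures invite a rough sanity check.  If each column holds one TLC cell per wordline layer, the quoted column density implies an array bit density in the low single-digit gigabits per square millimeter.  The sketch below makes that estimate explicit; it ignores select gates, dummy wordlines, and spare capacity, so treat it as an order-of-magnitude figure rather than anything taken from the paper.

```python
# Rough array-density estimate from the figures quoted above.  Assumes every
# one of the ">110" layers stores one TLC cell per column; select gates,
# dummy wordlines, and spares are ignored, so this is only an order of
# magnitude.
columns_per_mm2 = 13e6   # columns per square millimeter (from the talk)
layers = 110             # "more than 110" layers; use the lower bound
bits_per_cell = 3        # TLC

bits_per_mm2 = columns_per_mm2 * layers * bits_per_cell
print(f"~{bits_per_mm2 / 1e9:.1f} Gb of array per mm^2")  # roughly 4.3 Gb/mm^2
```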

Western Digital presented a chip different from its partner Toshiba’s: a 512Gb density using 128 layers and TLC.  The speaker made it a point to repeatedly tell the audience that the design had the highest layer count ever announced, which worked to Samsung’s disadvantage since Samsung chose not to reveal its layer count.  The chip was divided into four planes to double performance, an approach that would have penalized the die area by about 15% had the designers not used the CUA approach mentioned above.  By slipping the logic beneath the array the available area for the logic becomes enormous, and the die area penalty for moving to four planes was reduced to less than 1%.  When power planes and the bitlines are also allowed to move under the array the chip can run at higher speeds thanks to reduced capacitance and resistance.  As a result, WDC achieved a 132MB/s program throughput, roughly 60% more than the Samsung chip’s 83MB/s.  The WDC chip also uses a technique the company calls “multi-chip Peak Power Management,” or mPPM, to manage power in a multi-die stack and improve write times by as much as 47%, and it accesses data using a smaller 4kB page (vs. the industry-standard 16kB) to limit peak currents.
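
The appeal of four planes is easy to see with an idealized throughput model: if each plane can program a page independently, sustained write bandwidth scales with the plane count for a given page-program time.  The page size and program time below are placeholder values chosen only to make the arithmetic land in a plausible TLC range; they are not WDC’s actual figures.

```python
# Idealized multi-plane program throughput: N planes programming in parallel.
# page_kb and t_prog_us are illustrative placeholders, not WDC's numbers.
def program_throughput_mb_s(planes, page_kb=16, t_prog_us=500):
    """Sustained MB/s when `planes` pages are programmed concurrently."""
    bytes_per_program = planes * page_kb * 1024
    return bytes_per_program / (t_prog_us * 1e-6) / 1e6

for planes in (1, 2, 4):
    print(f"{planes} plane(s): ~{program_throughput_mb_s(planes):.0f} MB/s")
```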

DRAM

Only two DRAM makers presented papers at the conference.  Given that there are only three major DRAM manufacturers, perhaps that shouldn’t be a surprise.

Samsung’s DRAM presentation was unusual.  Most ISSCC papers detail the approach the designers used to achieve a superior chip design.  The Samsung paper was instead a detailed description of LPDDR5 and how it works, with only five of the presentation’s 36 slides providing any information about the actual chip.

SK hynix presented a more traditional ISSCC paper about its 16Gb DDR5 chip implemented in the company’s 1Ynm process.  The speaker underscored that this is the first announcement of a DDR5 chip, which is interesting since JEDEC’s website tells us that the DDR5 specification is still in development.  The presentation focused on the high-speed tricks that the developers used to attain DDR5’s high bandwidth, including the use of a phase rotator for the phase-locked loop (PLL), an Rx (receiver) equalizer, quarter-rate feed-forward equalization (FFE), and an on-chip bit-error-rate tester (BERT).  SK hynix’s chip appears to be a pretty tight layout at 76mm² for 16Gb, which is laudable given all of the elaborate circuitry that the designers needed to match DDR5’s 6.4Gb/s/pin speed.
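
For a sense of what 6.4Gb/s/pin means at the module level, a standard 64-bit DIMM data bus at that rate works out to roughly 51GB/s (DDR5 splits the DIMM into two independent 32-bit subchannels, but the aggregate arithmetic is the same).  The quick calculation below is just that generic arithmetic, nothing specific to the SK hynix chip.

```python
# Quick DDR5 bandwidth arithmetic: per-pin rate to per-DIMM throughput.
pin_rate_gbps = 6.4    # Gb/s per data pin, as quoted for the SK hynix part
data_pins = 64         # non-ECC DIMM data width (two 32-bit subchannels)

dimm_gb_per_s = pin_rate_gbps * data_pins / 8   # bits -> bytes
print(f"~{dimm_gb_per_s:.1f} GB/s per DIMM")    # about 51.2 GB/s
```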

SK hynix presented another DRAM solution, one that had not been fully developed, a status that is outside the norm for an ISSCC paper.  The company presented a high-density DIMM that is an alternative to the high-cost approach that Samsung used to introduce a 128GB DIMM three years ago.  While Samsung’s DIMM is based on stacks of DRAM chips connected to a base logic die using TSVs (through-silicon vias), the SK hynix “Managed DRAM” approach stacks wire-bonded 2133Mbps 16Gb DRAM chips that are managed by a yet-to-be-developed “Media Controller” logic die to assemble an enormous 512GB DIMM.  Bonding wires introduce delays, so the company has developed a special scheme to match the delays from the nearest to the farthest chip.  The DRAM die has been laid out for minimum area, a decision that could increase error rates, so the Media Controller uses a robust Reed-Solomon error correction (ECC) scheme to repair bit failures in up to 10% of the data.  The ECC slows the DIMM down, but the die-per-wafer count can be increased by 26%, which the speaker explained will cut DIMM costs in half, positioning this DIMM between the cost/performance of DRAM and that of “Existing SCM,” which we could take to mean 3D XPoint.
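
To put the Reed-Solomon claim in perspective: a textbook RS code needs two parity symbols for every symbol error it can correct at an unknown location, so repairing errors in 10% of the payload implies a parity overhead on the order of 20% (half that if the failing locations are already known and can be treated as erasures).  The sketch below shows only that generic arithmetic; it is not a description of SK hynix’s actual code parameters.

```python
# Generic Reed-Solomon correction-capacity arithmetic.  This is a textbook
# property of RS codes, not SK hynix's actual code parameters.
def rs_parity_needed(data_symbols, error_fraction, erasures_known=False):
    """Parity symbols needed to repair `error_fraction` of the data symbols."""
    errors = int(data_symbols * error_fraction)
    return errors if erasures_known else 2 * errors

k = 200  # illustrative payload size, in symbols
print("unknown error locations:", rs_parity_needed(k, 0.10), "parity symbols")        # 40
print("known (erasure) locations:", rs_parity_needed(k, 0.10, True), "parity symbols")  # 20
```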

SRAM

ISSCC’s two SRAM papers were intermingled with papers that used SRAM cells for neural network processing, which was a major theme for the conference.  Both covered embedded SRAM in foundry processes.  It seems that the conference has seen its last discrete SRAM paper.

The first SRAM paper, by TSMC, explained how researchers designed a 7nm dual-port SRAM that resists disturbs that could be caused by asynchronous accesses from its two ports.  The second, a paper by Samsung, suggested that SRAMs had reached a stage where intelligence had to be added to compensate for increasing sensitivity to temperature and voltage variations.  This situation seems poised to worsen as shrinks continue.  Fortunately, costs are declining for the additional logic that will be required to compensate for this trend.  The company’s two approaches to this, called TATA and VATA, are said to provide 15% and 30% speed improvements, respectively, over conventional designs in the 7nm process used for this presentation’s chip.

Emerging Memories

In stark contrast to December’s IEDM, where more emerging memories were discussed than conventional memory types, the coverage of emerging memories at ISSCC was scant.  This is of particular interest to The Memory Guy, since I recently co-authored a thorough analysis of the emerging memory markets.

Quite interestingly, Intel presented two different alternative memory technologies for the company’s 22nm CMOS logic process: ReRAM and MRAM.  The presenter for the MRAM design explained that it has advantages over NOR flash, needing only a small charge pump for wordline boost (as opposed to NOR’s requirement for high voltages at high currents for programming), and offering a lower write error rate, better sensing, and higher reliability than NOR.  Since NOR can’t be produced on a 22nm FinFET process to begin with, it’s hard to understand why the presenter even bothered to list all these other benefits!  The device’s reliability is impressive: no endurance errors were detected after 10⁶ writes, and there were no observed read errors after 10¹² reads.  The downside is that MRAM’s stochastic nature means a bit sometimes requires multiple write attempts before it finally accepts the command and actually switches.  This calls for one or more “Write-Verify-Write” sequences to assure data integrity, a process that slows down MRAM writes and makes their timing non-deterministic.
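
That write-verify-write flow is easy to picture as a loop: pulse the cell, read it back, and repeat until the bit lands or a retry budget runs out.  The sketch below is a generic illustration of that loop, not Intel’s controller logic; the switching probability, retry limit, and toy cell model are all made-up assumptions.

```python
import random

# Generic write-verify-write loop for a stochastic bit (e.g. an MRAM cell).
# The switching probability, retry limit, and toy cell model are
# illustrative assumptions, not details from Intel's paper.
P_SWITCH = 0.9      # assumed chance that a single pulse flips the cell
MAX_RETRIES = 8     # assumed retry budget before declaring a write failure

def write_bit(target, read_back, write_pulse):
    """Pulse, verify, and re-pulse until the cell holds `target`."""
    for attempt in range(1, MAX_RETRIES + 1):
        write_pulse(target)
        if read_back() == target:
            return attempt          # number of pulses this write needed
    raise RuntimeError("cell failed to switch within the retry budget")

# Toy cell used only to exercise the loop above.
cell = {"value": 0}
pulses = write_bit(
    target=1,
    read_back=lambda: cell["value"],
    write_pulse=lambda v: cell.update(
        value=v if random.random() < P_SWITCH else cell["value"]
    ),
)
print(f"bit written after {pulses} pulse(s)")  # varies run to run: non-deterministic timing
```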

Intel’s ReRAM paper described a ReRAM, a technology distinct from the PCM used in the company’s 3D XPoint Memory.  In the true spirit of ISSCC, the presenter provided great detail about the circuits used to assure correct writes and to verify the programmed bit’s contents.

Texas Instruments gave a presentation, which I was unable to attend, on the company’s latest FRAM-based microcontrollers.

Neural Networks and AI

I wouldn’t be acknowledging a major theme of the conference if I didn’t mention the numerous AI and Neural Network papers that made it into the show.  This is very much a sign of the times.  Two of the four keynotes that kicked off the conference covered this topic, and the first, delivered by a researcher from Facebook, pointed out that this was the third time over the course of his career that interest in the topic had surged.  This leaves open the possibility that, once again, the surge will die off.  If it doesn’t, though, there will certainly be a large number of solutions to choose from!

The main reason that I am including these here, though, is that neural network chips are very similar to standard memory chips, and it is highly likely that they will be manufactured by memory suppliers if they gain a following.  Time will tell.

Want to Know More?

One key difference between Objective Analysis and other market research firms is our analysts’ depth and understanding of these arcane subjects.  How will they impact your company?  What must you do to stay ahead?  We can help answer these questions.  Send us an e-mail or call us to see how we can help your company filter through all of this information to plan out your best strategy for success.
