The Memory Guy recently encountered some stories in the press about “UltraRAM,” the name for a new type of NVRAM developed by researchers at Lancaster University in the UK. These researchers published one paper last June in Nature: Room-temperature Operation of Low-voltage, Non-volatile, Compound-semiconductor Memory Cells, and another just this month in the IEEE’s Transactions on Electron Devices: Simulations of Ultralow-Power Nonvolatile Cells for Random-Access Memory.
According to the papers, the new memory exploits the quantum properties of a triple-barrier Resonant Tunneling (RT) structure to produce a nonvolatile memory that can be either read or written with low voltages. This RT structure, illustrated in this post’s graphic, is produced by using elements from the third and fifth columns of the periodic table, a combination called “III-V” that is common in optical devices like LEDs and high-efficiency solar cells, and is also widely used in microwave electronics. Gallium Arsenide (GaAs) is one of the most common III-V semiconductors. III-V wafers are known for their high manufacturing cost. The Lancaster researchers assert that: “Due to the large (2.1 eV) barrier, the intrinsic (thermal excitation) electron storage time of our InAs/AlSb system was predicted to exceed substantially the age of the Universe.” That’s certainly a noteworthy claim!
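That age-of-the-Universe claim is easy to sanity-check with a back-of-the-envelope Arrhenius estimate. The sketch below uses the 2.1 eV barrier quoted in the paper; the attempt period tau0 is my own assumed value (~1e-13 s is a typical order of magnitude), not a figure from the Lancaster work.

```python
import math

# Arrhenius-style estimate of thermal-emission storage time:
#   tau = tau0 * exp(Ea / kT)
# Ea = 2.1 eV is the barrier height quoted in the paper; tau0 (the
# inverse attempt frequency) is an assumed illustrative value.
K_B_EV = 8.617e-5          # Boltzmann constant, eV/K
TAU0_S = 1e-13             # assumed attempt period, seconds

def storage_time_s(barrier_ev: float, temp_k: float = 300.0) -> float:
    """Thermal-excitation retention time in seconds (rough sketch)."""
    return TAU0_S * math.exp(barrier_ev / (K_B_EV * temp_k))

tau = storage_time_s(2.1)
age_of_universe_s = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 s
print(f"tau = {tau:.1e} s, about {tau / age_of_universe_s:.0e}x "
      "the age of the Universe")
```

With these assumptions the estimate lands around 1e22 seconds, comfortably past the ~4e17-second age of the Universe, so the claim is at least dimensionally plausible.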
I looked through these two articles to see what the papers propose. It seems that the notion of “triple-barrier resonant tunneling” is not new, but that it has not previously been applied to memory chips. Academia’s growing understanding of materials science has provided a new way to move electrons onto and off of a floating gate or a charge trap that is better than the methods currently used in flash memory. Readers may recall that flash memory’s wear stems from the high voltages necessary to induce Fowler-Nordheim Tunneling or Hot Electron Injection, the two mechanisms that flash uses to force electrons through the tunnel dielectric. If those high voltages can be avoided, then wear to the tunnel dielectric can be avoided as well, and without that wear the data is far less likely to leak off the floating gate, thereby improving data retention.
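The reason flash can’t simply dial its voltages down is the extreme field sensitivity of Fowler-Nordheim tunneling. The simplified FN expression below illustrates the point; the barrier height and prefactors are textbook Si/SiO2 values used for illustration, not parameters from any particular flash process.

```python
import math

# Simplified Fowler-Nordheim tunneling current density:
#   J = (A / phi) * E^2 * exp(-B * phi^1.5 / E)
# phi: barrier height in eV (~3.1 eV for Si/SiO2); E: oxide field in V/cm.
# Constants are standard FN prefactors; this is an illustrative sketch,
# not a calibrated device model.
A_FN = 1.54e-6      # A*eV/V^2
B_FN = 6.83e7       # V/(cm*eV^1.5)

def fn_current_density(field_v_per_cm: float, phi_ev: float = 3.1) -> float:
    """FN tunneling current density in A/cm^2 (rough sketch)."""
    return (A_FN / phi_ev) * field_v_per_cm**2 * math.exp(
        -B_FN * phi_ev**1.5 / field_v_per_cm)

# Halving the field collapses the current by many orders of magnitude,
# which is why flash programming demands such damaging voltages.
j_hi = fn_current_density(1.0e7)   # ~10 MV/cm, a typical program field
j_lo = fn_current_density(0.5e7)   # half that field
print(f"J(10 MV/cm) / J(5 MV/cm) = {j_hi / j_lo:.1e}")
```

The exponential term dominates: cut the field in half and the tunneling current drops by well over a dozen orders of magnitude, so programming in any practical time forces the high fields that damage the tunnel dielectric. A mechanism like resonant tunneling, which moves charge at low voltage, sidesteps that trade-off entirely.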
Not only can the Lancaster team’s approach eliminate flash wear, but it also promises to allow DRAMs to become nonvolatile, thereby eliminating two energy-wasting characteristics that afflict DRAM technology: Refresh and Destructive Read.
But the new technology suffers from the same issue that many emerging memories do – it uses chemical elements that are not already found in a standard CMOS process, so the transition from DRAM’s standard CMOS to something with III-V will motivate DRAM makers to put off using it as long as possible. If DRAM makers eventually adopt this technology, though, it might allow them to move beyond the inevitable DRAM scaling limit, whenever that limit is finally encountered. Although many industry insiders have been predicting for the past decade or longer that DRAM would soon encounter its scaling limit, the limit has progressively been pushed out thanks to some incredibly brilliant innovations.
The Lancaster team’s technology is less likely to replace NAND flash, which already encountered its scaling limit at 15nm, after which the industry adopted 3D as a work-around. Had the RT approach been devised 15 years ago, before NAND manufacturers had embraced the 3D NAND concept, it could have allowed planar NAND to move past its 15nm scaling limit. But since the NAND market has already migrated to 3D, it is unlikely that the RT cell can be used to improve NAND flash for two simple reasons:
- The layers in the new device are grown epitaxially (i.e. as crystals), whereas 3D NAND is manufactured using amorphous (non-crystalline) layers
- The new technology uses multiple layers to construct the triple barrier, as is represented by the three thin layers in the middle of the diagram. A 3D NAND column’s diameter is determined by the number of layers, so increasing the layer count would cause the column diameter to increase, and this would either increase the die size or reduce the number of columns (and bits) that would fit into the memory array. Either of these would increase cost per bit, preventing the new technology from meeting 3D NAND’s cost structure.
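The second point is simple geometry, and a quick sketch makes the cost impact concrete. All of the numbers below are illustrative assumptions, not process data from any 3D NAND maker.

```python
# Rough cost-per-bit sensitivity to 3D NAND column diameter.
# If the triple-barrier stack widens each column, the in-plane column
# pitch grows with it, and array area per bit scales roughly with the
# pitch squared. Numbers here are illustrative assumptions only.

def area_penalty(base_diam_nm: float, extra_nm: float) -> float:
    """Relative increase in array area (and roughly cost per bit)."""
    return ((base_diam_nm + extra_nm) / base_diam_nm) ** 2

# Example: adding 20 nm of extra barrier layers to a 120 nm column.
penalty = area_penalty(120, 20)
print(f"{(penalty - 1) * 100:.0f}% more area per bit")
```

Even a modest-sounding diameter increase compounds quadratically in the array footprint, which is why anything that thickens the column stack struggles to match 3D NAND’s cost structure.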
How does this technology compare against other emerging technologies? Well, this is something that is easy for me to evaluate since I recently co-authored a report on emerging memories with Tom Coughlin. This report details eight fundamental technology groups, most of which have multiple sub-types, yielding more than twenty new memories all competing for the same market. The Lancaster University technology has the advantage that it doesn’t need a selector, as most other new technologies do, since the cell structure is inherently based on a transistor. Selectors are very hard to get right, as fellow blogger Ron Neale continually reminds us. In addition, all of these emerging technologies have found that it’s extremely difficult to displace currently-entrenched technologies, making it extraordinarily difficult for a new technology to compete against any of them.
Cost is the single most important criterion system designers use when choosing a memory type, and that makes it the biggest challenge facing any new memory technology. Technologies that are already in high-volume production have been fine-tuned for low cost, since high production volumes drive costs down. Low cost, in turn, fuels large demand, which drives high production volumes. It’s a chicken-and-egg issue.
So, although this new technology has some compelling technical advantages, its success depends more on its production volume than on any other criterion.
Readers who want to know more about emerging memories and their success factors are welcome to contact Objective Analysis to explore a working relationship.