Something that has really changed over the past few years is the use of an increasing number of voltage levels in multi-level cell flash, from SLC, to MLC, to TLC, and then QLC, with the promise of PLC (5 bits per cell) in the foreseeable future. That means each PLC cell must be able to hold 32 voltage levels accurately, and the NAND chip’s control logic must be able to tell one level from the next. This post’s graphic is intended to illustrate the dramatic difference between the four voltage levels required by MLC and PLC’s 32 levels. TLC (eight levels) and QLC (16 levels) are included to round out the picture.
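To make the scaling concrete, here is a quick sketch of how the level count grows with bits per cell, and how the voltage window available to each level shrinks. The total read-voltage range used below is an illustrative assumption, not a figure from any datasheet:

```python
# Each added bit per cell doubles the number of voltage levels the cell
# must hold, which roughly halves the window available for each level.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}
READ_WINDOW_V = 6.0  # assumed total threshold-voltage range (illustrative)

for name, bits in CELL_TYPES.items():
    levels = 2 ** bits                          # 2^bits distinct levels
    per_level_mv = READ_WINDOW_V / levels * 1e3  # window per level, in mV
    print(f"{name}: {bits} bit(s), {levels:2d} levels, ~{per_level_mv:.0f} mV per level")
```

Whatever the actual voltage range, the pattern is the same: PLC’s 32 levels get only one-eighth the room per level that MLC’s four levels enjoy.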
What has changed to make this large number of voltage levels more feasible today than it has been in the past? In this post The Memory Guy will try to answer this question.
Why Wasn’t There Much QLC Planar NAND?
The ability to successfully produce QLC came as an added benefit of the transition from planar NAND to 3D NAND. Although QLC NAND was first introduced way back in 2006 by mSystems, a company acquired later that year by SanDisk (now a part of Western Digital), and although SanDisk and Toshiba (now Kioxia) eventually sold QLC NAND products in 2009, the technology was never successful with planar NAND because it took so long to qualify QLC operation.
Chips were always first introduced with SLC, and then were qualified to go to MLC. A little later the TLC qualification would be done, and then QLC would be the next step. But by the time QLC in one process node was ready for production the next process node was available, and MLC in the new process node always provided a lower cost per gigabyte than the QLC of the former process node. This removed any incentive to qualify a part for QLC.
It’s easy to understand why qualification was a tricky problem. With planar NAND the area of the floating gate drops dramatically from one process node to the next. For example, when NAND migrated from 55nm to 45nm, the area of the gate shrank by about 100% − (45²/55²) ≈ 33%. By losing 33% of its area it also lost 33% of the electrons that represent a bit or a voltage level. When splitting the electrons in a cell into sixteen different voltage levels that’s a big deal!
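The same arithmetic applies at every planar shrink. The snippet below runs the 55nm-to-45nm calculation from the text across a few historical NAND nodes (node sizes only; the electron loss simply tracks the area loss):

```python
# Floating-gate area scales with the square of the feature size, so each
# planar shrink removes roughly a third of the gate area -- and with it,
# a third of the electrons that encode the voltage levels.
nodes_nm = [55, 45, 34, 25, 20]  # representative planar NAND nodes

for old, new in zip(nodes_nm, nodes_nm[1:]):
    shrink = 1 - (new ** 2) / (old ** 2)
    print(f"{old}nm -> {new}nm: gate area (and electron count) down ~{shrink:.0%}")
```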
How 3D NAND Differs
So why is there so much QLC 3D NAND, and why is there talk of PLC? It’s because in 3D NAND the gates are all about the same size for any number of layers. You don’t have the gate areas being reduced by 33% or more with each new process generation. The gate area is roughly the thickness of the wordline layer times the diameter of the gate (or charge trap) in the vertical column. Neither of these dimensions changes much from one layer count to the next.
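To put the text’s point in numbers, here is a rough comparison of a 3D NAND gate against a late-generation planar floating gate. All dimensions are illustrative assumptions, not vendor figures, and the wraparound gate’s area is taken as wordline thickness times channel circumference:

```python
# A 3D NAND gate wraps around the vertical channel, so its area depends on
# the wordline thickness and channel diameter -- neither of which changes
# with layer count. Dimensions below are assumed for illustration only.
import math

WORDLINE_THICKNESS_NM = 30   # assumed wordline layer thickness
CHANNEL_DIAMETER_NM = 100    # assumed vertical channel diameter
PLANAR_FEATURE_NM = 15       # assumed late planar NAND node

gate_area_3d = WORDLINE_THICKNESS_NM * math.pi * CHANNEL_DIAMETER_NM
gate_area_planar = PLANAR_FEATURE_NM ** 2

print(f"3D NAND gate area:  ~{gate_area_3d:,.0f} nm²")
print(f"Planar gate area:   ~{gate_area_planar:,} nm²")
print(f"Ratio:              ~{gate_area_3d / gate_area_planar:.0f}x")
```

Under these assumptions the 3D gate holds tens of times more electrons than its planar counterpart, which is why slicing the charge into 16 or 32 levels becomes tractable.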
It’s always easier to hit a target that’s standing still, and 3D NAND’s gate area is almost standing still, so getting to QLC suddenly became a reasonable goal. If you could make 48-layer NAND work at TLC then it wasn’t so difficult to get all subsequent layer counts to do the same. If you could do that with 64-layer QLC then all subsequent layer counts were simple. PLC offers the same promise.
Plus, 3D NAND has a much larger gate area to begin with than did the more recent generations of planar NAND. I explained that in a 2018 Memory Guy post.
Controllers Get Credit Too!
Of course, due credit must be given to the increasing sophistication of NAND flash controllers. Memory is not the only technology to benefit from Moore’s Law: CMOS logic increases its transistor count too. This means that a controller at a given price point becomes increasingly sophisticated over time. A portion of this increased sophistication is used to improve error correction capabilities. The controllers that are used with QLC flash include significantly better error correction than those that were introduced with the first MLC flash chips.
What the Future Holds
In another post The Memory Guy explained the diminishing cost returns gained by increasing the number of bits per cell. While there’s no hard limit to the minimum acceptable cost return, there will come a time that the development of one more bit per cell will cost more than can be justified. Nobody wants to spend one million dollars to improve costs by such a small amount that the return is less than one million dollars.
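The diminishing returns are easy to quantify. Moving from n to n+1 bits per cell stores (n+1)/n as many bits in the same cell, so the ideal cost-per-bit saving shrinks at every step (and real savings are smaller still, once extra ECC and slower operation are accounted for):

```python
# Ideal cost-per-bit reduction from each added bit per cell:
# going from n bits to n+1 bits cuts cost/bit by 1 - n/(n+1).
names = {1: "SLC", 2: "MLC", 3: "TLC", 4: "QLC", 5: "PLC"}

for bits in range(1, 5):
    saving = 1 - bits / (bits + 1)
    print(f"{names[bits]} -> {names[bits + 1]}: at best {saving:.0%} lower cost/bit")
```

SLC-to-MLC halved the cost per bit, but QLC-to-PLC can save at most 20%, and a hypothetical sixth bit only 17%.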
In-chip noise is also an issue when dealing with smaller and smaller fractions of the cell’s readout voltage, and the normal way to deal with noise is to slow operations down, allowing signals to settle. This means that both reading and writing will become slower and more error prone, until the chip is either undesirably slow or the required error correction becomes too costly.
That said, it’s impossible to tell when this point will be reached, but my guess is that it will happen very soon. PLC may be the last stop in this progression.
On the other hand, the incredible genius of semiconductor researchers may come up with very innovative means of approaching this problem that take us to six, seven, or even eight bits per cell. Time will tell. This level of genius is one of the things that makes this industry so fascinating!
If you want to gain thorough insights into issues like this that combine technical insight with financial understanding please consider working with Objective Analysis. We specialize in helping bring the future into better focus with our technical, financial, and analytical depth.