Most memory industry participants view emerging memories as the eventual path of the business: There’s no doubt that today’s memory technologies will stop scaling, and that new memory technologies will need to replace today’s leading technologies in both the embedded and stand-alone spaces. This includes DRAM, NAND flash, NOR flash, and SRAM. Because this outlook is held by nearly everyone in the industry, all major memory manufacturers are investing in alternative memory technologies. The leading players are researching multiple technologies at the same time.
Meanwhile, the industry outlook has allowed many university research projects and other similar efforts to gain funding to develop new memory types, spawning a large number of small single-technology companies tightly focused on one technology or another: ReRAM, MRAM, FRAM, and others, including such highly differentiated technologies as carbon nanotubes and printable polymers.
In our Emerging Memory report Tom Coughlin and I did our Continue reading “Emerging Memories Today: Emerging Memory Companies”
Something that distinguishes the Emerging Memory report that Tom Coughlin and I recently published is the depth with which we cover the field. This is not measured in pages, but in the topics that we cover. For example, this blog post, excerpted from the report, covers the changes in tooling that will be necessary to allow a standard CMOS wafer fabrication plant (a “fab”) to produce an emerging memory technology, and the impact that this is likely to have on the market for semiconductor tools.
All of the emerging memory technologies covered in the Memory Guy’s previous post have certain things in common. One is that they are built between metal layers, rather than in the silicon CMOS substrate itself (with the possible exception of hafnium oxide FRAM).
This means that the tooling required for any of these technologies will bear a strong resemblance to that used by any of the others. For the most part these tools will be used for deposition and etch. The lithography requirements will be satisfied by the tools used to pattern the metal layers.
The process flow in this figure sheds some light on the steps that Continue reading “Emerging Memories Today: Process Equipment Requirements”
Here in the US we use an extremely odd expression. If there are multiple varieties of an item we commonly say: “There are more of them than you can shake a stick at!” This is a very lengthy way to say: “numerous.” (I don’t believe that ANYONE understands how that expression became a part of our vernacular!) Although The Memory Guy isn’t normally seen shaking a stick, I find it an apt way to describe the numerous new memory technologies that are being pioneered today. There are certainly lots of them!
This post is intended to be a very high-level technical description of today’s leading emerging memory technologies. These are excerpts from the in-depth descriptions that can be found in our recently-released report: Emerging Memories Poised to Explode.
PCM: Also known as PRAM, Phase-Change Memory technology is based upon a material that can be either amorphous or crystalline at normal ambient temperatures. The crystalline state has a low resistance and the amorphous state has a high resistance. The state is controlled by melting the bit cell with a current passed through it and then allowing it to cool at different rates.
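The set/reset behavior described above can be sketched as a toy behavioral model. The resistance values and the read threshold below are invented placeholders for illustration, not real device parameters:

```python
# Toy behavioral model of a phase-change memory (PCM) cell.
# Resistance values and the read threshold are assumed placeholders,
# not real device parameters.

AMORPHOUS_OHMS = 1_000_000   # high-resistance state (assumed value)
CRYSTALLINE_OHMS = 10_000    # low-resistance state (assumed value)
READ_THRESHOLD_OHMS = 100_000

class PCMCell:
    def __init__(self):
        # Assume the cell starts in the crystalline (low-resistance) state.
        self.resistance = CRYSTALLINE_OHMS

    def program(self, fast_quench: bool) -> None:
        """Melt the cell with a current pulse, then cool it.

        A fast quench freezes the material in the amorphous (high-R)
        state; a slow cool lets it crystallize into the low-R state.
        """
        self.resistance = AMORPHOUS_OHMS if fast_quench else CRYSTALLINE_OHMS

    def read(self) -> int:
        # A read senses resistance with a small current that does not
        # melt the material.
        return 1 if self.resistance < READ_THRESHOLD_OHMS else 0

cell = PCMCell()
cell.program(fast_quench=True)   # RESET: amorphous, high resistance
assert cell.read() == 0
cell.program(fast_quench=False)  # SET: crystalline, slow cool
assert cell.read() == 1
```

The point of the sketch is only that the same melt-and-cool operation yields two distinguishable resistance states, which is what makes the material usable as a memory bit.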
In chemistry and physics, anything with a Continue reading “Emerging Memories Today: The Technologies: MRAM, ReRAM, PCM/XPoint, FRAM, etc.”
With all the new emerging memories that are being developed there must be quite a number of test runs to study exactly how well these new technologies and materials can perform. If a batch of 300mm wafers must be used for a single test then the cost multiplies, particularly if no other test can be run on that wafer.
Another great difficulty is that most memory manufacturers run their wafers in very high-efficiency, high-volume wafer fabs. It is perilous and wasteful to interrupt a production process to inject a batch of test wafers. Most fab managers would rather have a tooth pulled than change their flow to accept an experimental lot.
What can be done to improve this situation?
Well, the folks at Intermolecular, Inc. (IMI) explained to the Memory Guy that they have a solution: They have built a small fab that allows single wafers to be processed with varying parameters across a single wafer. In this way one wafer can be used to run 36 or more different experiments all at the same time. This is clearly more economical than having to run the experiment on 36 wafers or, even worse, 36 batches of wafers! Intermolecular says that, while production fabs are optimized for manufacturing, their fab is optimized for materials understanding.
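The economics are easy to see with a back-of-the-envelope comparison. The wafer cost and lot size below are invented round numbers, not IMI or industry figures:

```python
# Back-of-the-envelope cost comparison of three ways to run 36 material
# experiments. All dollar figures and the lot size are assumed
# placeholders, not real industry numbers.

WAFER_COST = 1_000        # assumed cost to process one 300mm test wafer
WAFERS_PER_BATCH = 25     # assumed production lot size
EXPERIMENTS = 36

# 36 experiment regions patterned across a single wafer (the IMI approach)
combinatorial_cost = WAFER_COST

# One full wafer per experiment
wafer_per_experiment_cost = EXPERIMENTS * WAFER_COST

# One full batch of wafers per experiment
batch_per_experiment_cost = EXPERIMENTS * WAFERS_PER_BATCH * WAFER_COST

print(combinatorial_cost, wafer_per_experiment_cost, batch_per_experiment_cost)
```

Under these assumptions the combinatorial approach is 36 times cheaper than a wafer per experiment, and 900 times cheaper than a batch per experiment, which is the multiplication the post alludes to.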
The firm calls itself an Continue reading “Accelerating New Memory Materials Research”
The previous post in this series (excerpted from the Objective Analysis and Coughlin Associates Emerging Memory report) explained why emerging memories are necessary. Oddly enough, this series will explain bit selectors before defining all of the emerging memory technologies themselves. The reason why is that the bit selector determines how small a bit cell can get, and that is a very significant component of the overall cost of the technology. Cost, of course, is extraordinarily important because no system designer would use a component that would make a system more expensive than it absolutely needs to be!
A number of the Memory Guy’s readers may never have heard of a selector. I’ll explain it here. It’s not complicated.
Every bit cell in a memory chip requires a selector. This device routes the bit cell’s contents onto a bus that eventually makes its way to the chip’s pins, allowing it to be read or written. The bit cell’s technology determines the type of selector that is appropriate: SRAMs use two transistors, DRAMs use one transistor, and flash memories combine a transistor with the Continue reading “Emerging Memories Today: Understanding Bit Selectors”
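The selector's job, gating exactly one cell per bit line onto the shared bus, can be illustrated with a minimal behavioral sketch. This is not circuit-level modeling; the class and method names are invented for illustration:

```python
# Behavioral sketch of selectors gating bit cells onto a shared bus.
# Names and structure are illustrative inventions, not a real design.

class MemoryArray:
    def __init__(self, rows: int, cols: int):
        # One bit cell per (row, column) intersection.
        self.cells = [[0] * cols for _ in range(rows)]

    def read(self, row: int, col: int) -> int:
        # Asserting one word line "selects" a single cell on each bit
        # line; only the selected cell drives the bus, while every
        # unselected cell stays disconnected from it.
        return self.cells[row][col]

    def write(self, row: int, col: int, value: int) -> None:
        # The same selector routes a write back into just one cell.
        self.cells[row][col] = value

mem = MemoryArray(4, 4)
mem.write(2, 3, 1)
assert mem.read(2, 3) == 1   # only the selected cell was written
assert mem.read(0, 0) == 0   # unselected cells are untouched
```

Without a selector per cell, every cell on a bit line would drive the bus at once and the array would be unreadable, which is why the selector's size sets a floor on how small the cell can get.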
Non-silicon memory technologies have been studied for about as long as have silicon-based technologies, but the silicon technologies have always been preferred. Why is that, and why should anything change?
This is a question that The Memory Guy is often asked. The answer is relatively simple.
Silicon memory technologies benefit from the fact that they have always been manufactured on process technologies that are nearly identical to those used to produce CMOS logic, and can therefore take advantage of the advancements that are jointly developed for both memory and logic processes. In fact, before the mid-1980s, logic and memory processes were identical. It wasn’t until then that the memory market grew large enough (over $5 billion/year) that it could support any additional process development on its own.
Even so, memory processes and logic processes are more similar than different. This synergy between memory and logic continues to reduce the process development cost for both memories and logic.
Emerging memories depart from Continue reading “Emerging Memories Today: Why Emerging Memories are Necessary”
There’s never been a more exciting time for emerging memory technologies. New memory types like PCM, MRAM, ReRAM, FRAM, and others have been waiting patiently, sometimes for decades, for an opportunity to carve out sizeable markets of their own. Today it appears that their opportunity is very near.
Some of these memory types are already being manufactured in volume, and the established niches that these chips sell into can provide good revenue. But the market is poised to experience a very dramatic upturn as advanced logic processing nodes drive sophisticated processors and ASICs to adopt emerging persistent memory technologies. Meanwhile, Intel has started to aggressively promote its new 3D XPoint memory for use as a persistent (nonvolatile) memory layer for advanced computing. It’s no wonder that SNIA, JEDEC, and other standards bodies, along with the Linux community and major software firms, are working hard to implement the standards and ecosystems needed to support widespread adoption of the persistent nature of these new technologies.
This post introduces a Continue reading “Emerging Memories Today: New Blog Series”
There has been a lot of discussion in the trade press lately about new memory technologies. This is with good reason: Existing memory technologies are approaching a limit beyond which bits can’t shrink any further, and that limit would put an end to Moore’s Law.
But there is an even more compelling reason for certain applications to convert from today’s leading technologies (like NAND flash, DRAM, NOR flash, SRAM, and EEPROM) to one of these new technologies: the newer technologies all provide considerable energy savings in computing environments.
Objective Analysis has just published a white paper that can be downloaded for free which addresses a number of these technologies. The white paper explains why energy is wasted with today’s technologies and how these new memory types can dramatically reduce energy consumption.
It also provides a Continue reading “Latest White Paper: New Memories for Efficient Computing”
Last year I stumbled upon something on the Internet that I thought would be fun to share. It’s the picture on the left, from a 1978 book by Laurence Allman: Memory Design: Microcomputers to Mainframes. The picture’s not too clear, but it is a predecessor to a graphic of the memory/storage hierarchy that The Memory Guy often uses to explain how various elements (HDD, SSD, DRAM) fit together.
On the horizontal axis is Access Time, which the storage community calls latency. The vertical axis shows cost per bit. The chart uses a log-log format: both the X and Y axes are in orders of magnitude. This allows a straight line to be drawn through the points that represent the various technologies, and prevents most of the technologies from being squeezed into the bottom left corner of the chart.
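The log-log effect is easy to demonstrate numerically. The latency and cost-per-bit figures below are rough modern ballpark assumptions, chosen only to show why the technologies line up on logarithmic axes:

```python
# Why the memory/storage hierarchy plots as a rough line on log-log axes.
# Latency (seconds) and cost-per-bit (dollars) values are rough assumed
# ballparks, not measured figures.
import math

technologies = {
    "SRAM": (1e-9, 1e-4),   # fastest, most expensive per bit
    "DRAM": (1e-7, 1e-6),
    "SSD":  (1e-4, 1e-8),
    "HDD":  (1e-2, 1e-10),  # slowest, cheapest per bit
}

for name, (latency_s, cost_per_bit) in technologies.items():
    # Taking log10 of both axes spreads the points out evenly instead of
    # crushing the slow, cheap technologies into one corner.
    x = math.log10(latency_s)
    y = math.log10(cost_per_bit)
    print(f"{name:4s}  log10(latency) = {x:5.1f}   log10(cost/bit) = {y:6.1f}")
```

On linear axes, SRAM and HDD differ by seven orders of magnitude in speed and six in cost, so every technology but one would sit at the origin; on log-log axes the same points fall near a straight line.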
What I find fascinating about this graphic is not only the technologies that it includes but also the way that it’s presented. First, let’s talk about the technologies.
At the very top we have RAM: “TTL, ECL, and fast MOS static types.” TTL and ECL, technologies that are seldom Continue reading “Storage/Memory Hierarchy 40 Years Ago”
Objective Analysis has just released a new report covering the nonvolatile dual inline memory module (NVDIMM) market in detail. This report, Profiting from the NVDIMM Market, explains the What, How, Why, & When of today’s and tomorrow’s NVDIMM products.
My readers know that I have been watching this market for some time, and that I am always perplexed as to whether to post about NVDIMMs in The Memory Guy or in The SSD Guy, since these products straddle the boundary between memory and storage. This time my solution is to publish posts in both!
The Objective Analysis NVDIMM market model reveals that the market for NVDIMMs is poised to grow at a 105% average annual rate to nearly 12 million units by 2021. This finding is based on a forecast methodology that has provided many of the most consistently accurate forecasts in the semiconductor business. This forecast, and the report itself, were compiled through exhaustive research into the technology and the events leading up to its introduction, vendor and user interviews, and briefings from standards bodies.
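As a sense check of the arithmetic behind that forecast: a 105% average annual growth rate means volumes multiply by 2.05 each year. The four-year window and the implied starting base below are back-calculated assumptions for illustration, not figures from the report:

```python
# Compound-growth arithmetic behind "105% average annual growth to
# nearly 12 million units by 2021". The forecast window (4 years) is an
# assumption; the implied base is back-calculated, not a report figure.

GROWTH_FACTOR = 1 + 1.05   # 105% annual growth -> units x2.05 per year
TARGET_UNITS_2021 = 12_000_000
YEARS = 4                  # assumed window, e.g. 2017 through 2021

implied_base_units = TARGET_UNITS_2021 / GROWTH_FACTOR ** YEARS
print(f"Implied starting volume: ~{implied_base_units:,.0f} units")
```

In other words, under these assumptions the forecast implies a starting volume of well under a million units, consistent with NVDIMMs being a small niche today.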
This 80-page in-depth analysis examines all leading NVDIMM types and forecasts their unit and revenue shipments through 2021. Its 42 figures and 14 tables help Continue reading “New Report Details NVDIMM Market”