In this final part of a five-part series, contributor Ron Neale continues his analysis of selector technologies, focusing on the mystery of Forming and a number of the many still-unanswered questions.
Any search for Forming-Free structures might find some help in the article by Antonin Verdy of Leti titled: Optimized Reading Window for Crossbar Arrays Thanks to Ge-Se-Sb-N-based OTS Selectors. This article also points to a possible cause for the difference between the values of Vf and Vo. The team at the CEA Leti MINATEC campus measured Forming and operating threshold voltages as a function of Sb content for GeSeSb (GSS) selector devices. When I replotted their published results, they showed a small indication that, as the Sb content of the glass increases, the difference between the Forming voltage and the operating voltage (Vf – Vo) decreases.
This might suggest, perhaps not unexpectedly, that it is the semiconductor chalcogenides in the glass which make the greatest contribution to the (Vf – Vo) difference, leading to the conclusion that Forming is more of a contact effect than a bulk effect, i.e. as the Sb content is increased there are fewer atoms of Se and Te in contact with the electrodes.
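For readers who want to try the same exercise on published data, a minimal sketch of the replot and trend check might look like this. The numbers below are hypothetical placeholders, not Leti's measurements; only the method (a degree-1 least-squares fit of Vf – Vo against Sb content) is the point:

```python
import numpy as np

# Illustrative placeholder values only -- NOT Leti's measured data.
sb_at_percent = np.array([5.0, 10.0, 15.0, 20.0])  # Sb content of the GSS glass (at%)
vf = np.array([4.0, 3.7, 3.5, 3.3])                # Forming voltage (V), hypothetical
vo = np.array([3.2, 3.1, 3.05, 3.0])               # operating threshold voltage (V), hypothetical

delta = vf - vo                                    # the (Vf - Vo) difference
slope, intercept = np.polyfit(sb_at_percent, delta, 1)

# A negative slope is the "small indication" described above:
# (Vf - Vo) shrinks as the Sb content rises.
print(f"d(Vf - Vo)/d(Sb at%) = {slope:.4f} V per at%")
```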
There is some evidence in the literature that, in some chalcogenide devices, antimony is the most mobile of the various elements, and this might suggest a contrary explanation: as the antimony moves away from one electrode, its concentration builds up at the other.
Let us assume for the moment that new experimental evidence can establish that an increased Sb content reduces the threshold voltage difference. This would suggest that a higher Forming voltage could be avoided by constructing a three-layer selector in which a thin layer of antimony is positioned between the GSS and each metal electrode, as shown in the figure below.
This would show whether it is possible to explain the changes in threshold voltage during Forming by the composition changes caused by electromigration alone. A change in composition in the direction of current flow effectively results in two devices in series with different threshold voltages, the sum of which is less than the Forming voltage Vf.
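As a toy illustration of that series argument (all voltages hypothetical): if Forming splits the film into two regions with different thresholds, the device thereafter switches at their sum, which sits below the one-off Forming voltage:

```python
# Hypothetical numbers for illustration only.
vf = 3.9          # one-off Forming voltage of the as-deposited film (V)
vth1 = 1.6        # threshold of the Sb-enriched region after electromigration (V)
vth2 = 1.9        # threshold of the Sb-depleted region (V)

vo = vth1 + vth2  # operating threshold of the two regions in series
assert vo < vf    # the post-Forming threshold sits below the Forming voltage
print(f"Vo = {vo:.1f} V, Vf = {vf:.1f} V, difference = {vf - vo:.1f} V")
```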
Then why not turn Forming into a virtue? We could search for a material composition where, after Forming, the threshold voltages of the planar regions (where electromigration has caused a change in composition) compensate for each other, resulting in a total threshold voltage equal to the Forming voltage (Vf). We can do this if we assume that Forming is the result of a large single-cycle electromigration effect.
In the early development of PCM, researchers attempted to find a material composition which could use planar layers, in the manner described in the previous paragraph, to compensate not for Forming, but for reductions in the threshold voltage that increased with the number of switching cycles. This approach was based on the assumption that the threshold voltage reduction resulted from composition changes caused by the effects of electromigration. The less-than-successful outcome of those experiments, in which this writer was involved, was most likely because element separation in PCMs was later found to be a 3D effect, as discussed in Padilla’s IEDM paper, Voltage polarity effects in GST-based Phase Change Memory: Physical Origins and Implications, which I reviewed in EE Times back in 2011.
From the very earliest days of PCM development Forming was recognized as a serious problem even for memory devices, which, because of their large process geometries, had to be fabricated in the amorphous state. This was at a time when the bit selectors were diodes or transistors. A solution that removed the high Forming voltage was described in the 1978 Grady Wood patent, in which memory material of a single composition is deposited in two layers, one amorphous and one crystallized. The Forming voltage of the amorphous layer is the same as the operating voltage of the two layers in series when the crystallized layer is reset, as illustrated in the figure below.
Discussion and Observations.
This post shows that scaling considerations must be added to the all-purpose physical model I described in Part 3 of this series. We have now moved into a world where selector and PCM device scaling is more about the number of atoms involved than about any physical dimension. By that metric today’s selectors and PCMs are already well into sub-1Megatom territory.
In that new world, when terms like melting, crystallization, and phase change are linked with percolation paths, we must consider what the minimum dimensions of a separate feature in an amorphous matrix might be. For example, in the sub-1Megatom device world it is suggested that the percolation paths in the two proposed models shown above would have a minimum possible size of the order of 2nm diameter, which, in a 20nm device, would be less than 10 atoms wide. All is not lost: for an understanding of melting at the scale of a few atoms, “configurons” (strings of broken bonds) may have the answer. For the moment I will leave that to others.
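The atom-count arithmetic can be checked on the back of an envelope, assuming a typical chalcogenide atomic density of about 3×10²² atoms/cm³ and a nearest-neighbour spacing of roughly 0.25nm (both round-number assumptions):

```python
ATOMS_PER_NM3 = 30          # ~3e22 atoms/cm^3, a typical chalcogenide glass density
ATOM_SPACING_NM = 0.25      # rough nearest-neighbour spacing

# A 20 nm cube of selector material:
device_volume_nm3 = 20 ** 3
atoms_in_device = device_volume_nm3 * ATOMS_PER_NM3
print(f"20 nm cube holds ~{atoms_in_device:,} atoms")   # well under a megatom

# A 2 nm diameter percolation path is only a handful of atoms wide:
path_width_atoms = 2.0 / ATOM_SPACING_NM
print(f"2 nm path is ~{path_width_atoms:.0f} atoms wide")
```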
From CEA-Leti’s work on selectors, the starting point for this series of articles, it is clear that individual homopolar bonds are the bad guys, even in larger-size structures, and heteropolar bonds are the good guys in terms of structural stability. This means it may not be out of the question, in the limit of device dimensions, that a line of regions or islands of permanently broken bonds, or of different bonding types, could provide a more conductive Formed percolation path.
If Forming is not the result of high current density, then the stumbling block is the lack of an agreed-upon explanation of threshold switching and of which conduction mechanism dominates when threshold switching occurs. By analogy, the situation might be considered similar to the partial-pressure gas laws in another part of physics: for amorphous solids, the observed conduction at any applied field is the sum of a number of different types of conductivity. It might even be possible to attribute each mechanism of conduction to the presence of a particular element or bond type, again in the manner of partial pressures, each acting independently and uncorrelated.
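That partial-pressure-style picture can be sketched with two made-up conduction channels. The functional forms are generic hopping and Poole-Frenkel shapes, and every parameter is an illustrative placeholder, not a value fitted to any real selector:

```python
import math

KT_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def sigma_hopping(field_v_per_m, t_kelvin):
    """A thermally activated, weakly field-dependent hopping-like term (placeholder)."""
    return 1e-9 * math.exp(-0.3 / (KT_PER_K * t_kelvin)) * (1 + 1e-8 * field_v_per_m)

def sigma_pf(field_v_per_m, t_kelvin):
    """A Poole-Frenkel-like term: trap barrier lowered as sqrt(field) (placeholder)."""
    barrier_ev = 0.5 - 2e-5 * math.sqrt(field_v_per_m)
    return 1e-6 * math.exp(-barrier_ev / (KT_PER_K * t_kelvin))

def sigma_total(field_v_per_m, t_kelvin):
    # Mechanisms add independently, like partial pressures in a gas mixture.
    return sigma_hopping(field_v_per_m, t_kelvin) + sigma_pf(field_v_per_m, t_kelvin)

low = sigma_total(1e6, 300.0)
high = sigma_total(1e8, 300.0)
print(f"sigma at 1 MV/m: {low:.3e} S/m, at 100 MV/m: {high:.3e} S/m")
```

At low field the two placeholder channels contribute comparably; at high field the Poole-Frenkel-like term dominates, illustrating how one mechanism can take over as the applied field changes.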
Real-world memory arrays have an added complication, since each memory cell consists of a PCM and a selector in series. Forming does not involve the selector alone, but must consider the behaviour of the PCM. If the first operation is a “read”, the selector will be Formed by the read current. If the first operation is a “reset”, the selector will be Formed with the much larger reset current required to return the PCM material from its crystalline to its amorphous state. Add to that the range of different temperatures over which those steps might occur. Some mitigation might be found in a read-before-write methodology, which would guarantee consistent conditions for Forming and would, incidentally, remove the longer Set time (i.e. for crystallization) from the write sequence on some of the occasions when a “Set” state is required.
The design of today’s stacked memory arrays forces the area of the selector to be the same as that of the memory device. This is complicated by the fact that the reset current must have sufficient amplitude to return the whole volume of the PCM to its reset state. The latter is an essential requirement to avoid the combined results of electromigration and electrostatic effects, which would serve to enhance or reduce the concentration of one element at any solid-liquid interface in the memory device, as I explained in an earlier Memory Guy post.
The common-area requirement and the need to reset the whole volume of the memory material introduce an additional constraint: that of finding a selector with electrical characteristics which are a perfect match to the memory’s reset current requirements. That means the only variables will be an increased reset time with a reduced reset current, or a reduced selector thickness to ease any power dissipation problems.
In the modern era, with selector threshold voltages in the range of 1 to 3 volts, the difference between the Forming and operating threshold voltages is in the range of 0.3 to 0.9 volts. However, at that level of scaling even those small absolute values still exact a proportional performance penalty when circuit designers engineer devices around them.
To give you my personal wish list, there are a number of experiments that I think might be worth pursuing and which might help in solving the mystery of what happens during Forming.
High on that list, and a key experiment, would be to reduce the area of the device to the dimensions of the Formed region. If the idea of an initiating link as a precursor to threshold switching is correct, then in these very small devices it should be possible to directly observe, electrically, the growth of any initiating link, whereas in a larger-area device it remains hidden almost until the two electrodes are bridged. The experimental solution to reducing the selector’s area (and capacitance) might be to use lateral “gap” structures. The experiment would then be to measure the Forming voltages for lateral or “gap” structures and compare them with the Forming voltages of a planar “pore” structure with the same selector material and the same electrode spacing.
This would naturally lead to a subset of Forming-related experiments to establish whether there is a difference between the Forming which occurs during threshold switching and the Forming which occurs during a reset current pulse of the level required to reset a PCM from its crystalline state.
This would require fabricating the same type of devices in both the crystallized and the amorphous state and measuring the operating threshold voltages to find whether there are any significant differences arising from the different Forming methods and applied currents. The all-purpose model, if correct, does reconcile the Forming which occurs during reset with that which occurs during the initial threshold switching event, both of which are driven by high current density.
A useful, and possibly enlightening, experiment would be to explore the change in leakage current after Forming as a function of device area. If the leakage is constant with area, it might suggest, or even provide evidence, that a percolation path is the result of Forming.
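A sketch of how that area experiment might be analysed (the leakage readings below are hypothetical): fit I = a·A + b and ask whether the area-dependent term matters. An area-independent leak points to a single Formed percolation path; a leak that scales with area points to bulk conduction:

```python
import numpy as np

# Hypothetical post-Forming leakage readings, not measured data.
area_nm2 = np.array([400.0, 1600.0, 6400.0, 25600.0])   # device areas (nm^2)
leak_na = np.array([2.1, 2.0, 2.2, 2.1])                # leakage currents (nA)

slope, intercept = np.polyfit(area_nm2, leak_na, 1)     # I = slope*A + intercept

# If the area term is small relative to the area-independent intercept even
# at the largest device, the leak looks filamentary (percolation path).
filamentary = abs(slope) * area_nm2.max() < 0.1 * intercept
print(f"slope = {slope:.2e} nA/nm^2, intercept = {intercept:.2f} nA, filamentary? {filamentary}")
```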
Now that a number of techniques have been developed for high-resolution, high-speed detailed analysis of the smallest device structures, it should be possible, with a little more development work, to resolve what actually happens during the Forming of selectors, without the results being biased by commercial considerations. Again, lateral structures might offer some help with that type of experiment.
While not directly linked to Forming another experiment on my wish list would be to observe the DC performance of selectors. For example: What happens if a selector is switched and left in the conducting state with a current just above the holding current? An equivalent test would be to use the selector in a free-running low current negative resistance oscillator as a test of endurance. It is a mystery why there is very little in the literature reporting such tests, especially now that selectors can be constructed in a much more repeatable manner.
In the past it was possible to ring microwave cavities and generate “chirp” pulses. It might be worth repeating that work using the rapid switching transitions of threshold switches and selectors, with the possibility that new applications are waiting in the wings.
I would also like to see a COMSOL thermal simulation model which includes a discontinuity in electrical conductivity at any selected temperature as well as locally in any hot spot.
My purpose in this article was to raise for discussion and highlight the questions relating to selector Forming. My starting point was the CEA-Leti paper which provided much by way of useful information on Forming and moved us away from the past where euphemisms like “initial seasoning procedure,” “initialization,” and “electrical activation” were used to hide any negative aspects of the need for Forming.
The idea of electrically making a device in situ, after you have carried out the physical memory array processing and fabrication, while at the same time not understanding the driving forces and what is actually happening during that final manufacturing step of Forming, hardly inspires confidence. The lack of a clear understanding of the nature of the changes which occur during the act of Forming may also exact a performance penalty, and could well be the reason why the established levels of endurance for selectors and PCMs are not yet met by today’s commercial PCM memory arrays.
I suggest that all those who want to add new possibilities to the long list of options that attempt to explain threshold switching in chalcogenide-based selectors and PCMs must include an explanation of how Forming fits into those theories.
The removal of the need for Forming might be helpful in the design of any future PCM and other types of memory arrays, as well as removing any as-yet hidden problems of reliability and limited performance. Of necessity I have only been able to provide a brief summary description of much of the important and impressive work in the references; any shortcomings in that respect are mine not theirs.
This post is the final part of a five-part series on 2-terminal chalcogenide-based memory selectors and Forming authored by Ron Neale and published in The Memory Guy blog in February-November 2019. The links below will take you to each part of the series:
5 thoughts on “NV Memory Selectors: Forming the Known Unknowns (Part 5)”
So does this mean Intel has some way of Gb to Tb-scale forming for its Optane products?
Frederick, difficult question.
If we go back to Intel’s original 3DXPoint/Optane announcement, there was at least a suggestion that they planned, or considered it possible, to add further layers to the 3D stack. That would give a factor of 4. Combine that with lithographic scaling by a factor of, say, 3 and that will get your 16GB Optane to a monolithic Terabit (Tb) density. More easily said than done.
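One plausible reading of that arithmetic, treating the scaling factor of 3 as a linear (lithographic) shrink so that it yields a ninefold areal density gain. These factors are assumptions for illustration, not an Intel roadmap:

```python
# Rough density arithmetic (assumptions, not a product roadmap).
first_gen_gb = 128            # a 16 GB first-generation die, expressed in Gb
layer_factor = 4              # adding further decks to the 3D stack
litho_linear = 3              # lithographic (linear) shrink factor

areal_factor = litho_linear ** 2          # a linear shrink squares the areal density
total_gb = first_gen_gb * layer_factor * areal_factor
print(f"~{total_gb} Gb, i.e. about {total_gb / 1024:.1f} Tb per die")
```

On these assumptions the result lands comfortably past the monolithic-Tb mark, which is consistent with the "more easily said than done" caveat: every factor in the product has to be delivered at once.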
At the moment, in large-array format, we do not know if it is possible to scale PCMs and threshold switches by a factor of three (that would be a 7nm device) without compromising any of their characteristics, especially write endurance. Then there is the problem of temperature: if you assume that for erase the PCM is still melted at around 600°C, then scaling brings all sorts of thermal problems. Sure, you can use thermal management algorithms to control the distribution of the erase operation, but in this futuristic scaled PCM array the devices are only a few atoms apart.
Even having solved all the potential problems associated with scaling to Tb levels, I do not think the need for Forming of both the PCM and the threshold switch will go away, nor will its possible impact on reliability in relation to the time and conditions under which it is carried out.
Will the present method of dealing with it suffice? Who knows? All and any of the above will cost a new sack load of additional money, which raises the question: “Are Intel and any partners prepared to make that investment?”
Other possibilities: a great deal of progress has been made in separating the thermal and electronic components involved in the switching of PCMs and threshold switches. Let us assume for the moment that, building on that work, what I will call a purely bond-switching-based device can be developed, one which does not require Forming; NV Terabits might then become a real possibility.
The other alternative would be to switch the memory part of the Optane array to a different technology. A bidirectional switching device like a ReRAM might be difficult, although not for the threshold-switch matrix-isolation part of the stack. At the moment only one technology on the horizon offers the right combination of simple structure, single material film (carbon-doped TMOs), unidirectional switching, and NV memory. That would be the correlated electron memory, CeRAM, which would remove the need for elevated-temperature melting. As it appears to be a bulk switching device, it has the ability to scale. (More new information on that device later; work in progress.)
The real answer to your question is: I do not know what Intel plan to do with Optane or how it fits into any of their future NV memory plans, least of all with respect to monolithic Tbs or TBs.
I have been reading with great interest this series of 5 articles on forming.
The third article included an excellent survey of the literature, which anchored this discussion in first-rate articles on physical models and device structure.
In this comment, I would like to clarify two points that were brought out in Ron’s answer to Frederick’s question.
First, the structures as mentioned in the original papers by Cohen, Kroll and Cohen, and De Gallo et al. in Part 3 of this series, are very relevant to both ReRAMs and PCMs. Although my knowledge of PCM is very limited, the structures of both technologies invoke the percolation paths which can be across two phases (like amorphous and crystalline) or via vacancies. The general model that Ron showed in part 3 is a visual description that pretty much covers the physics of percolation. The original understanding of forming was first based on the percolation model by Mann et al. in the sixties, and then analytical formulas were derived on the basis of those simulations.
Recently, an excellent paper by Mann et al. ( https://arxiv.org/pdf/2006.06744.pdf) entitled “Percolation in random graphs with higher-order clustering” covers, in my view, the seed or multiphase problem for growing the percolation cluster. When it comes to the increase in temperature, I would recommend the model by Neville Mott: an analytical model, based on percolation, called VRH (Variable Range Hopping).
In this model, electrons near the Fermi level, but locked into oxygen vacancies, hop from trap to trap. The current then follows Mott’s law, varying exponentially with the inverse fourth root of temperature. The temperature rises dramatically in this process as the electric field provides the energy for the hopping range. In some cases the device actually melts, and this is misunderstood as fatigue, when in fact it could best be described as extreme fatigue. The point that VRH arrives at is similar to Ron’s fourth stage in his figure.
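For reference, Mott’s 3D variable-range-hopping law can be sketched as follows; sigma0 and T0 are material parameters, and the values used here are illustrative placeholders, not fitted to any real device:

```python
import math

def mott_vrh_conductivity(t_kelvin, sigma0=1.0, t0_kelvin=1.0e8):
    """Mott 3D VRH law: sigma = sigma0 * exp(-(T0/T)**(1/4)).

    sigma0 and T0 are material parameters; the defaults are placeholders.
    """
    return sigma0 * math.exp(-((t0_kelvin / t_kelvin) ** 0.25))

# Conductivity rises steeply with temperature as the hopping range shrinks:
cold = mott_vrh_conductivity(200.0)
hot = mott_vrh_conductivity(400.0)
print(f"sigma(200 K) = {cold:.3e}, sigma(400 K) = {hot:.3e}")
```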
Now, this hopping can come from many sources: bulk defects, phase boundaries like amorphous/crystalline, and poor-quality sidewall etching.
In a two-layer ReRAM, which tries to match the crossover field with the forming field, the impression of “forming free” behavior does not resolve the forming problem; it is just that the Set field and the crossover field (through the interface between the two materials) are the same.
The problem shows up later as the filaments, due to forming, are still there in the non-stoichiometric or amorphous part of the device structure.
Given this background, I can now augment Ron’s comment with respect to CeRAM, the technology developed by Symetrix and licensed to ARM (Cerfe Labs).
The homogeneous structure of CeRAM and the carbon doping of the TMO eliminate forming simply by the device being “born” completely Ohmic. That is, from zero volts until Vreset, V = IR! (Some say that, naively, we just did not notice that the carbon is making filaments; unfortunately for that argument, at below 1 at%, which indicates traditional doping as in silicon, this is impossible.)
On thousands of samples, the “Forming of percolative paths (filaments)” is just not there. The bond between the metal and the carbon makes the material amphoteric, and the final behavior is that the insulator phase is a p-type semiconductor. Thus, with no filaments and with the metal-insulator transition occurring throughout the bulk, the CeRAM device does not suffer from percolative paths or bulk hopping. However, as with any of these devices, bad sidewall etching can cause hopping and heating… but then, this process problem depends on clever device structures and process technology. An update on how this gives CeRAM its unique attributes may be coming in several publications, and I hope Ron will do his magic in explaining it in one of his articles, in plain language we can all understand.
Thank you so much for a thorough analysis from a leading researcher in the field.
The Memory Guy wishes you every success!