Last year I stumbled upon something on the Internet that I thought would be fun to share. It’s the picture on the left, from a 1978 book by Laurence Allman: Memory Design: Microcomputers to Mainframes. The picture’s not too clear, but it is a predecessor to a graphic of the memory/storage hierarchy that The Memory Guy often uses to explain how various elements (HDD, SSD, DRAM) fit together.
On the horizontal axis is Access Time, which the storage community calls latency. The vertical axis shows cost per bit. The chart uses a log-log format: both the X and Y axes are in orders of magnitude. This allows a straight line to be drawn through the points that represent the various technologies, and prevents most of the technologies from being squeezed into the bottom left corner of the chart.
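To make the presentation concrete, here is a minimal matplotlib sketch of the same log-log treatment, using made-up latency and cost figures for a roughly modern hierarchy (the numbers are illustrative placeholders, not quoted prices); the point is that log scales on both axes spread the tiers out instead of piling most of them into one corner:

```python
# Minimal log-log hierarchy chart in the spirit of the 1978 graphic.
# The latency and cost figures below are illustrative placeholders only.
import matplotlib.pyplot as plt

tiers = {
    "SRAM": (1e-9, 1e3),   # (access time in seconds, cost in $/GB)
    "DRAM": (1e-7, 7),
    "SSD":  (1e-4, 0.1),
    "HDD":  (1e-2, 0.02),
}

fig, ax = plt.subplots()
for name, (latency, cost) in tiers.items():
    ax.scatter(latency, cost)
    ax.annotate(name, (latency, cost))

ax.set_xscale("log")   # orders of magnitude on both axes,
ax.set_yscale("log")   # so the points fall roughly on a straight line
ax.set_xlabel("Access time (seconds)")
ax.set_ylabel("Cost ($/GB)")
plt.show()
```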
What I find fascinating about this graphic is not only the technologies that it includes but also the way that it’s presented. First, let’s talk about the technologies.
At the very top we have RAM: “TTL, ECL, and fast MOS static types.” TTL and ECL, technologies that are seldom encountered today, were the mainstays of electronics in the ’60s and ’70s. TTL ran up to about 50MHz; for faster speeds ECL was used, but it was a difficult and costly kind of logic to work with, so most designers were never exposed to it. The “MOS static types” were that era’s version of today’s common SRAMs. The next tier down was “4-k and 16-k MOS dynamic types,” commonly known today as DRAMs. Today’s 4Gb DRAM chip contains one million times as many bits as its 4kb predecessor of the 1970s.
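That factor of one million is easy to sanity-check, counting bits the way DRAM densities are counted (in powers of two):

```python
# 4Gb today vs. 4kb in the late 1970s
bits_4kb = 4 * 2**10
bits_4gb = 4 * 2**30
print(bits_4gb // bits_4kb)   # 1048576 -- about one million
```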
Below these two technologies things get really different. “CCD memory (low latency, short register)” is the same technology that was used for most image sensors until a little over a decade ago, but in this case it was used for memory chips. These CCDs would shift bits through an array of shift registers: if you needed a certain piece of data you had to wait until it came around to the I/O port. You might note that this is very similar to the way an HDD works. CCD memories were intended to be cheaper than DRAM but were a lot slower. Although the book calls them out, they didn’t find much acceptance and were short-lived.
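For readers who have never run into a serial memory, here is a toy Python sketch of the idea; it is not a model of any real CCD part, and the sizes and timing are made up, but it shows why access time depends on how far the data has to travel around the loop:

```python
# Toy model of a serial (shift-register) memory such as a CCD memory:
# the bits circulate through a ring of cells, and a read has to wait
# until the requested cell reaches the single I/O tap -- much like
# waiting for a sector to rotate under an HDD head.
class SerialMemory:
    def __init__(self, data, shift_time_ns=1.0):
        self.cells = list(data)          # the circulating bits
        self.position = 0                # index currently at the I/O port
        self.shift_time_ns = shift_time_ns

    def read(self, index):
        """Return (value, latency_ns); latency grows with the distance
        the requested bit still has to shift to reach the I/O port."""
        distance = (index - self.position) % len(self.cells)
        self.position = index            # the register has now shifted here
        return self.cells[index], distance * self.shift_time_ns

mem = SerialMemory([0, 1] * 512)         # a 1,024-bit loop
value, latency_ns = mem.read(700)        # worst case is nearly a full lap
print(value, latency_ns)
```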
Below that are bubble memories: “Bubbles (long latency, long register).” These involved longer shift registers and promised to be even cheaper than CCDs. Unfortunately, bubbles were only available for a very short period and proved too challenging to manufacture, so the technology vanished from the market.
You might be asking: “What the heck is a bubble memory?” This was a magnetic memory constructed by depositing a metal pattern on top of a magnetic substrate. A current in the metal would form a magnetic “bubble” (a localized domain of magnetization), and other currents pushed the bubble from stage to stage in a shift register. It used principles that were well understood in the magnetic recording industry. Some of these same principles are still used in MRAMs. IBM’s Racetrack Memory uses currents the same way to move magnetized domains around in a strip of magnetic material.
Finally, at the very bottom of the chart, is “Moving Head (magnetic),” which is another way to say “HDD.” This shows that HDDs have a lot of staying power. Of the seven technologies in this chart (ECL, TTL, and MOS SRAMs; DRAMs; CCDs; bubble memories; and HDDs), only SRAM, DRAM, and HDD remain, and the SRAM in computing systems is now embedded within the CPU chip.
So let’s move to the way that these technologies are presented. Along the X-axis access time is in seconds, with 1 second on the far right, followed by 10⁻⁵ and 10⁻⁸ seconds. I couldn’t even begin to explain why these figures were chosen, since they aren’t equally separated. In today’s way of speaking 10⁻⁵ seconds is 10 microseconds (10μs) and 10⁻⁸ seconds is 10ns. The Y-axis displays cents/bit, with marks of 1 at the top, 10⁻² (1/100th), 10⁻⁴ (1/10,000th), and 10⁻⁶ (1/1,000,000th). Today we would express these same numbers in dollars per gigabyte: the top would be $10 million/GB (you read that right), followed by $100K/GB, $1K/GB, and $10/GB. DRAM today sells for around $7/GB, so today’s DRAM costs about what HDD capacity cost in 1978.
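The unit conversion itself is a quick exercise. The rough sketch below treats a gigabyte as about 10⁹ bits, which reproduces the round figures above; a strict 8 × 2³⁰ bits per gigabyte would put each figure roughly eight times higher:

```python
# Rough conversion from the chart's cents-per-bit axis to $/GB.
BITS_PER_GB = 1e9   # round-number assumption; 8 * 2**30 would be stricter

def cents_per_bit_to_dollars_per_gb(cents_per_bit):
    return cents_per_bit / 100 * BITS_PER_GB

for cents in (1, 1e-2, 1e-4, 1e-6):
    print(f"{cents:g} cents/bit -> ${cents_per_bit_to_dollars_per_gb(cents):,.0f}/GB")
# 1 cents/bit      -> $10,000,000/GB
# 0.01 cents/bit   -> $100,000/GB
# 0.0001 cents/bit -> $1,000/GB
# 1e-06 cents/bit  -> $10/GB
```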
In the chart below I have added modern metrics to the older chart to make it a little easier to understand:
The industry has moved a very long way over the last 40 years, but I need not mention this to anyone who’s involved in semiconductors. In 1978 a Silicon Valley home cost about $100,000, or about the cost of a gigabyte of DRAM. Today, 40 years later, the average Silicon Valley home costs about $1 million and a gigabyte of DRAM costs about $7.
There was one more somewhat similar chart in this book that caught my eye and it appears below.
This one puts specific technologies and densities for DRAMs and SRAMs onto the chart. For a change this chart is semilogarithmic and shows the access time in nanoseconds along the X-axis. Although the memories at the very top of the chart are simply labeled “static” I assume that these are ECL. Next come the TTL memories, and after that I²L, which wasn’t mentioned before. This is integrated injection logic, a type of logic that was not widely sourced and had a relatively short life. Below the I²L devices you will find “C-MOS on sapphire,” a predecessor to today’s silicon-on-insulator (SOI) technologies that was also short-lived, largely due to its high cost. What I find interesting in this chart is the relationship between the dynamic (DRAM) and static (SRAM) technologies. Today’s SRAMs are all significantly faster than their DRAM counterparts, but this appears not to have been true in 1978.
When I found the book a PDF was available online, but that PDF is no longer on the McGraw Hill website, and a few web searches for the book’s title and author came up with nothing. It may have been lost to the sands of time.