Infineon Introduces NOR with an LPDDR Interface

[Image: Block diagram of a system superimposed over a ghost view of an auto, showing a microcontroller communicating with a SEMPER X1 LPDDR NOR flash chip.]

Infineon recently introduced a NOR flash chip with an LPDDR interface.  Some clients have asked The Memory Guy: “Why would Infineon have done that?”

After all, LPDDR is mostly used in cell phones, and these boot from the enormous NAND flash that’s already in the phone.  A byte of NAND is a couple of orders of magnitude cheaper than a byte of NOR, so a cell phone’s not going to use this part.

Infineon tells us that their target market is Continue reading “Infineon Introduces NOR with an LPDDR Interface”

New White Paper: The Future of the Data Center

[Image: First page of the white paper.]

The Memory Guy is pleased to announce the availability of a new Objective Analysis Brief, which is our name for a white paper.  It’s called The Future of the Data Center.

The paper explores the new horizons of computing, including disaggregation, AI, and the IoT, and explains the many different memory approaches being used or developed to enable these technologies, ranging from computational storage to DDR5 and CXL.

Look for it at the top of the list of free documents on our White Papers page at Objective-Analysis.com.

Samsung’s Aquabolt-XL Processor-In-Memory (Part 2)

[Image: Sketch of a sledgehammer driving a wedge into a log.]

Samsung has been strongly promoting its “Aquabolt-XL” Processor-In-Memory (PIM) devices for the past year.  In this second post of a two-part series The Memory Guy will present other companies’ similar PIM devices, and will discuss the PIM approach’s outlook for commercial success.

Part 1 of this series explains the concept of Processing in Memory (PIM), details Samsung’s Aquabolt-XL design, and shares some performance data.  It can be found HERE.


Samsung’s Not the First PIM Maker

This is not at all the first Continue reading “Samsung’s Aquabolt-XL Processor-In-Memory (Part 2)”

White Paper: The Future of Low-Latency Memory

[Image: Chart showing areas in the capacity-bandwidth space where DDR4, DDR5, HBM2E, and OMI fit.]

Tom Coughlin and I have just published a new white paper that is now available on the Objective Analysis website.  It examines the way that processors communicate with DRAM, and how problems that stem from loading get in the way of increasing speed.

We compare DDR against HBM (High Bandwidth Memory) and a newer Continue reading “White Paper: The Future of Low-Latency Memory”

Memory Market Falling, as Predicted

[Image: Memory price & cost behavior.]

It’s earnings call season, and we have heard of a slowing DRAM market and NAND flash price declines from Micron, SK hynix, Intel, and now Samsung.  DRAM prices have stopped increasing, and that can be viewed as a precursor to a price decline.

Samsung’s 31 October 2018 earnings call for 3Q18 vindicated Objective Analysis’ forecast of a 2H18 downturn in memories that will take the rest of the semiconductor market with it.

Those familiar with our forecast know that for a few years we have been predicting a downturn in the second half of this year, beginning as NAND flash prices fall and followed by a DRAM price collapse.  After the DRAM collapse, the rest of the semiconductor market will follow into a downturn.

We’ve been calling for this downturn for some time.  Dan Hutcheson at VLSI Research has been videotaping our forecast every December for the past Continue reading “Memory Market Falling, as Predicted”

Samsung’s Colossal 128GB DIMM

[Image: Samsung 128GB TSV RDIMM.]

In a November 25 press release Samsung introduced a 128GB DDR4 DIMM.  This is eight times the density of the largest broadly-available DIMM and rivals the full capacity of mainstream SSDs.

Naturally, the first question is: “How do they do that?”

To get all the chips into the DIMM format Samsung uses TSV interconnects on the DRAMs.  The module’s 36 DRAM packages each contain four 8Gb (1GB) chips, resulting in 144 DRAM chips squeezed into a standard DIMM format.  Each package also includes a data buffer chip, making the stack very closely resemble either the High-Bandwidth Memory (HBM) or the Hybrid Memory Cube (HMC).

Since these 36 packages (or worse, 144 DRAM chips) would overload the processor’s address bus, the DIMM uses an RDIMM protocol – the address and control pins are buffered on the DIMM before they reach the DRAM chips, cutting the processor bus loading by an order of magnitude or more.  RDIMMs are supported by certain server platforms.
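The capacity and loading figures above lend themselves to a quick back-of-the-envelope check.  Below is a minimal sketch using only the numbers quoted in this post (36 packages, four 8Gb dies per package); the assumption that the capacity beyond 128GB carries ECC, and the single-register load count, are my own illustrative inferences, not figures from Samsung.

```python
# Back-of-the-envelope arithmetic for Samsung's 128GB TSV RDIMM,
# using the package and die counts quoted in the post.

GBIT_PER_DIE = 8            # each DRAM die stores 8Gb (1GB)
DIES_PER_PACKAGE = 4        # four TSV-stacked dies per package
PACKAGES_PER_DIMM = 36      # 36 DRAM packages on the module

total_dies = PACKAGES_PER_DIMM * DIES_PER_PACKAGE        # 144 dies
raw_capacity_gb = total_dies * GBIT_PER_DIE // 8         # 144 GB raw
print(f"DRAM dies on the module: {total_dies}")
print(f"Raw capacity: {raw_capacity_gb} GB (128 GB usable; "
      "the remainder presumably holds ECC bits)")

# RDIMM buffering: the processor's address/command bus drives the
# DIMM's register rather than every package directly (illustrative).
loads_unbuffered = PACKAGES_PER_DIMM     # 36 loads without a register
loads_rdimm = 1                          # one register chip per DIMM
print(f"Address-bus loads: {loads_unbuffered} -> {loads_rdimm}, "
      f"a {loads_unbuffered // loads_rdimm}x reduction")
```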

The Memory Guy asked Samsung whether Continue reading “Samsung’s Colossal 128GB DIMM”

Why ST-MRAMs Need Specialized DDR3 Controllers

[Image: Everspin ST-MRAM press photo.]

Everspin and Northwest Logic have just announced full interoperability between Northwest Logic’s MRAM Controller Core and Everspin Technologies’ ST-MRAM (Spin-Torque Magnetic RAM) chips.  This interoperability has been proven in hardware on a Xilinx Virtex-7 FPGA and is now available for designs that need low-latency, high-throughput memory based on MRAM technology.

Since The Memory Guy knew that Everspin’s EMD3D064M ST-MRAM was fully DDR3 compatible, I had to wonder why the part would require a special controller – couldn’t it simply be controlled by any DDR3 controller?

Everspin’s product marketing director, Joe O’Hare, took the time to Continue reading “Why ST-MRAMs Need Specialized DDR3 Controllers”

Intel to Use Micron Hybrid Memory Cube

Micron: "Bursting Through The Memory Wall"Intel and Micron today announced that the new version of Intel’s Xeon Phi, a highly parallel coprocessor for research applications, will be built using a custom version of Micron’s Hybrid Memory Cube, or HMC.

This is only the second announced application for this new memory product – the first was a Fujitsu supercomputer back in November.

For those who, like me, were unfamiliar with the Xeon Phi, it’s a module that uses high core-count processors for problems that can be solved with high degrees of parallelism.  My friend and processor guru Nathan Brookwood tells me Continue reading “Intel to Use Micron Hybrid Memory Cube”

Spansion’s Super-Fast HyperFlash NOR

Comparing Spansion's HyperFlash against the speed of alternative interfacesSpansion recently introduced a NOR flash that the company boasts is the: “World’s fastest NOR flash memory”.  Named HyperFlash, the chip taps into high-speed SPI interface, doubling its width and adding a differential clock to run at an I/O rates as high as 333MB/s.

In this post’s graphic (click to enlarge) Spansion compares the HyperFlash chip’s sustained read rate (right-hand column) to that of (from left to right) asynchronous parallel NOR, single-bit SPI, industry-standard DDR Quad SPI, and Spansion’s faster rendition of DDR Quad SPI, which, Spansion tells us, was until now the fastest flash on the market.  The company points out that HyperFlash is five times the speed of industry-standard Continue reading “Spansion’s Super-Fast HyperFlash NOR”
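For the curious, the 333MB/s figure fits neatly with the interface description above.  Here is a minimal sketch of that arithmetic; the implied clock frequency it prints is my own derivation from the quoted rate, not a number the excerpt states.

```python
# Sketch relating HyperFlash's quoted 333MB/s peak I/O rate to the
# interface description: an 8-bit bus (Quad SPI width doubled)
# moving data on both clock edges (DDR).

BUS_WIDTH_BITS = 8        # Quad SPI's 4-bit bus doubled to 8 bits
EDGES_PER_CYCLE = 2       # DDR: data on rising and falling edges
QUOTED_RATE_MB_S = 333    # peak I/O rate cited by Spansion

bytes_per_edge = BUS_WIDTH_BITS / 8                       # 1 byte
bytes_per_cycle = bytes_per_edge * EDGES_PER_CYCLE        # 2 bytes
implied_clock_mhz = QUOTED_RATE_MB_S / bytes_per_cycle    # ~166 MHz

print(f"Bytes transferred per clock cycle: {bytes_per_cycle:.0f}")
print(f"Implied differential clock: ~{implied_clock_mhz:.0f} MHz")
```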

Rambus and Micron Sign License Agreement

The following is excerpted from an Objective Analysis Alert that can be downloaded from the company’s website.

[Image: Micron licenses Rambus IP.]

Rambus and Micron announced on Tuesday that they have signed a patent cross-license agreement.  Micron receives rights to Rambus’ IC patents, including those covering memories.  Both Micron and Elpida products will be covered.  The companies have thus settled all outstanding patent and antitrust claims in their 13-year court battle.

Micron will make royalty payments to Rambus of up to $10 million per quarter over the next seven years, totaling $280 million, after which Micron will receive a perpetual, paid-up license.
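The totals above are simple to sanity-check; here is a minimal sketch of the arithmetic behind the $280 million figure, using only the per-quarter cap and term quoted in the Alert.

```python
# Royalty arithmetic from the Rambus-Micron agreement as quoted above:
# up to $10 million per quarter for seven years.

quarterly_cap_musd = 10          # maximum payment per quarter, $M
quarters = 7 * 4                 # seven years of quarterly payments

total_musd = quarterly_cap_musd * quarters
print(f"Maximum total royalties: ${total_musd} million")   # $280 million
```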

Rambus and Micron both have Continue reading “Rambus and Micron Sign License Agreement”