Processing in Memory

UPMEM Processor-in-Memory at HotChips Conference

[Photo: UPMEM DIMMs in a Server]

This week’s HotChips conference featured a concept called “Processing in Memory” (PIM) that has been around for a long time but that hasn’t yet found its way into mainstream computing.  One presenter said that his firm, a French company called UPMEM, hopes to change that.

What is PIM all about?  It’s an approach to improving processing speed by taking advantage of the extraordinary amount of bandwidth available within any memory chip.

The arrays inside a memory chip are pretty square: A word line selects a large number of bits (tens or hundreds of thousands), which all become active at once, each on its own bit line.  Then these myriad bits slowly take turns getting onto the chip’s few I/O pins.
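
A quick back-of-the-envelope calculation shows why that internal parallelism is so tempting.  The figures below are illustrative assumptions for a single DRAM chip, not any vendor’s specifications:

```python
# Sketch of the PIM argument: compare the bandwidth available inside a
# DRAM array (one full row at a time) against what escapes through the
# chip's I/O pins.  All numbers here are assumed, round figures.

ROW_BITS = 64 * 1024     # bits activated by one word line (assumed)
ROW_CYCLE_NS = 50        # time to open and close a row, ns (assumed)
IO_WIDTH_BITS = 8        # external data-path width of one chip (assumed)
IO_RATE_GBPS = 3.2       # per-pin transfer rate, Gb/s (assumed, DDR4-ish)

# Bits per nanosecond is the same number as gigabits per second.
internal_gbps = ROW_BITS / ROW_CYCLE_NS
external_gbps = IO_WIDTH_BITS * IO_RATE_GBPS

print(f"internal: {internal_gbps:.0f} Gb/s, external: {external_gbps:.1f} Gb/s")
print(f"ratio: {internal_gbps / external_gbps:.0f}x")
```

Even with these conservative assumptions the array delivers an order of magnitude more bandwidth than the pins can carry, which is exactly the gap PIM designs try to exploit.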

High-Bandwidth Memory (HBM) and the Hybrid Memory Cube (HMC) try to get past this bottleneck by stacking special DRAM chips and running… Continue reading

Micron Announces Processor-In-Memory

[Photo: Micron’s Automata Processor on a standard DDR3 DIMM (Micron press photo)]

During the Supercomputing Conference in Denver today Micron Technology announced its new twist on processing: a DRAM chip with an array of built-in processors.

Dubbed “The Automata Processor,” this chip harnesses the inherent internal parallelism of DRAM chips to support a parallel data path of about 50,000 signals, attaining processor-DRAM bandwidth that can only be dreamed of with conventional DRAM interfaces.  The processor uses a graph-oriented architecture.
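
The graph-oriented model here is that of a nondeterministic finite automaton (NFA): every active state examines each incoming symbol at the same time, which is a natural fit for a DRAM row firing thousands of bits at once.  The toy simulator below mimics that model in software; the automaton, state names, and `run_nfa` API are illustrative assumptions, not Micron’s actual tool chain:

```python
# Toy NFA simulator: on each input symbol, all active states advance in
# one step -- the operation the Automata Processor performs in parallel
# across the DRAM array.

def run_nfa(transitions, start, accepting, text):
    """Return True if the NFA accepts `text`.

    `transitions` maps a state to a list of (symbol, next_state) pairs.
    """
    active = {start}
    for ch in text:
        # In hardware every active state fires simultaneously;
        # in software we loop over them.
        active = {nxt
                  for st in active
                  for (sym, nxt) in transitions.get(st, [])
                  if sym == ch}
        if not active:          # no state survived: reject early
            break
    return bool(active & accepting)

# Hypothetical automaton matching the pattern "ab*c":
transitions = {
    "q0": [("a", "q1")],
    "q1": [("b", "q1"), ("c", "q2")],
}
print(run_nfa(transitions, "q0", {"q2"}, "abbbc"))  # True
print(run_nfa(transitions, "q0", {"q2"}, "ac"))     # True
print(run_nfa(transitions, "q0", {"q2"}, "abca"))   # False
```

The appeal of doing this in memory is that the “all active states advance at once” step, which software must serialize, happens in a single cycle across the whole array.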

The chip lends itself to… Continue reading

A Change to Computing Architecture?

[Photo: Venray’s TOMI Die Layout]

I got a phone call yesterday from Russell Fish of Venray Technology. He wanted to talk about how and why computer architecture is destined for a change.

I will disclose right up front that he and I were college classmates.  Even so, I will do my best to give the unbiased viewpoint that my clients expect of me.

Russell is tormented by an affliction that troubles many of us in technology: We see the direction that technology is headed, then we consider what makes sense, and we can’t tolerate any conflicts between the two.

In Russell’s case, the problem is the memory/processor speed bottleneck.

Continue reading