Dubbed “The Automata Processor,” this chip harnesses the inherent internal parallelism of DRAM to support a parallel data path of about 50,000 signals, attaining processor-to-DRAM bandwidth that conventional DRAM interfaces can only dream of. The processor uses a graph-oriented architecture.
The chip lends itself to highly parallel problems like video analytics and network security. It can evaluate 6.6 trillion decision paths per second, and is being prototyped on standard DDR3 DIMMs with two added signal lines.
The support software is already in place and closely resembles place-and-route software: it solves optimization problems similar to linear programming, and offers both a code interface and a visual one. Micron has named it ANML, for “Automata Network Markup Language.”
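To give a feel for what “automata processing” means, here is a minimal software sketch of the kind of nondeterministic finite automaton (NFA) evaluation such a chip performs in hardware, where every active state advances in parallel on each input symbol. The state names, transition table, and example pattern below are hypothetical illustrations, not Micron's ANML interface.

```python
def run_nfa(transitions, start_states, accept_states, text):
    """Advance every active state in parallel on each input symbol --
    a software stand-in for the chip evaluating all decision paths at once."""
    active = set(start_states)
    for symbol in text:
        next_active = set()
        for state in active:
            # Follow every transition available from this state on this symbol.
            next_active |= transitions.get((state, symbol), set())
        active = next_active
    # Accept if any surviving path ended in an accepting state.
    return bool(active & accept_states)

# Toy automaton recognizing strings that contain the substring "ab":
transitions = {
    ("s0", "a"): {"s0", "s1"},  # stay put, or guess the match starts here
    ("s0", "b"): {"s0"},
    ("s1", "b"): {"s2"},        # "ab" seen; s2 is accepting
    ("s2", "a"): {"s2"},
    ("s2", "b"): {"s2"},
}
print(run_nfa(transitions, {"s0"}, {"s2"}, "bbaab"))  # True
print(run_nfa(transitions, {"s0"}, {"s2"}, "ba"))     # False
```

The point of the hardware is that the inner loop over active states, which a CPU must serialize, happens simultaneously across thousands of state elements on the memory die.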
This project, which Micron tells me has been in the making for seven years, has tapped into university research on parallel processing. Micron’s press release quotes Georgia Tech’s Chirag Dekate and Michela Becchi of the University of Missouri. The release also discloses Micron’s plan to establish the Center for Automata Computing at the University of Virginia. Dr. Becchi has applied for a patent on a similar-looking technology, which might shed some light on the Automata Processor’s inner workings for those brave enough to read it.
The notion of putting processors onto memory chips is nothing new. Nearly two years ago The Memory Guy posted another story about a memory-based processor. The concept has been around for quite some time, but the market has never seen the right confluence of demand and technology to bring it into being. Micron is banking on the idea that things will be different with the advent of Big Data.
During a briefing, Micron’s Director of Automata Processor Development Paul Dlugosch said the company hopes to use the Automata Processor to debunk the timeworn maxim: “Speed, Power, Cost – pick any two.” He anticipates that Automata will make it possible to have all three. If he’s right, the chip could create a new market and drive a significant departure from the trajectory computing has followed for the past half century.