Samsung’s release tells us that the SuperMUC, the most powerful supercomputer system in Europe, is an IBM System x iDataPlex dx360 M4 server built using over 18,000 Intel Xeon CPUs and over 80,000 4GB DRAM modules from Samsung. (Simple math makes this out to be 82,944 modules.)
That sounds like a lot of silicon! Let's work out just how much it is.
A 4GB parity DRAM module uses nine 4Gb DRAM chips (eight for data plus one for parity), which Samsung appears to manufacture on a 35nm process. This means that the 82,944 DIMMs in this system account for 746,496 chips.
Objective Analysis estimates that 770 of these chips would fit onto a single 300mm wafer, so about 970 wafers would be required to build the DRAM for this system. Most of Samsung's DRAM wafer fabs run about 60,000 wafers per month (roughly 2,000 wafers per day), so this single system consumed only about half a day's output from one of these fabs.
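For readers who want to check the arithmetic, the chip, wafer, and fab-output figures above can be sketched as:

```python
# Back-of-the-envelope check of the chip and wafer counts above.
dimms = 82_944                          # 4GB modules in the SuperMUC
chips_per_dimm = 9                      # eight 4Gb data chips plus one parity chip
chips = dimms * chips_per_dimm          # 746,496 chips

chips_per_wafer = 770                   # Objective Analysis estimate, 300mm wafer
wafers = -(-chips // chips_per_wafer)   # ceiling division -> 970 wafers

wafers_per_day = 60_000 / 30            # ~2,000 wafers/day from one fab
days_of_output = wafers / wafers_per_day

print(chips, wafers, days_of_output)    # 746496 970 0.485
```

Roughly half a day of one fab's output, as stated above.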
What does this look like from a financial standpoint? At a price of $10.45 per GB (an average for last year) this would have put $3.5 million into Samsung’s pockets. Our model puts the manufacturing cost of each chip at $2.10, which would have given Samsung a gross profit of about $2 million. It is likely that this DRAM actually sold for significantly less – today DRAM sells for as little as $3.24 per gigabyte, which would translate to a $320,000 loss for Samsung.
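Using the average-price and cost figures above, the revenue and gross-profit estimates work out as follows (the loss at today's lower price depends on additional cost assumptions, so it is omitted here):

```python
# Revenue and gross-profit estimate for the SuperMUC's DRAM.
gigabytes = 82_944 * 4                # 331,776 GB of DRAM in the system
price_per_gb = 10.45                  # last year's average selling price, USD
revenue = gigabytes * price_per_gb    # ~$3.47 million

chips = 82_944 * 9                    # nine 4Gb chips per parity DIMM
cost_per_chip = 2.10                  # Objective Analysis cost model, USD
cost = chips * cost_per_chip          # ~$1.57 million

gross_profit = revenue - cost         # ~$1.9 million
print(f"revenue ${revenue:,.0f}, gross profit ${gross_profit:,.0f}")
```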
Compare the $3.5 million DRAM cost to the cost of the system's 3.52 megawatts of power consumption, which would be somewhere on the order of $3 million per year even before cooling costs are considered. And that's despite this system being believed to be the most energy-efficient x86-based system in the world. It's easy to see why saving energy in a system like this really matters!
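As a rough check on that power figure, here's the annual electricity bill at an assumed industrial rate of about $0.10 per kWh (the rate is my assumption, not a figure from Samsung's release):

```python
# Rough annual electricity cost for the system's 3.52 MW draw.
power_kw = 3_520                     # 3.52 MW system power consumption
hours_per_year = 24 * 365            # 8,760 hours
rate_per_kwh = 0.10                  # assumed industrial rate, USD/kWh

annual_cost = power_kw * hours_per_year * rate_per_kwh
print(f"${annual_cost:,.0f} per year")   # $3,083,520 per year
```

That lands right around the $3 million per year cited above, before any cooling costs.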