Review: Dark Silicon and the End of Multicore Scaling

Paper link: p365-esmaeilzadeh

Premise

Single-core scaling stopped some number of years ago. But we’re okay, because we now have the “multi-core revolution” (I realllly hate that phrase). Or are we?

It turns out that another, very real wall is blocking future progress: power density. CPU designs already strive mightily to shut down or slow down underutilized circuitry to save power. But we’re reaching the point where processors simply won’t ever be able to run “full-out”.

This paper explores the future of scaling by extrapolating based on a wide range of processors over time, and they find, rather compellingly, that we’re pretty much boned.

Details

The paper is pretty dense, but it boils down to the creation of a model parameterized by processor features:

  • number of cores
  • number of threads
  • CPU frequency
  • L1/L2 cache and DRAM size, speed, miss rate
  • DRAM fetch width

and code features:

  • % code that is parallel
  • % read instructions

You can fit this model to existing processors, using their benchmark performance, power consumption, and die area.
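
To make that concrete, here’s a toy sketch (in Python) of the kind of model you could build from those parameters: a per-core throughput estimate driven by frequency and cache miss rates, plugged into Amdahl’s law. The function names, miss penalties, and numbers below are my own made-up assumptions for illustration, not the paper’s actual equations.

    # Toy sketch of the kind of model the paper fits -- not the authors' equations.
    # All parameter names, miss penalties, and numbers are illustrative assumptions.

    def core_perf(freq_ghz, l1_miss, l2_miss, l1_penalty=10, l2_penalty=100):
        """Instructions/sec for one core: frequency divided by an average CPI
        built from cache miss rates and assumed miss penalties (in cycles)."""
        cpi = 1.0 + l1_miss * l1_penalty + l1_miss * l2_miss * l2_penalty
        return freq_ghz * 1e9 / cpi

    def chip_perf(n_cores, parallel_frac, freq_ghz, l1_miss, l2_miss):
        """Amdahl's law over per-core throughput: the serial fraction of the code
        runs on one core, the parallel fraction spreads across all of them."""
        one_core = core_perf(freq_ghz, l1_miss, l2_miss)
        serial_time = (1.0 - parallel_frac) / one_core
        parallel_time = parallel_frac / (one_core * n_cores)
        return 1.0 / (serial_time + parallel_time)

    # The real paper fits its free parameters against measured chips; here we
    # just sweep the core count with everything else held fixed.
    for cores in (4, 16, 64, 256):
        perf = chip_perf(cores, parallel_frac=0.95, freq_ghz=3.0,
                         l1_miss=0.05, l2_miss=0.2)
        print(f"{cores:4d} cores: {perf / 1e9:6.2f} GIPS")

Even in this toy version you can see the Amdahl ceiling: with 95% parallel code, going from 64 to 256 cores barely moves the needle.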

Once the model is created, the fun starts: we can see how performance will scale with future process improvements (moving to 32, 22, 16…nm), or, if we assume highly parallel code, how to optimize our CPU design (lots of cores)!
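
Here’s a back-of-the-envelope version of that process-scaling sweep: hold the chip’s power budget fixed, double the core count each node, and let per-core power drop more slowly than area. The 125 W budget and the scaling factors are my own illustrative guesses, not the paper’s fitted device model.

    # Back-of-the-envelope "dark silicon" sweep -- every scaling factor here is
    # an illustrative assumption, not the paper's fitted device model.

    NODES_NM = [45, 32, 22, 16, 11, 8]   # successive process nodes
    TDP_WATTS = 125.0                    # fixed chip power budget (assumed)
    BASE_CORE_WATTS = 10.0               # per-core power at 45 nm (assumed)
    BASE_CORES = 8                       # cores that fit on the die at 45 nm

    for i, node in enumerate(NODES_NM):
        area_scale = 2.0 ** i            # ideal scaling: core count doubles per node
        power_scale = 0.75 ** i          # optimistic: per-core power drops ~25% per node
        cores_on_die = BASE_CORES * area_scale
        watts_per_core = BASE_CORE_WATTS * power_scale
        cores_powered = min(cores_on_die, TDP_WATTS / watts_per_core)
        dark_fraction = 1.0 - cores_powered / cores_on_die
        print(f"{node:3d} nm: {cores_on_die:6.0f} cores fit, "
              f"{cores_powered:6.1f} can run, {dark_fraction:5.1%} dark")

Even with those generous numbers, the fraction of the die you can actually power shrinks every generation, which is exactly the paper’s point.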

The short of it

If we make optimistic assumptions about how power consumption drops with process scaling, we still end up with the sobering conclusion that we’re rapidly reaching the point where we won’t be able to power all of the parts of a chip at once. 128 cores aren’t much use if we can’t keep them all running.

On the plus side, I can buy 1000 watt power supplies now, so I’m looking forward to my 16 processor, 16 core machine of the future. Welcome to the multiprocessor revolution!
