A dean of high performance computing says silicon is at the end of the line.
High-performance computing expert Thomas Sterling would like you to know that a computing goal you've never heard of will probably never be reached. The reason you should care is that it means the end of Moore's Law, which says that roughly every 18 months the amount of computing you get for a buck doubles.
Or at least, the end of Moore's Law-style advances in the processing power of the world's biggest supercomputers. For a while now, every 11 years or so, the planet's smartest and best-funded computer scientists have managed to produce a supercomputer that's 1,000 times faster than its predecessor. In 1997, we reached terascale computing, or a trillion (10^12) floating point operations per second. In 2008, Los Alamos' Roadrunner supercomputer reached petascale computing, or a quadrillion (10^15) floating point operations per second.
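For scale, each named milestone is a factor of 1,000 above the last. A quick back-of-envelope sketch (plain Python, purely illustrative; the milestone years are the commonly cited ones, not from Sterling's interview) of the progression:

```python
import math

# Named FLOPS milestones and their base-10 exponents.
milestones = {
    "terascale": 12,   # 10^12 FLOPS, reached in 1997 (ASCI Red)
    "petascale": 15,   # 10^15 FLOPS, reached in 2008 (Roadrunner)
    "exascale": 18,    # 10^18 FLOPS, the next milestone
    "zettascale": 21,  # 10^21 FLOPS, the one Sterling doubts
}

# Each step up is 10^3 = 1,000x more operations per second.
factor = 10 ** (milestones["petascale"] - milestones["terascale"])
print(factor)  # 1000

# If performance only doubled every 18 months (the Moore's Law
# cadence quoted above), a 1,000x jump would take log2(1000)
# doublings, i.e. about 15 years:
years = math.log2(1000) * 1.5
print(round(years, 1))  # 14.9
```

In other words, the ~11-year cadence between supercomputing milestones has actually been running ahead of the plain Moore's Law doubling rate.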
In a mind-blowingly jargon-rich interview with HPCwire, Sterling doesn't just assert that zettascale (10^21 FLOPS) computing is impossible; he also makes it seem pretty unlikely we're going to reach the next milestone, exascale computing (10^18 FLOPS), without ripping apart our existing ways of building supercomputers, root and branch.