Posted By Rich Miller On December 10, 2010 @ 8:29 am In HPC
Now that major supercomputers exceed a petaflop, or one quadrillion calculations per second, supercomputing researchers have been discussing the potential for “exascale computing.” Experts in high-performance computing say exascale computing is attainable, but will require dramatic changes in both hardware and software design, as outlined in an article at the Institution of Engineering and Technology (link via InsideHPC).
What’s the primary challenge? A familiar story for data center professionals: the power bill. “There are a few very hard problems we have to face in building an exascale computer,” explained Wilfried Verachtert, high-performance computing project manager at Belgian research institute IMEC. “Energy is number one. Right now we need 7,000MW for exascale performance. We want to get that down to 50MW, and that is still higher than we want.”
Yes, that’s 7 gigawatts of power for an exascale computer. Verachtert says that’s roughly the output of 14 nuclear reactors. How does it compare to today’s data center power usage? It’s more than 100 times the power required to operate the 700,000-square-foot Microsoft data center in Chicago, which uses about 60 megawatts.
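A quick back-of-the-envelope check shows how the figures above hang together. This is just a sketch using the article's own numbers; the per-reactor output is derived from them, not quoted independently.

```python
# Sanity-check the power figures quoted in the article.
exascale_mw = 7_000        # ~7 GW for exascale at today's efficiency, per Verachtert
target_mw = 50             # Verachtert's stated goal
microsoft_chicago_mw = 60  # Microsoft's Chicago data center
reactors = 14              # reactors Verachtert equates to 7 GW

print(exascale_mw / reactors)              # implied output per reactor: 500 MW
print(exascale_mw / microsoft_chicago_mw)  # ~117x the Chicago facility
print(exascale_mw / target_mw)             # a 140x efficiency gap to close
```

The 500 MW per reactor implied here is a plausible figure for a mid-size nuclear unit, which is why the 14-reactor comparison holds up, and the 140x gap between 7,000 MW and the 50 MW target is the crux of the exascale power problem.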
Read more at the IET website.
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2010/12/10/exascale-computing-gigawatts-of-power/
URLs in this post:
article at the Institution of Engineering and Technology: http://kn.theiet.org/magazine/issues/1018/exascale-supercomputers-1018.cfm
InsideHPC: http://insidehpc.com/2010/12/10/power-consumption-is-the-exascale-gorilla-in-the-room/
Microsoft data center in Chicago: http://www.datacenterknowledge.com/inside-microsofts-chicago-data-center/
Rich Miller: http://www.datacenterknowledge.com/archives/author/richm/
Copyright © 2012 Data Center Knowledge. All rights reserved.