My 7-year-old daughter anxiously anticipates seeing who will win the top spot on “Dancing with the Stars.” I know the feeling – twice a year I revel in reading about the fastest and most powerful supercomputers in the world. Supercomputers are built at enormous scale to tackle some of the most compute-intensive applications in science.
The Top500 list for November 2009 features two supercomputers in the petaflop performance range. A petaflop is one quadrillion floating point operations per second. The Roadrunner system at Los Alamos was the world’s first petaflop supercomputer in 2008, but it was edged out of the top spot in the latest round by the Department of Energy’s Jaguar supercomputer, which posted a 1.75 petaflop Linpack benchmark.
A petascale computer is one capable of performance in excess of one petaflop; the term can also refer to storage capacity in excess of one petabyte. The potential of petascale computing has been discussed since 2006, as shown by this Google News timeline and this Google Trends depiction of search interest in the term.
One quadrillion calculations per second would seem to be enough power to tackle the most demanding applications and scientific queries, but Network World reports that discussions at the November SC09 supercomputing conference focused on the need for exascale computers. Examples of applications requiring exaflop-level calculations include high-resolution climate models and energy research such as the International Thermonuclear Experimental Reactor. An exaflop is one quintillion floating point operations per second.
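To keep these prefixes straight, here is a quick back-of-the-envelope sketch in plain Python; the only inputs are the figures already cited above (the 1.75 petaflop Jaguar result), and the ratio is simple arithmetic, not a sourced claim:

```python
# FLOPS scale: each prefix is 1,000x the previous one.
TERAFLOP = 10**12   # one trillion floating point operations per second
PETAFLOP = 10**15   # one quadrillion
EXAFLOP  = 10**18   # one quintillion

# Jaguar's 1.75 petaflop Linpack result, in raw operations per second:
jaguar_flops = 1.75 * PETAFLOP
print(f"{jaguar_flops:.2e} flop/s")   # 1.75e+15

# An exaflop machine would be roughly 571x faster than Jaguar's Linpack number:
print(round(EXAFLOP / jaguar_flops))  # 571
```

In other words, the exascale systems being discussed at SC09 imply more than a 500-fold jump over the fastest machine on the current list.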
HPCwire ran an article last week on the strategy and technical specifications IBM is pondering to reach the exascale level of performance. Also last week, CNET News discussed IBM’s new Power7 chip and the Blue Waters supercomputer, a collaboration between IBM, the University of Illinois, and NCSA (National Center for Supercomputing Applications) that will be the largest publicly accessible supercomputer upon its launch in 2011.
Heading Towards 16 Petaflops
Blue Waters will connect 16,384 IBM Power7 chips for a total theoretical performance of 16 petaflops. The Power7 integrates eight processing cores in one chip package, and each core can execute four threads, so a single chip presents 32 hardware threads. The design combines IBM’s flagship POWER chip lineage with elements of the Cell processor that was deployed in the Roadrunner system at Los Alamos.
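The scale of those numbers is easier to appreciate when multiplied out. The sketch below uses only the figures stated above (16,384 chips, 8 cores per chip, 4 threads per core, 16 petaflops peak); the per-chip and per-core values are simply derived from them, not separately sourced specifications:

```python
# Figures as reported for Blue Waters; derived values follow by arithmetic.
chips = 16_384
cores_per_chip = 8
threads_per_core = 4

total_cores = chips * cores_per_chip             # 131,072 cores
total_threads = total_cores * threads_per_core   # 524,288 hardware threads

peak_flops = 16 * 10**15                         # 16 petaflops theoretical peak
per_chip = peak_flops / chips                    # ~0.98 teraflops per chip
per_core = per_chip / cores_per_chip             # ~122 gigaflops per core

print(total_cores, total_threads)
print(f"{per_chip / 1e12:.2f} TF/chip, {per_core / 1e9:.1f} GF/core")
```

So each Power7 package would need to sustain nearly a teraflop for the system to hit its 16 petaflop theoretical peak – a striking per-socket figure for 2011-era silicon.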
We’ve previously highlighted some of the challenges presented by the trend towards multi-core chip complexity. The aforementioned HPCwire article points out that focus is shifting to the I/O and memory subsystems, and quotes IBM’s vice president of Deep Computing, Dave Turek, as saying “for exascale systems, our calculations are that the memory subsystem, left to its own devices, would be consuming on the order of 80 megawatts of power… the power draw by the system interconnect would be roughly the same.”
Power draw and energy efficiency are what make this project interesting. Data Center Knowledge first reported in January 2008 that EYP Mission Critical Facilities (now part of HP) would lead the design effort for the Blue Waters petascale computing building. In an interview on the NCSA web site, IBM Fellow Ed Seminaro pointed out that IBM was able to collaborate with the University of Illinois and NCSA before the building had even reached the blueprint stage, allowing the team to customize the design and server racks to the system’s specific requirements.
Skipping the Traditional UPS
Seminaro also elaborates on infrastructure features such as water-cooled racks, 5,400 tons of chilled water and 24 megawatts of power. Dave Ohara of the Green Data Center Blog picked up on the story and noted that the building cost works out to $3 million per megawatt, and that the facility will apparently operate without a traditional uninterruptible power supply (UPS). The Illinois Petascale Computing Facility web site states that the facility will “take advantage of the campus’ highly reliable electricity supply, avoiding the need for the standard back-up Uninterruptible Power Supply.”
The 88,000 square foot facility will include 20,000 square feet of data center space and 10,000 square feet for other infrastructure. The facility will cost $72.5 million, while the supercomputer itself will cost $208 million to build and is being funded by a grant from the National Science Foundation. The Daily Illini has an article on the supercomputer, its budgets, and the $12 million the University of Illinois will chip in toward the $72.5 million facility.
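As a quick sanity check, the $3 million per megawatt figure lines up with the facility numbers reported above; this is just the two published figures divided against each other:

```python
facility_cost_musd = 72.5   # facility cost, millions of USD
power_mw = 24               # megawatts of power

cost_per_mw = facility_cost_musd / power_mw
print(f"${cost_per_mw:.1f}M per MW")   # $3.0M per MW
```

The reported $3 million per megawatt and the $72.5 million facility cost are consistent with each other at 24 megawatts.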
Almost equally interesting for the Internet industry is another computing architecture of massive scale: the paper published earlier this year by Google’s Urs Hölzle and Luiz André Barroso on the warehouse-scale computer. The 120-page paper elaborates on many important concepts for warehouse-scale computing, and discusses how the data centers of companies like Google, Amazon, Yahoo and Microsoft differ from traditional ones: they belong to a single organization, use a relatively homogeneous hardware and system software platform, and share a common systems management layer.
Irving Wladawsky-Berger wrote what is my absolute favorite blog post of 2009, likening today’s data center to the Cambrian Age, a geological period that marked a profound change in life on Earth. The data center Cambrian Age may indeed be upon us.