First, there were massive mainframes. Then came desktop computers in “towers,” followed by rackmount “pizza box” servers, and ultimately blade servers. What’s next for server design, and how do data center operators prepare for it?
“The equipment is emulating the cloud,” said Don Beaty, the founder of DLB Associates, which built many of Google’s data centers. “It has no shape. It’s constantly changing.”
The future shape of server technology was the focus of a panel at the recent DataCenterDynamics conference in New York. Beaty and other panelists noted that the pace of change is accelerating in the sector.
The drive toward more energy-efficient equipment has altered the data center environment, Beaty said, with a focus not only on using less cooling, but on managing it more precisely. That means data centers are operating at warmer temperatures, and using aisle containment and chassis design to optimize airflow to components that generate heat.
“With this wider temperature range, all bets are off,” said Beaty. “We need to think in terms of the possibility of an entirely different form factor.”
‘Dramatically Different’ Form Factors Possible
Joe Kava, the Senior Director of Data Centers for Google, agrees with that assessment.
“Servers are going to be very different in the future than they are today,” said Kava. “I think servers will look dramatically different. So don’t build a lot of critical infrastructure assuming that things are going to remain the same.”
What does that mean for Google, whose data centers house tens of thousands of servers?
“We have many different programs researching new technology, and analyzing which is going to make it,” said Kava. “It takes 12 to 24 months to build a data center, so what do you build for? What you do is treat the data center as a computer and make it ubiquitous compute.”
Kava said the growth of computing clouds and the huge data centers that support them has forced designers to think differently.
“We’re talking about a holistic approach,” said Kava. “The cloud will help drive some greater efficiencies simply because it’s a more efficient platform. The notion of the rack being the building block is not going to remain the case.”
Some panelists suggested the move toward server density and packed racks could drive a wholesale shift toward design priorities more common in supercomputers and high-performance computing.
“I think there are a lot of people interested in direct liquid cooling,” said Zahl Limbuwala, CEO of Romonet and head of the BCS Data Centre Specialist Group of the British Computer Society. “How that looks is yet to be determined. We’re building a 25-year asset to house assets that are refreshed every 18 to 36 months. I think we’re at a turning point. In five years’ time, what’s in the data center will look entirely different.”