There are two principal classes of data center customers. First, there are service providers, whose consumption patterns are relatively rigid and whose requirements are spelled out in their SLAs. Second, there are enterprises, whose utilization and resource usage patterns can be all over the map, due in large part to the cloud service delivery platforms upon which they rely.
Should a data center provider compartmentalize its operations to serve the needs of both customer classes separately? Or should it instead implement a single design that's flexible, elastic, and homogeneous enough to address both classes, even if that means deploying more sophisticated configuration management and more hands-on administration?
“In a multi-tenant world, you design for the latter,” responded Dave Leonard, ViaWest’s chief data center officer. “And even in a single-tenant world, I’m convinced that it’s the wrong answer to go for the former.”
Leonard will explain in detail how cloud computing consumption patterns have been affecting data center design this Thursday at the Data Center World Conference in New Orleans.
Many of the major data center providers in today's market are inclined to center their design efforts around one big template: for instance, a 10,000 square foot, single-tenant hall with 1,100 kW of UPS power, he said. Realistically, Leonard argued, it isn't practical for such a provider to make that facility multi-tenant.
More on companies with flexible-capacity data center design ideas:
- Modular Cooling System Enables On-Demand Data Center Capacity
- ICTroom Unchains Capacity from Size in Modular Data Centers
“So say you get a software-as-a-service company. They can only buy one thing: 10,000 square feet and 1100 kW. And on day one, that might fit their needs perfectly, or maybe they can architect their application to where that’s perfect. But what happens when they re-architect their application and their hardware, and now they consume double the watts per square foot?
“Well, they’ve just stranded half of that space,” Leonard answers himself. “Who pays for that space that’s stranded? Well, they have to pay for it, because there’s no flexibility there.”
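Leonard's arithmetic is easy to check. A quick sketch, using the hypothetical 10,000 square foot, 1,100 kW hall from his example (the figures are illustrative, not a real ViaWest facility):

```python
# Stranded-space arithmetic for a fixed-template data hall.
HALL_SQFT = 10_000   # floor space of the single-tenant hall
HALL_KW = 1_100      # UPS power provisioned for that hall

def usable_sqft(watts_per_sqft: float) -> float:
    """Floor space the hall's UPS power can actually support at a
    given power density, capped at the physical floor area."""
    return min(HALL_SQFT, (HALL_KW * 1_000) / watts_per_sqft)

day_one = HALL_KW * 1_000 / HALL_SQFT       # 110 W/sq ft: a perfect fit
rearchitected = usable_sqft(2 * day_one)    # tenant doubles its density
stranded = HALL_SQFT - rearchitected

print(f"Usable floor space at 2x density: {rearchitected:,.0f} sq ft")
print(f"Stranded floor space: {stranded:,.0f} sq ft")
```

At double the power density, the hall's 1,100 kW can only feed 5,000 of its 10,000 square feet: exactly the "stranded half" Leonard describes, and the tenant is still paying for all of it.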
Now, certain well-known data center customers — Leonard cites Akamai as one example — are moving from a 12-15 kW per rack power usage profile down to about 9 kW/rack. Service providers are capable of making such deliberate changes to their applications to enable this kind of energy efficiency.
Suppose a hypothetical SP customer of this same data center is inspired by Akamai, re-architects its application, and lowers its power consumption. “Well, now they can’t use the power that’s in that space,” argues Leonard.
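The opposite shift strands power instead of space. A sketch of that case, using the 12 kW-to-9 kW per-rack move Leonard cites (the rack count is a made-up illustration):

```python
# Power stranded when a tenant lowers its per-rack draw after an
# efficiency re-architecture. Figures are illustrative assumptions.
racks = 80
provisioned_kw_per_rack = 12.0   # what the space was built to deliver
new_kw_per_rack = 9.0            # draw after the re-architecture

stranded_kw = racks * (provisioned_kw_per_rack - new_kw_per_rack)
print(f"Stranded UPS power: {stranded_kw:.0f} kW")
```

In a rigid design, that 240 kW of provisioned UPS capacity is tied to floor space the tenant already occupies, so no one else can buy it.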
“Creating space where power and cooling are irretrievably tied to the floor space that is being delivered on is a really bad idea. When the use of that floor space, power, and cooling changes over time — and there’s a dozen dimensions that can cause it to change — those data centers are rigid and inflexible in their ability to react to those changes.”
Yes, cloud application architectures have bifurcated the market for data center facilities. But the phenomenon driving that split is essentially a single trend: customers' consumption profiles change. Leonard believes a facilities or colocation provider should engineer adaptability into its design so it can adjust what it offers customers as those profiles evolve.
Like many data center providers, ViaWest is noticing a sharp uptick in what Leonard calls “Amazon graduates”: SaaS and IaaS customers who were either born in the cloud or migrated to the public cloud when it was cost-effective, but found themselves moving back off once their consumption profiles evolved past that cost-effective point.
“They realize, especially as they end up with a lot of data on those clouds,” said Leonard, “that it becomes uneconomic at a certain scale. It becomes more economic to take that back and move it into a private cloud that is dedicated to them, or move it back onto their own hardware [with] co-location.”
These enterprises are spinning up applications through first-generation virtual machines, so they're relying on environments such as VMware vSphere and OpenStack to provide layers of abstraction between their applications and the hardware hosting them. It's these abstraction layers, said Leonard, that separate enterprise customers from service provider customers, who may in turn be providing SaaS, PaaS, and IaaS platforms for their own customers, and who may need more direct, hands-on tools for optimizing their resource consumption profiles in real time.
In both cases, however, he explained, the variables that make up these consumption profiles are identifiable, manageable, and, in a best-case scenario, adaptive.
“I don’t say that there’s a cloud data center,” ViaWest’s CDCO told us, “and you build a cloud data center in a particular way. There’s data centers that are able to adapt to changing needs — some driven by cloud users, some driven by SaaS or IaaS users, some driven by enterprises as they change over time. There’s characteristics that all these different users drive into the physical design of their data centers, that are more important to accommodate now than was the case five or ten years ago.”
Dave Leonard will explain in detail his firm’s methodology for providing adaptable data center facilities platforms, at 8:00 a.m. Central Time Thursday, September 15, in Room R209 at Data Center World, presented at the Morial Convention Center in Downtown New Orleans. He’ll also be moderating a panel session, “