Jason Waxman, Intel VP and general manager of the chipmaker’s Cloud Platforms Group, has been on the board of the Open Compute Project since its inception in 2011. He has watched OCP grow from an open source data center project driven almost entirely by Facebook and a handful of Asian design manufacturers into a vibrant ecosystem whose members include Microsoft, Apple, and Google, as well as some of the biggest IT and data center infrastructure vendors, such as IBM, Dell, HPE, Cisco, and Schneider Electric, among many other companies.
While the list of the non-profit’s members and sponsors has grown over the past five years, the variety of OCP hardware users hasn’t expanded nearly as quickly. In addition to Facebook, Microsoft, and Rackspace, OCP-style gear has enjoyed some adoption by a handful of big financial services firms and, more recently, interest from some of the big telcos. But the pool of buyers is still limited to big users with massive data centers and very deep pockets. There’s little evidence of adoption by smaller enterprise IT shops.
In a recent interview, we asked Waxman why OCP hardware hasn’t seen much adoption by those smaller IT organizations and what has to happen for that to change:
Data Center Knowledge: Is there a compelling story in Open Compute for traditional enterprise IT shops?
Jason Waxman: There’s the business need, and then there’s the barrier to entry. If the barrier to entry is high, it’s very difficult for smaller companies to go invest in that infrastructure.
It comes down to what they do and how core the IT infrastructure is to their business. Companies that are doing engineering services, companies in healthcare, companies that are their own SaaS provider – there are a lot of [companies] that aren’t huge but for whom owning their infrastructure is a requirement.
One company, a medium-size business, they do SaaS. One of the reasons they need their own infrastructure is control. They have to meet certain compliance requirements for their customers that maybe they can’t get in a bigger general-purpose cloud. They may need to decide, “Hey look, if there’s a security patch, I don’t want somebody else telling me that I have to have mandated downtime; I want to be able to make that decision on my own.”
And then there’s just the economics of it. I can go and find a lot of general-purpose things, and there’s some benefits to that, but if I really need to tune my infrastructure to what I do, then having control of the hardware can be more efficient. So there are definitely a lot of reasons, and we see people moving both ways.
We see companies that are going full-scale into the cloud [like Netflix]. And I see companies going the other direction as well, saying I’ve got a big-enough scale now and I need to manage my own infrastructure, and it will be lower-cost and more efficient for me in the long run to have something that really suits my needs.
DCK: What has to happen before smaller enterprises start deploying Open Compute or something similar?
A couple of things have to happen. The hardware building blocks for compute at scale need to be available, and they need to be efficient. Right now, the divide between the standard off-the-shelf system and what you can get, for example, through Open Compute, or what large cloud services are deploying, is just huge. If through Open Compute there’s greater access to more efficient solutions, then that brings down the overall cost to deploy.
DCK: Is it still difficult to source OCP gear?
It is. Even within Open Compute, we’ve had a lot of fragmentation. Some of the solutions have been optimized for the way Facebook does it, or the way Microsoft does it, or the way that Rackspace does it, so there are these variants, and that’s good because it’s highly optimized for their solutions, but to get an ecosystem going, you need more standardization of those building blocks. Otherwise, companies that want to participate in this ecosystem go, “Well, if I design something, how many other customers am I going to get?”
So you have this barrier for other vendors wanting to participate in the ecosystem, and when the vendors aren’t participating in the ecosystem, you’ve got fewer choices. Then you go back to the end user and the end user says, “Well, I don’t see any choice.”
I think the way you break the cycle is by driving more efficient building blocks. More standardization of the building blocks that allow more companies to participate in that ecosystem. Then you’ve got more places where you can buy around the world, you’ve got support, it’s easier for vendors to justify investment.
DCK: But major OEMs have been on board with OCP and have brought solutions based on the designs to market. Wouldn’t that be an example of the vicious cycle being broken?
When you peel the onion a little bit – and I think it’s well-intentioned – many of the products have been sort of derivatives of standard products under a kind of Open Compute-inspired umbrella. But it lacks some of the consistency. So I’m still making a variety of choices: A or B or C or D, at the end of the day, and each one of them has trade-offs, versus saying, “I know that I want this, and now I can find different sources or multiple sources to buy that type of product.” That may seem like a subtle difference, but I think it’s a crucial one to really getting the Open Compute ecosystem going.
That said, the number of systems being deployed through Open Compute has been growing. I think we’re really starting to see that turn.