Getting to the True Data Center Cost

Startups emerge to untangle infrastructure costs for business execs

Will it be cheaper to run a particular application in the cloud than to keep it in the corporate data center? Would a colo be cheaper still? Which servers in the data center are running at low utilization? Are there servers the data center manager has forgotten about? Does it make sense to replace old servers with new ones? If it does, which models would be best for the specific applications they run?

These are the kinds of essential questions every data center manager should be asking themselves and their teams every day, if they aren’t already. Together, they can be distilled down to a single ever-relevant question: how much does it cost to run an application?

Answering it is incredibly complex, which is why startups like TSO Logic, Romonet, and Coolan, among others, have sprung up in recent years. Answer it correctly, and the payoff can be substantial: almost no data center runs as efficiently as it could, and there is always room for optimization and savings.

These companies don’t sell data center infrastructure management (DCIM) products, and they don’t sell IT service management (ITSM) tools. While different from each other, they have one thing in common: they focus squarely on cost, reducing the long list of interrelated factors that determine data center cost to numbers that make sense not only to data center managers but also to business executives.

“Really, what we’re selling is savings,” Aaron Rallo, CEO and founder of Vancouver-based TSO Logic, said. Put simply, TSO’s software analyzes as much operational data from a data center as it can access and works out how much it is costing a company to run a certain application on its current infrastructure, and how much it would cost to run it on something else instead.
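To make the comparison concrete, here is a minimal sketch of a per-application cost model of the kind Rallo describes. The model and every figure in it are illustrative assumptions, not TSO’s actual methodology:

```python
# Toy comparison of monthly cost for one application on existing servers
# versus equivalent always-on cloud instances. All rates are hypothetical.

HOURS_PER_MONTH = 730  # average hours in a month

def on_prem_monthly_cost(servers, kw_per_server, price_per_kwh, pue,
                         amortized_hw_per_server):
    """Amortized hardware plus facility-adjusted (PUE) power cost."""
    power = servers * kw_per_server * pue * HOURS_PER_MONTH * price_per_kwh
    return power + servers * amortized_hw_per_server

def cloud_monthly_cost(instances, hourly_rate):
    """Flat hourly rate for instances that run around the clock."""
    return instances * hourly_rate * HOURS_PER_MONTH

# Hypothetical application footprint: 20 servers vs. 20 comparable instances
on_prem = on_prem_monthly_cost(20, 0.35, 0.10, 1.6, 150.0)
cloud = cloud_monthly_cost(20, 0.20)
print(f"on-prem: ${on_prem:,.0f}/month, cloud: ${cloud:,.0f}/month")
```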

“If we find that you have 1,000 VMs that are doing nothing at all, before you go out and buy more licenses, buy more hosts, buy more compute, let’s repurpose the ones that you have,” Rallo said.

TSO is the company behind the well-publicized report, co-authored with Stanford research fellow Jonathan Koomey, which estimated that about 30 percent of servers deployed worldwide weren’t delivering any computing services.

Looking at physical and virtual infrastructure day in and day out, TSO routinely finds that about one-third of deployed servers are completely forgotten, Rallo said. The report estimated that these idle servers together represent about $30 billion in sunk investment.
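As an illustration of the idea only, flagging such “comatose” candidates from utilization telemetry can start as simply as the sketch below. The threshold, observation window, and data shape are all assumptions; TSO’s actual analysis is presumably far richer than a single CPU metric:

```python
# Toy idle-server detector: flag servers whose CPU never exceeded a small
# threshold over the observation window. Cutoff and data are hypothetical.

IDLE_CPU_PERCENT = 2.0  # assumed ceiling for a server "doing nothing"

def find_idle(servers):
    """servers maps a hostname to sampled CPU-utilization percentages."""
    return [host for host, samples in servers.items()
            if samples and max(samples) < IDLE_CPU_PERCENT]

telemetry = {
    "app-db-01": [35.2, 41.8, 28.9],  # clearly busy
    "legacy-07": [0.4, 0.6, 0.3],     # likely comatose
    "batch-12":  [1.1, 0.9, 1.4],     # likely comatose
}
print(find_idle(telemetry))  # ['legacy-07', 'batch-12']
```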

Being Useful to Financial Stakeholders

Reflecting the data center’s changing role within the enterprise, from a necessary cost of doing business to a strategic asset that helps generate revenue, TSO’s user within a customer organization is rarely the data center manager. Most often, it’s a financial stakeholder: a CIO, someone who oversees applications or application provisioning, and sometimes even a CFO, Rallo said.

Being useful to people like that is TSO’s key strength, Jeff Klaus, GM of data center solutions at Intel, a TSO partner, said. The platform presents information in a “reportable format, so it can go beyond the guy who’s running the data center,” he said. “It can go to the COO or the finance individual in a way that he doesn’t have to massage or get an interpretation from the IT guy or the facilities person.”

TSO can collect data from a DCIM software solution underneath, tap directly into CPU power, temperature, and utilization metrics through Intel’s Data Center Manager middleware, analyze data from the virtualization platform, and know who within the organization a VM or an application belongs to using data from ITSM software. The wider the variety of data the platform ingests, the more useful its output will be.

“We start with the applications and work our way down to the physical,” Rallo said. “It’s exceptionally important to have a holistic view of the data center; not just a physical view, and not just an application view.”
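That application-to-physical view can be pictured as a join across the sources listed above. The records and field names below are invented for illustration; they only show how ITSM ownership, virtualization-layer metrics, and host power data might correlate per VM:

```python
# Toy join of three data sources into one per-VM record, standing in for the
# holistic view described above. Every field and value here is hypothetical.

itsm_owners = {"vm-101": "payments team", "vm-102": "analytics team"}
vm_metrics = {"vm-101": {"host": "esx-3", "cpu_pct": 4.2},
              "vm-102": {"host": "esx-7", "cpu_pct": 61.0}}
host_power = {"esx-3": 310, "esx-7": 295}  # watts, e.g. from DCIM/Intel DCM

def holistic_view(vm_id):
    m = vm_metrics[vm_id]
    return {"vm": vm_id,
            "owner": itsm_owners.get(vm_id, "unknown"),
            "host": m["host"],
            "cpu_pct": m["cpu_pct"],
            "host_watts": host_power[m["host"]]}

for vm in vm_metrics:
    print(holistic_view(vm))
```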

Reaching Into the Cloud

The company recently struck an agreement with Amazon Web Services so that its platform can add data from one more essential layer of modern enterprise infrastructure to the mix: the public cloud. AWS will be its first cloud integration, but the plan is to add Microsoft Azure, the second-biggest public cloud, and then smaller, more specialized cloud providers.

The data set a platform like TSO can draw from AWS is “pretty extensive,” Rallo said. The cloud offers numerous APIs that expose utilization levels and essentially everything else TSO gets from an on-premises virtualization stack.
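How TSO’s AWS integration works under the hood isn’t public, but the kind of utilization data in question is readily available from AWS’s own monitoring service. A minimal sketch pulling two weeks of CPU utilization from CloudWatch via boto3 (the instance ID is a placeholder):

```python
# Fetch hourly average CPU utilization for one EC2 instance over 14 days.
# Illustrative only; not how TSO's integration is actually built.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=start,
    EndTime=end,
    Period=3600,              # one datapoint per hour
    Statistics=["Average"],
)
points = resp["Datapoints"]
avg_cpu = sum(p["Average"] for p in points) / max(len(points), 1)
print(f"14-day average CPU utilization: {avg_cpu:.1f}%")
```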

Not a Replacement for DCIM

Software by TSO and the others doesn’t necessarily replace DCIM or ITSM software. As described, TSO uses both to enrich the data set it analyzes. The company isn’t interested in developing the capabilities to monitor power consumption of cooling units or humidity levels on the data center floor, for example.

The DCIM integration opportunity is important, however, not least to the DCIM vendors themselves. TSO has DCIM partnerships with Siemens and others. “They want to tightly couple the physical to the application,” Rallo said.

Most DCIM solutions strive for a holistic view of the data center, but no matter how complete your view of the physical infrastructure underneath is, it can never be called truly holistic if you don’t have visibility into the software that infrastructure supports.
