Inside Schneider's Big Bet on Tailored Yet Standard Data Center Blueprints
Schneider Electric’s stand at Data Center World Fall 2014 in Orlando

The data center is becoming more of a mass-producible machine, and Schneider wants to play a role in this progression.

Seeing an opportunity in data center design's gravitational pull toward standardization, and in new, emerging workload categories, Schneider Electric recently redefined its EcoStruxure brand to include tools for data center design rather than simply software for data center management. It is now a platform of patterns and templates for data center designs, based on the class of workloads they’ll be hosting and the customers they’ll be serving.

Simply put, the data center is becoming more of a mass-producible machine, and Schneider wants to play a role in this progression.

EcoStruxure is now a platform for developing enterprise data centers tailored to their respective industry domains, as Steven Carlini, Schneider's senior director for data center global solutions, put it in an interview with Data Center Knowledge. “You’re using the same architecture, but you’re doing separate instances that can be customized for each domain," he explained.

One of those domains is the Internet of Things.

“We’re breaking it out into three levels from a platform perspective,” Carlini continued. First, “we have our connected device level, which is all of the things in the data center you may be monitoring." (This is about IoT in the data center, not the broader understanding of IoT as connected appliances, cars, CCTV cameras, etc.) That includes individual outlets IoT gear plugs into, in-row cooling units, UPS units, chillers, heat exchangers, switchgear, and so on.

These components on the connected device level are coupled with what EcoStruxure calls the edge control level: the on-site software that has traditionally carried the EcoStruxure brand. In an IoT use case, this layer would contain the aggregated data polled from all the connected devices on the network.

“That information ... could be ported up into cloud level,” he continued, “the top layer, which is our apps, analytics, and services layer. You can run predictive analytics on all the equipment, look for trends, combine this data with data from other data centers. The bigger your data pool, the more accurate your predictive analytics are going to be.”
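As a rough illustration of how those three layers stack, consider the following minimal Python sketch. Every name in it (the functions, telemetry fields, and device IDs) is invented for illustration and is not part of any Schneider API: device-level readings are aggregated on site, then pooled across facilities for analysis.

    from statistics import mean

    # Connected device level: each monitored asset reports raw telemetry.
    # (Hypothetical fields; a real device would be polled over the network.)
    def read_device(device_id):
        return {"device": device_id, "temp_c": 24.1, "load_pct": 61.0}

    # Edge control level: on-site software aggregates the polled readings.
    def aggregate_site(device_ids):
        return [read_device(d) for d in device_ids]

    # Cloud level: readings from many facilities are pooled for analytics.
    def fleet_average_temperature(sites):
        temps = [r["temp_c"] for site in sites for r in site]
        return mean(temps)  # a bigger pool gives a steadier baseline

    site_a = aggregate_site(["ups-1", "chiller-1"])
    site_b = aggregate_site(["ups-2", "crac-3"])
    print(fleet_average_temperature([site_a, site_b]))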

At that cloud level, operators can devise specific rule-based alerts based on data compiled from aggregated services. Say, for example, that a five-year-old battery has been operating at a specific temperature, has crossed a threshold for number of discharges, and may need to be replaced within a three-month window or face a 90 percent chance of failure. An operator could tie an automation rule to that event and deploy the rule within a cloud-based service that combines events from all the facilities in a portfolio.
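In code, such a rule might look something like the following sketch. The thresholds, field names, and alert payload are hypothetical stand-ins for the battery example above, not anything Schneider ships:

    from datetime import date

    # Hypothetical rule for the battery scenario described above; every
    # threshold here is illustrative, not a real maintenance guideline.
    def battery_rule(installed, avg_temp_c, discharge_count,
                     max_age_years=5, max_discharges=200, max_temp_c=30.0):
        age_years = (date.today() - installed).days / 365.25
        if (age_years >= max_age_years
                and discharge_count >= max_discharges
                and avg_temp_c >= max_temp_c):
            return {
                "action": "schedule_replacement",
                "window_days": 90,    # replace within three months
                "failure_risk": 0.9,  # e.g., learned from pooled fleet data
            }
        return None  # no alert; the battery is within its thresholds

    alert = battery_rule(installed=date(2012, 6, 1),
                         avg_temp_c=31.5, discharge_count=240)
    if alert:
        print(alert["action"], "within", alert["window_days"], "days")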

The OT Department

If this sounds like the IT model for managing virtual machines and application workloads applied to the realm of data center operations, it's not a coincidence.  Carlini calls this elevated level “OT” — the operations counterpart to IT. And if Schneider has its way, IT and OT may meld into the same “T” in enterprises everywhere.

“For data centers, it’s an integration of what we’re calling IT and OT.  With the EcoStruxure platform you can bring all the monitoring, management, and control of those systems — technical cooling, electrical room, facility level — together under one single platform.”

Organizationally speaking, there may continue to be role divisions between IT and OT personnel for the foreseeable future, he conceded. Technically, however, nothing should prevent the monitoring of physical and virtual data center resources from converging into one cloud service. That way, a single automation framework could manage the active deployment of workloads, such as applications and databases, based on physical conditions.
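What that convergence might look like in practice can be sketched briefly. In the following sketch the rack telemetry and the selection policy are invented; the point is only that the (IT) placement decision consults physical (OT) conditions first:

    # Invented rack telemetry: inlet temperature, UPS health, spare power.
    racks = [
        {"name": "rack-01", "inlet_temp_c": 23.5, "ups_ok": True,  "free_kw": 4.0},
        {"name": "rack-02", "inlet_temp_c": 29.8, "ups_ok": True,  "free_kw": 6.5},
        {"name": "rack-03", "inlet_temp_c": 22.1, "ups_ok": False, "free_kw": 8.0},
    ]

    def place_workload(required_kw, max_inlet_c=27.0):
        # Only racks whose physical conditions are healthy are candidates.
        candidates = [r for r in racks
                      if r["ups_ok"]
                      and r["inlet_temp_c"] <= max_inlet_c
                      and r["free_kw"] >= required_kw]
        # Prefer the coolest candidate to preserve thermal headroom.
        return min(candidates, key=lambda r: r["inlet_temp_c"], default=None)

    print(place_workload(required_kw=3.0))  # -> rack-01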

“We see data centers at Schneider as single entities, even though we understand that there’s different silos within data center operations that we have to deal with," he said. "Schneider is one of the few companies that does the whole IT room, from the outlet all the way up to the building entrance and the medium-voltage switchgear.”

Hyperscale Reconsidered

Schneider does not dispute the notion that an enterprise's data center facilities, scattered throughout the planet, are effectively a single machine that may be automated on a single platform.

In practice today, however, the world's enterprise workloads do not fit a single profile. Classic client/server applications still abound. Carlini invoked enterprise resource planning (ERP) as one example of a workload class that has not evolved much in the past few decades but to which much of a business's internal operations are still bound. Analytics, meanwhile, has been transforming database operations from a warehouse-driven mindset back into a data science operation; culturally, it resembles the computer science of the 1970s more than that of the 1990s, though it is significantly faster and more efficient.

Then there is the new class of containerized, hyperscale, microservices-oriented workloads, often developed by a new class of software engineers using versatile cloud-native languages like Go (or Golang), Clojure, and Scala (the latter two built on the Java virtual machine). Meanwhile, there are still Web applications, the newest of which are composed with another set of tools, including Node.js (a JavaScript runtime), Python, and Ruby.

You may think none of this would matter all that much to a data center operations manager — someone in Carlini’s OT department. However, there are resource consumption profiles emerging for all these application classes. They utilize infrastructure in dissimilar ways.

Now, a typical cloud data center operator might assume that these resource profiles all wash out in the end if their respective workloads cohabit on a multi-tenant server cluster whose infrastructure is adaptable to wildly varying conditions. But what if that’s not the best idea? What if in the end data centers should be tailored to specific classes of workloads?

In other words, what if instead of a grand, unified cloud, a vast stretch of hyperconverged pipelines and a colossal data lake, each class of workload can best be addressed by a data center whose every component — from the make and model of server to the assembly of the switchgear — is custom-tailored to fit?

“That’s exactly what [EcoStruxure] is designed to do,” responded Carlini.  “So you may have some of your internet-giant data centers running these bare metal Open Compute-style servers and storage.  And on those, you may want to monitor utilization or temperature, because those data centers run a lot hotter than legacy-type data centers.  There’s different types of controls you would use for those, as opposed to an edge data center that’s deploying maybe a few hyperconverged systems.  Those may be in a closed box, so you may want to monitor any kind of door alarm or humidity alarm or water sensor.

“You may be taking a completely different approach,” he continued, “but using the same architecture.”
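Carlini's examples amount to one monitoring architecture parameterized by domain. Here is a minimal sketch of that idea, with the sensor names paraphrased from his examples and everything else invented:

    # Same control loop for every domain; only the sensor set differs.
    MONITORING_PROFILES = {
        "hyperscale_open_compute": ["utilization", "temperature"],
        "edge_hyperconverged":     ["door_alarm", "humidity", "water_sensor"],
    }

    def poll(facility_class, read_sensor):
        return {s: read_sensor(s) for s in MONITORING_PROFILES[facility_class]}

    fake_read = lambda sensor: 0.0  # stand-in for a real sensor read
    print(poll("edge_hyperconverged", fake_read))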

Automating Virtual and Physical Together

The hyperscale, microservices-oriented model pioneered by Google, Facebook, and Netflix represents only a tiny fraction, quantitatively, of the world's data centers. That won't be the case for long; there are good arguments that it cannot be. But Carlini noted that individual racks in these hyperscale models have much more variable temperatures, even when they're seated right next to one another.

So with the same adept responsiveness with which an IT manager using a load balancer like NGINX or an orchestrator like Mesosphere's DC/OS can re-situate workloads across server nodes, an OT manager in the EcoStruxure realm could re-partition cooling among the racks, optimizing as necessary for varying levels of heat output.
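To make the analogy concrete, here is a hedged sketch of one naive policy: divide a fixed cooling budget among racks in proportion to each rack's measured heat output, the way a load balancer weights traffic across nodes. All numbers are invented, and real cooling control is far more involved:

    def partition_cooling(rack_heat_kw, total_cooling_kw):
        # Allocate cooling in proportion to each rack's share of total heat.
        total_heat = sum(rack_heat_kw.values())
        return {rack: total_cooling_kw * heat / total_heat
                for rack, heat in rack_heat_kw.items()}

    # Adjacent racks in a hyperscale row can run at very different loads.
    heat = {"rack-01": 12.0, "rack-02": 4.5, "rack-03": 9.5}
    for rack, kw in partition_cooling(heat, total_cooling_kw=30.0).items():
        print(f"{rack}: {kw:.1f} kW of cooling")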

“Hyperconverged is going to be the next game-changing technology,” said Carlini.  “Once they write the software applications to be ported from a more traditional [realm] to hyperconverged systems, you’re going to start seeing more of those as the standard deployment.  But you’re still going to have them running different applications at different times with different criticality levels, even though they’re standardized boxes.”

See also: Incumbents are Nervous about Hyperconverged Infrastructure, and They Should Be

New Workloads Drive Compute to the Edge

Server makers such as Hewlett Packard Enterprise, among many others, have argued that IoT and the increasing volume of multimedia are conspiring to pull compute power out of centralized facilities and toward the edge. But there are two edges to think about: one for the data center, the other for the internet and the broader network. Schneider foresees a concept of “edge” where data center units, greater in number and smaller in size, are drawn toward the point in the network that connects customers to resources.

“We’re seeing a trend of closer-to-the-edge data centers in more urban areas and more mid-sized data centers — 1 to 2 MW — because of the [low] latency that’s required for a lot of these applications. The data that’s being transmitted doesn’t have to be stored forever; only the fresh data, the critical data, needs to be stored.”
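The retention idea in that quote reduces to a simple filter: keep a reading if it is fresh or flagged critical, discard the rest. In this sketch the 24-hour window and the criticality flag are assumptions for illustration:

    from datetime import datetime, timedelta

    def retain(readings, freshness=timedelta(hours=24)):
        # Keep only fresh readings, plus anything flagged critical.
        cutoff = datetime.now() - freshness
        return [r for r in readings if r["ts"] >= cutoff or r["critical"]]

    now = datetime.now()
    readings = [
        {"ts": now,                     "critical": False, "value": 1},  # fresh
        {"ts": now - timedelta(days=3), "critical": True,  "value": 2},  # critical
        {"ts": now - timedelta(days=3), "critical": False, "value": 3},  # dropped
    ]
    print([r["value"] for r in retain(readings)])  # -> [1, 2]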

From the opposite end of the scale, the shift of resources toward the network's edge is being driven by the surge of IoT applications, Carlini said. Here, higher-bandwidth content and faster compute cycles need to be delivered closer to the user.

The result is an emerging concept: the edge data center. If it catches on, the colocation industry at the very least faces the prospect of tremendous competition from a new source: custom-made, rapidly deployed, remotely managed server centers. And if such a new industry does take shape, manageability will become a key criterion in determining its viability.

Which may be why Schneider is getting a jump on this evolutionary shift now.
