How the Data Center Has Evolved to Support the Modern Cloud

There are many platforms, tools and solutions that help facilitate data center usability in conjunction with the cloud. Here's a look at some that show just how far we've come with data center and cloud technology.

Data Center Knowledge

March 21, 2013


There’s little argument among IT and data center professionals that over the past few years, there have been some serious technological movements in the industry. This doesn’t only mean data centers. More computers, more devices, and the strong push behind IT consumerization have forced many professionals to rethink their designs and optimize for this evolving environment.

When cloud computing came to the forefront of the technological discussion, data center operators quickly realized that they would have to adapt or be displaced by more agile providers.

The changes have come in all forms, both in the data center itself and how data flows outside of its walls. The bottom line is this: If cloud computing has a home, without a doubt, it’s within the data center.

There are several technologies that have helped not only with data center growth, but with the expansion of the cloud environment. Although there are many platforms, tools and solutions that help facilitate data center usability in conjunction with the cloud, the ones below show just how far we’ve come from a technological perspective.

  • High-density computing. Switches, servers, storage devices, and racks are all now being designed to reduce the hardware footprint while still supporting more users. Let’s put this in perspective. A single Cisco UCS chassis is capable of 160Gbps of throughput. A single B200 M3 blade can hold two 8-core Xeon processors (16 processing cores) and 768GB of RAM, and each blade can also support 2TB of storage and up to 32GB of flash memory. Place eight of these blades into a single UCS chassis and you have 128 processing cores, 6TB of RAM, and 16TB of storage. That means a lot of users, a lot of workload and plenty of room for expansion, with similar density gains in logical storage segmentation and other computing devices.
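The chassis arithmetic above can be sanity-checked with a quick sketch. The per-blade figures mirror the article's example; treat them as illustrative rather than vendor-verified specs:

```python
# Back-of-envelope aggregate capacity for a fully populated blade chassis.
# Figures follow the article's Cisco UCS B200 M3 example; they are
# illustrative, not authoritative vendor specs.

CORES_PER_CPU = 8          # 8-core Xeon processor
CPUS_PER_BLADE = 2
RAM_GB_PER_BLADE = 768
STORAGE_TB_PER_BLADE = 2
BLADES_PER_CHASSIS = 8

cores = CORES_PER_CPU * CPUS_PER_BLADE * BLADES_PER_CHASSIS
ram_tb = RAM_GB_PER_BLADE * BLADES_PER_CHASSIS / 1024  # GB -> TB
storage_tb = STORAGE_TB_PER_BLADE * BLADES_PER_CHASSIS

print(f"{cores} cores, {ram_tb:.0f}TB RAM, {storage_tb}TB storage")
# → 128 cores, 6TB RAM, 16TB storage
```

Doubling the figures for a new processor generation takes one constant change, which is exactly why density keeps climbing while the footprint shrinks.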

  • Data center efficiency. To support larger numbers of users and a greater cloud environment, data centers had to restructure some of their efficiency practices. Whether through a better analysis of their cooling capacity factor (CCF) or a better understanding of power utilization, modern technologies are allowing the data center to operate more optimally. Remember, high-density computing may reduce the amount of hardware, but the machines replacing older ones can require more cooling and energy. Data centers are now focused on lowering their power usage effectiveness (PUE) and are looking for ways to cool and power their environments more efficiently. As the cloud continues to grow, there will be more emphasis on placing larger workloads within the data center environment.
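The PUE metric mentioned above is simply total facility power divided by the power actually delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch, using hypothetical numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT
    equipment power. 1.0 is the ideal; everything above it is overhead
    (cooling, power distribution, lighting)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,500 kW total draw, 1,000 kW reaching IT gear.
print(round(pue(1500, 1000), 2))  # → 1.5
```

Lowering PUE means shrinking the gap between the two numbers, which is why cooling and power distribution get so much attention.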

  • Virtualization. Virtualization has helped reduce the amount of hardware within a data center. However, we’re no longer just discussing server virtualization. New types of technologies have taken efficiency and data center distribution to a whole new level. Aside from server virtualization, IT professionals are now working with storage virtualization, user virtualization (hardware abstraction), network virtualization, and security virtualization. All of these technologies strive to lessen the administrative burden while increasing efficiency and resiliency and improving business continuity.

Virtual appliances, for example, can be placed at various points within the data center to help control data flow and further secure an environment.

  • WAN technologies. The wide area network has helped the data center evolve by bringing facilities “closer together.” Fewer hops and more connections are becoming available to enterprise data center environments, where administrators are able to leverage new types of solutions to create an even more agile infrastructure. The capability to dedicate massive amounts of private bandwidth between regional data centers has proven to be a huge factor: data center resiliency, recovery and manageability have become a little easier because of these new types of WAN services, and site-to-site replication of data and massive systems is now happening at a much faster pace. Big data has also brought new developments that help large data centers quantify and effectively distribute enormous data sets. Projects like the Hadoop Distributed File System (HDFS) are helping data center operators realize that open-source technologies are powerful engines for data distribution and management.
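To see why distribution and replication matter when sizing a system like HDFS, consider a rough sketch. The factor-of-3 default is HDFS's standard replication setting; the dataset size below is hypothetical:

```python
# Rough sizing sketch for a replicated distributed file system.
# HDFS replicates each block 3 times by default; the 100TB dataset
# is a hypothetical example.

def raw_storage_needed_tb(dataset_tb: float, replication: int = 3) -> float:
    """A system like HDFS stores every block `replication` times across
    the cluster, so raw capacity must cover the dataset times that factor."""
    return dataset_tb * replication

print(raw_storage_needed_tb(100))     # 100TB dataset → 300.0TB raw capacity
print(raw_storage_needed_tb(100, 2))  # lower replication → 200.0TB raw
```

The same replication that inflates storage requirements is what makes site-to-site resiliency possible: losing one node, or even one facility, leaves intact copies elsewhere.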

  • Distributed data center management. This is, arguably, one of the strongest signs of how far the data center has evolved to support the modern cloud. Original data center infrastructure management (DCIM) solutions usually focused on a single data center without much visibility into other sites. Now, DCIM has evolved to support a truly global data center environment. In fact, new terms are being used to describe this new type of platform. Some have called it “data center virtualization,” the abstraction of the hardware layer within the data center itself: managing and fully optimizing processes running within one data center and then replicating them to other sites. In other cases, a new type of management solution is starting to take form: the data center operating system. The goal is to create a global computing and data center cluster capable of providing business intelligence, real-time visibility and control of the data center environment from a single pane of glass.
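The “single pane of glass” idea boils down to rolling per-site metrics up into one global view. A toy sketch of that roll-up, where the site names and figures are entirely hypothetical and real DCIM platforms expose far richer telemetry:

```python
# Toy "single pane of glass": aggregate per-site DCIM metrics into one
# global summary. Site names and numbers are hypothetical.

sites = {
    "nyc-01": {"it_load_kw": 950, "total_kw": 1500, "vms": 4200},
    "lon-02": {"it_load_kw": 700, "total_kw": 980,  "vms": 3100},
    "sgp-03": {"it_load_kw": 400, "total_kw": 640,  "vms": 1800},
}

def global_view(sites: dict) -> dict:
    """Collapse every site into one summary, including fleet-wide PUE."""
    total_kw = sum(s["total_kw"] for s in sites.values())
    it_kw = sum(s["it_load_kw"] for s in sites.values())
    return {
        "sites": len(sites),
        "vms": sum(s["vms"] for s in sites.values()),
        "fleet_pue": round(total_kw / it_kw, 2),
    }

print(global_view(sites))
# → {'sites': 3, 'vms': 9100, 'fleet_pue': 1.52}
```

The hard part in practice is not the arithmetic but collecting consistent, real-time telemetry from every facility, which is exactly what the emerging data center operating systems aim to do.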

The conversation has shifted from central data points to a truly distributed data center world. Our information is now heavily replicated over the WAN and stored across numerous data center locations. Remember, much of this technology is still new, still being developed, and only now beginning to see some standardization. That means best practices and thorough planning should never be skipped. Even large organizations sometimes find themselves in cloud conundrums; those that experienced the recent Microsoft Azure or Amazon AWS outages are certainly thinking about how to make their environments more resilient.

The use of the Internet, as well as various types of WAN services, is only going to continue to grow. There are now even cloud API models striving to unify cloud environments and allow for improved cloud communication. More devices are requesting access to the cloud, and some of these are no longer just your common tablet or smartphone. Soon, homes, entire businesses, cars, and other daily-use objects will be communicating with the cloud. All of this information has to be stored, processed and controlled. This is where the data center steps in and continues to help the cloud grow.

