Data Center Designs for Evolving Hardware

This is the second article in the DCK Executive Guide to Data Center Design series.

Current designs for traditional enterprise-type data centers aren’t necessarily flexible enough for the myriad of newer devices coming their way. IT hardware is beginning to morph into different form factors, which may involve non-standard physical configurations as well as unconventional cooling and power schemes. This does not necessarily mean that a traditional design will not work in the near future; however, long-term IT systems planning must be evaluated to understand the potential impact on the physical infrastructure of the data center facility. Just as the widespread use of blade server technology and virtualization had a radical impact on the cooling systems of older data centers, other hardware and software developments may also begin to influence the physical design requirements and should not be overlooked.

Server Architecture
Unique business models can also have an impact on IT systems and therefore should be considered when designing a new data center. For example, while the x86 architecture has been, and still is, the dominant general-purpose processor platform for more than two decades, major IT manufacturers have launched a new generation of highly scalable servers that use low-power processors originally designed for smartphones and tablets. One major vendor recently released a modular server system that it claims can pack more than 2,000 low-power processors into a single rack and deliver the same overall performance as eight racks of its own x86-based servers for certain hyper-scale tasks, such as web-server farms. This architecture may not be on your IT roadmap today, but it may need to be considered as a possibility in the foreseeable future, and its potential impact should not be ignored.
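
To gauge what such a consolidation claim means for the facility, a simple back-of-envelope comparison of per-rack power and floor space can help. The sketch below uses entirely hypothetical node counts and wattages (not vendor figures); substitute numbers from your own IT roadmap.

```python
# Back-of-envelope density comparison for the consolidation claim above.
# ALL figures are hypothetical placeholders, not vendor data.

LOW_POWER_NODES_PER_RACK = 2000   # assumed low-power processors per rack
LOW_POWER_WATTS_PER_NODE = 8      # assumed watts per processor, incl. fabric share

X86_SERVERS_PER_RACK = 40         # assumed 1U x86 servers per rack
X86_WATTS_PER_SERVER = 350        # assumed watts per x86 server
X86_RACKS_REPLACED = 8            # consolidation ratio cited for web-farm workloads

low_power_rack_kw = LOW_POWER_NODES_PER_RACK * LOW_POWER_WATTS_PER_NODE / 1000
x86_total_kw = X86_SERVERS_PER_RACK * X86_WATTS_PER_SERVER * X86_RACKS_REPLACED / 1000

print(f"Low-power rack load: {low_power_rack_kw:.0f} kW in 1 rack")
print(f"x86 equivalent load: {x86_total_kw:.0f} kW across {X86_RACKS_REPLACED} racks")
print(f"Floor space ratio:   {X86_RACKS_REPLACED}:1 in favor of the dense rack")
```

With these assumed figures the total load drops sharply, but it all lands in a single rack rather than being spread across eight, and that shift in per-rack power and cooling density is exactly what the facility design must anticipate.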

The IT equipment landscape is also changing, and manufacturers’ product lines are becoming more encompassing and fluid. Major competing vendors are crossing traditional boundaries, and the lines of separation between server, storage and network are becoming blended and blurred. This can change the layout and location of equipment (moving away from the previous island-style layouts), which in turn affects the interconnecting backbone structured cabling (migrating from copper to fiber to meet bandwidth demands). This needs to be considered and discussed by the facility and IT design teams.

IT hardware physical forms are changing as well. In an effort to become more energy efficient while delivering ever higher computing performance at greater densities, even liquid-based cooling is becoming a mainstream possibility. For example, while we have previously discussed broader operating temperatures and the greater use of “free cooling” in the most recent version of the ASHRAE TC 9.9 Expanded Thermal Guidelines (see part 3 - Energy Efficiency), that version also contains a set of standards for water-cooled IT equipment, defined as classes W1-W5.
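
Where water-cooled equipment does enter the picture, one of the first facility questions is how much water flow a rack of a given heat load will need. The short sketch below applies the basic heat-balance relationship Q = ṁ · cp · ΔT; the rack load and temperature rise are assumed example values, not ASHRAE requirements.

```python
# Sketch of a facility-water sizing check for liquid-cooled racks.
# Rack load and delta-T are assumed example values, not ASHRAE figures.

RACK_HEAT_LOAD_KW = 40.0   # assumed heat rejected to water by one rack
SUPPLY_TEMP_C = 45.0       # assumed warm-water supply temperature
RETURN_TEMP_C = 55.0       # assumed return temperature
CP_WATER = 4.186           # specific heat of water, kJ/(kg*K)

delta_t = RETURN_TEMP_C - SUPPLY_TEMP_C
flow_kg_per_s = RACK_HEAT_LOAD_KW / (CP_WATER * delta_t)   # Q = m_dot * cp * dT
flow_l_per_min = flow_kg_per_s * 60                        # roughly 1 L of water per kg

print(f"Required flow: {flow_kg_per_s:.2f} kg/s (about {flow_l_per_min:.0f} L/min) per rack")
```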

These water-based standards outline “cooling” systems that can harvest waste heat from IT equipment and deliver hot water to be used to heat buildings. The Green Grid has also addressed this with the Energy Reuse Factor (ERF), a metric that identifies the portion of energy exported for reuse outside the data center. This type of water-cooled IT hardware may not be a mainstream reality for every operation, but the fact that it has been incorporated into the most recent ASHRAE guidelines and addressed by The Green Grid makes it a foreseeable option for hyper-scale or high-performance computing, and it may eventually become more widespread in mainstream data centers.
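
The ERF itself is a straightforward ratio: the energy exported for reuse divided by the total energy the data center draws. A minimal sketch, with assumed annual kWh figures purely for illustration:

```python
# The Green Grid's Energy Reuse Factor (ERF):
#   ERF = energy exported for reuse / total data center energy
# The kWh values below are assumed example figures only.

total_facility_kwh = 1_000_000   # assumed annual energy drawn by the data center
reused_kwh = 150_000             # assumed heat exported as hot water for building heating

erf = reused_kwh / total_facility_kwh
print(f"ERF = {erf:.2f}  (0 = no reuse, 1.0 = all energy reused outside the data center)")
```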

Moreover, there is a trend toward open source hardware (such as Open Compute), similar in nature to open source software. One need only look at the success of Linux, which was originally developed as an open source “freeware” alternative to UNIX (at the time the “gold standard” for enterprise-class organizations); Linux is now considered a reliable mainstream operating system for mission-critical applications. Open Compute makes its hardware designs publicly available, and these can be used as the basis for a blueprint for open source computer hardware (see part 5 - Custom Data Centers).

Storage Architecture
Storage demands have soared, both in total volume and in the speed at which data must be accessed and searched. Concurrent with that demand, Solid State Drives (SSDs) have come to the forefront as the preferred, but more expensive, first-level storage technology, due to their significantly higher read-write speeds and lower power use. SSD prices have come down significantly, and SSDs will soon become the dominant form of first-level storage, with slower spinning disks as the second level in the storage hierarchy. Moreover, SSDs can operate over a much wider environmental envelope (32-140°F) than traditional spinning hard disks. This will lower data center cooling requirements and needs to be considered as part of the long-term strategy in the data center design.
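
For reference, that 32-140°F envelope is 0-60°C, well above typical cold-aisle supply temperatures. A trivial conversion sketch (the supply-air figure is an assumed example, not an ASHRAE recommendation):

```python
# Convert the SSD operating envelope cited above (32-140 F) to Celsius and
# compare it against an assumed cold-aisle supply temperature.

def f_to_c(f):
    return (f - 32) * 5 / 9

ssd_min_c, ssd_max_c = f_to_c(32), f_to_c(140)
assumed_supply_air_c = 25   # assumed example supply temperature, not an ASHRAE figure

print(f"SSD envelope: {ssd_min_c:.0f}-{ssd_max_c:.0f} C")
print(f"Headroom above a {assumed_supply_air_c} C supply: {ssd_max_c - assumed_supply_air_c:.0f} C")
```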

Network Architecture
Although the IT network fabric architecture is not directly part of designing the data center facility, the structured cabling and network equipment it requires must be taken into account based on the needs of the IT end user, rather than arbitrarily assumed or surmised by the data center designer.

Data transmission demands and speeds have continued to increase astronomically. Over the last 20 years we have gone from 4/16 Mbps Token Ring, to 10 and 100 Megabit and 1 Gigabit Ethernet networks, and today 10, 40 and 100 Gigabit networks are state of the art for the data center “backbone.” Yet not long after we deploy the next generation of hardware with its increased performance, we always seem to be bandwidth constrained again. The Institute of Electrical and Electronics Engineers (IEEE) is already working on a 400 Gigabit standard, with 1,000 Gigabit not far behind. This affects the physical size and shape of network equipment and its port density, the type of network cabling (shifting from copper to fiber), and the cable support systems deployed around the data center. It not only impacts the amount of space, power and cooling, it also requires more flexibility as networking standards and architectures evolve. In addition, as mentioned above, some vendors are merging and converging IT product lines, which can change the traditional island-style layouts of servers, storage and networks and, in turn, redefine the cable paths.
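
One practical place these backbone speed jumps show up in planning is the uplink oversubscription of each top-of-rack switch. The sketch below uses assumed port counts and link speeds to show the arithmetic; it is illustrative, not a recommended design.

```python
# Illustrative uplink-oversubscription check for a top-of-rack switch.
# Port counts and link speeds are assumed example values.

SERVER_PORTS = 48      # assumed server-facing ports per switch
SERVER_PORT_GBPS = 10  # assumed server link speed
UPLINK_PORTS = 4       # assumed uplinks to the backbone
UPLINK_GBPS = 40       # assumed backbone/uplink speed

downstream = SERVER_PORTS * SERVER_PORT_GBPS
upstream = UPLINK_PORTS * UPLINK_GBPS

print(f"Downstream capacity: {downstream} Gbps")
print(f"Upstream capacity:   {upstream} Gbps")
print(f"Oversubscription:    {downstream / upstream:.1f}:1")
```

Moving the uplinks from 40 to 100 Gigabit in this example would cut the ratio from 3:1 to about 1.2:1, but it would also change the optics, the fiber type and the pathway capacity needed around the room.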

One should also consider the significant changes in the way information is accessed, displayed and used by businesses and consumers on mobile devices such as tablets and smartphones. How do we architect a data center to meet this ever increasing, end-user-driven demand for more storage, more computing performance and greater bandwidth, which in turn impacts the IT equipment and, ultimately, the data center itself?

When designing a new data center, perhaps one of the first questions to ask is: who is the end user? A traditional enterprise organization will want a solid design with a proven track record, most likely using standard racks and IT hardware from major manufacturers, but may still have its own unique set of custom requirements (see part 5 - Custom Data Centers). A co-location facility, on the other hand, will need to offer a more generic, traditional design to serve a wide variety of clients. In sharp contrast, a large-scale Internet hosting or cloud services provider is more likely to have radically different requirements and may use custom-built servers housed in physically different custom racks (see part 5 - Custom Data Centers). Even the need for the traditional raised floor has been called into question, and some new data centers have been built without one, locating IT cabinets directly on the slab.

The complete Data Center Knowledge Executive Guide on Data Center Design is available in PDF, compliments of Digital Realty. Click here to download.
