
Q&A: Edward Henigin of Data Foundry

Edward Henigin is the Chief Technology Officer of Data Foundry, which today announced the opening of Texas 1, the company's 250,000-square-foot data center in Austin, Texas. Data Center Knowledge recently had an email question-and-answer session with Ed about the new facility and some of its features.

Photo: The chilled water system at the new Data Foundry Texas 1 data center in Austin, which supports a variety of approaches to cooling.


DCK: The Data Foundry team has toured data center facilities all over the world. What design principles proved to be most useful in designing and building Texas 1?

Henigin: One important thing we learned was that retrofitting an existing building into a data center requires compromising your ideals. We saw too many "hermit crab" data centers with confusing navigation, odd-shaped rooms, inefficient space utilization, equipment that would be difficult to maintain, and overall poor experiences for customers. We decided early on that in order to achieve our vision of a premium, customer-satisfying data center, we would have to build the building from scratch, with function dictating form from beginning to end. By choosing a large "green field" site, we were able to execute on our three main design strategies:

1) Redundant everything. We were surprised by how many big name colocation facilities lacked redundant equipment at various levels of the electrical or mechanical systems. We started with power feeds from two independent substations, and continued with redundant transformers, redundant switchgear, redundant generators, redundant UPSs, all the way down to the colocation floor. All of our redundant equipment can be individually taken out of service, maintained, even swapped out if necessary, while customer load is never exposed to raw utility power. We brought in redundant water and telecom feeds. Some sites consider feeds to be diverse if they are 25 feet apart. Our feeds are 2500 feet apart.

2) Value the human experience. We put ourselves in our customers' shoes and asked: what would we want out of a data center to help us be comfortable and productive? We started with the loading dock up front, because our clients ship and receive a whole lot of equipment and components. We made the navigation as clear and simple as possible so our clients wouldn't get lost. We added break rooms, showers, and convenient laptop workstations outside of the cold and noisy data hall. Wi-Fi blankets the building. Our NOC is staffed 24x7 on site in case anyone needs a patch cable or a hand with racking a server. Then we added office space that clients can lease for permanent on-site staffing to support their deployments.

But customers aren't the only humans living in the building. We also included features to make our employees and contractors happier and more productive. We designed the security, mechanical and electrical infrastructure for ease of operation and maintenance. The loading dock and front door are monitored by the same security booth, so we don't have to split security between the front and back of the building. Our generators are housed under the roof in individual concrete-encased rooms. The mechanical rooms, electrical rooms and service galleries include extra space to provide an easy working environment for facilities and maintenance personnel.

3) Energy efficiency is not optional. The environment demands it, the market demands it, and our board demands it. From motion-sensing lights all the way up to the highest-efficiency substation transformers ever built, we selected components and designed the operations to maximize energy efficiency. All pumps and fans are equipped with VFDs, and all sequences are optimized to take maximum advantage of them. We are taking advantage of two different "eco" modes on our UPS modules. The data halls are designed to inherently sequester the hot air, maximizing CRAH efficiency. A PLC-based controls system provides the horsepower required to optimize the mechanical plant, the biggest potential energy sink in a facility like this. Altogether, our energy efficiency measures add up to excellent returns for all of our stakeholders.
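For readers unfamiliar with why variable frequency drives (VFDs) matter so much here: fan and pump shaft power falls roughly with the cube of speed (the affinity laws), so even modest speed reductions yield outsized energy savings. Here is a minimal sketch of the arithmetic, using hypothetical numbers rather than measurements from Texas 1:

```python
# Affinity-law sketch: fan/pump shaft power scales roughly with the
# cube of speed. Numbers are illustrative, not Texas 1 measurements.

def vfd_power_kw(rated_kw: float, speed_fraction: float) -> float:
    """Approximate shaft power at a reduced speed via the affinity laws."""
    return rated_kw * speed_fraction ** 3

rated = 50.0  # hypothetical CRAH fan rated at 50 kW at full speed
for frac in (1.0, 0.8, 0.6, 0.5):
    print(f"{frac:4.0%} speed -> {vfd_power_kw(rated, frac):5.1f} kW")

# 100% speed ->  50.0 kW
#  80% speed ->  25.6 kW
#  60% speed ->  10.8 kW
#  50% speed ->   6.2 kW
```

In other words, two fans running at 50% speed move roughly the same air as one at full speed while drawing about a quarter of the power, which is why control sequences that spread load across VFD-equipped equipment pay off.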

DCK: At Texas 1 you offer a wide range of cooling options. Does this require any special approach to how and where you deploy customers who choose different cooling methods?

Henigin: We designed our chilled water cooling system to accommodate the various approaches utilized for today’s HPC environments. We work with customers to understand the equipment they need to install on the colocation floor and implement the type of cooling solution that best meets their needs and/or expectations.

DCK: Texas is an active and competitive state for data center services. What do you see as the key differentiators for Austin, as opposed to Dallas, San Antonio or Houston?

Henigin: Austin is located south of Tornado Alley, north of Hurricane Alley, and is on earthquake-free geology. We have low power costs and access to a deep pool of technology workers. World class companies in Austin now have a world-class data center option right in their back yard.
