Geodesic Dome Makes Perfect Data Center Shell in Oregon
Oregon Health and Science University’s dome-shaped data center in Beaverton, Oregon, can support up to 25 kW per rack without a single chiller or CRAH unit. (Photo: OHSU)


Designer of new OHSU data center finds half-sphere structure optimal for cooling, space efficiency and seismic stability.

Used to build everything from a planetarium in post-WWI Germany to mobile yoga studios at outdoor festivals today, the geodesic dome has proven to be a lasting concept for highly stable structures of any size. Structural stability is a valued goal in data center design, but the idea of building a data center shell using a spherical skeleton that consists of great circles intersecting to form a series of triangles – the most stable shape we know of – is novel.

That is the approach Perry Gliessman took in designing the recently completed Oregon Health and Science University data center. Gliessman, director of technology and advanced computing for OHSU’s IT Group, said structural integrity of a geodesic dome was only one of the considerations that figured in the decision. It was “driven by a number of requirements, not the least of which is airflow,” he said.

One of the new data center’s jobs is to house high-performance computing gear, which requires an average power density of 25 kW per rack. For comparison’s sake, consider that an average enterprise or colocation data center rack takes less than 5 kW, according to a recent Uptime Institute survey.

Needless to say, Gliessman did not have an average data center design problem on his hands. He needed a design that would support extreme power densities, but he also wanted economy of space and as much free cooling as he could get, which meant maximizing the surface area available for outside-air intake and exhaust. A dome structure, he realized, would tick all of those boxes.

No chillers, no CRAHs, no problem

The data center he designed came online in July. The resulting $22-million facility has air-intake louvers almost all the way around its circumference. Gigantic fan walls pull outside air into what is essentially one big cold aisle, although it is really many interconnected aisles, rooms and corridors. Inside the dome are 10 IT pods, arranged in a radial array around a central core that contains a network distribution hub sitting in its own pod. This layout means air travels roughly the same distance through the IT gear in every pod, and cable runs from each pod to the network hub are as short and as uniform as possible.

Each pod’s server-intake side faces the space filled with cold air. The exhaust side is isolated from the surrounding space but has no ceiling, letting hot air rise into a round plenum above. Once in the plenum, the air either escapes through louvers in the cupola at the very top of the dome or gets recirculated back into the building.

There are no air ducts, no chillers, no raised floors or computer-room air handlers. Cold air gets pushed through the servers partially by server fans and partially because of a slight pressure differential between the cold and hot aisles. It goes into the plenum because of the natural buoyancy of warm air.

When outside air temperature is too warm for free cooling, the data center’s adiabatic cooling system kicks in automatically to help out. Beaverton, Oregon (where the facility is located), experienced some 100 F days recently, and the evaporative-cooling system cycled for about 10 minutes at a time at 30-minute intervals, which was more than enough to keep supply-air temperature within ASHRAE’s current limits. Gliessman said he expects the adiabatic cooling system to kick in several weeks a year.

In the opposite situation, when outside air temperature is too cold, the system takes hot air from the plenum, mixes it with just enough cold air to bring it down to the necessary temperature and pushes it into the cold aisle.
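
The article does not describe the mixing controls in detail, but the underlying arithmetic is a simple two-stream blend. The sketch below is illustrative only – the temperatures and the equal-specific-heat assumption are ours, not OHSU’s – and shows how the recirculation fraction falls out of a basic energy balance.

```python
# Back-of-the-envelope air-mixing sketch (illustrative, not OHSU's control code).
# Assumes ideal mixing of two air streams with equal specific heat and density,
# so the blended temperature is the flow-weighted average of the two inlets.

def hot_air_fraction(t_outside_f, t_plenum_f, t_target_f):
    """Fraction of recirculated plenum air needed for the blended supply
    air to reach t_target_f (all temperatures in degrees Fahrenheit)."""
    if t_plenum_f <= t_outside_f:
        return 0.0  # recirculation can't warm the supply air in this case
    f = (t_target_f - t_outside_f) / (t_plenum_f - t_outside_f)
    return min(max(f, 0.0), 1.0)  # clamp to a physically meaningful 0..1

# Hypothetical numbers: 40 F outside, 95 F plenum, 65 F target supply air
print(hot_air_fraction(40, 95, 65))  # -> ~0.45, i.e. roughly 45% recirculated air
```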

The fans that pull outside air into the facility have variable-frequency drives and adjust their speed automatically, based on air pressure in the room. When server workload increases, server fans spin faster and pull more air out of the cold aisle, causing a slight drop in pressure, which the fan walls along the circumference are programmed to compensate for. “That gives me a very responsive system, and it means that my fans are brought online only if they’re needed,” Gliessman said.
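
As a rough illustration of that behavior – not OHSU’s actual building-automation logic, and with made-up setpoint and gain values – a pressure-holding fan-wall controller can be sketched as a simple proportional loop:

```python
# Illustrative sketch of pressure-based fan-wall control (not OHSU's actual
# building-automation code; setpoint and gain are hypothetical values). A simple
# proportional loop holds cold-aisle static pressure: when server fans pull
# more air and pressure sags, the fan walls speed up to compensate.

SETPOINT_PA = 12.0  # hypothetical target static pressure, in pascals
GAIN = 2.0          # hypothetical gain: % of fan speed per pascal of error

def next_fan_speed(current_speed_pct, measured_pressure_pa):
    """Return the new fan-wall speed command as a percentage of full speed."""
    error = SETPOINT_PA - measured_pressure_pa  # positive when pressure sags
    speed = current_speed_pct + GAIN * error    # ramp up to restore pressure
    return min(max(speed, 0.0), 100.0)          # clamp to 0-100%

# Example: server load rises and pressure drops from 12 Pa to 9 Pa
print(next_fan_speed(40.0, 9.0))  # -> 46.0, the fan wall ramps up
```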

Legacy IT and HPC gear sharing space

That system can cool 3.8 megawatts of IT load, which is what the data center is designed to support at full capacity. There is space for additional pods and electrical gear. Each pod is 30 feet long and 4 feet deep. The pods have unusually tall racks – 52 rack units instead of the typical 42 rack units – and there is enough room to accommodate 166 racks.
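
Those figures are consistent with the densities quoted earlier: dividing the design load by the rack count gives an average of roughly 23 kW per rack. A quick back-of-the-envelope check (plain arithmetic, not from the article):

```python
# Plain arithmetic, not from the article: average density at full build-out.
total_it_load_kw = 3800  # 3.8 MW design capacity
rack_count = 166         # racks the pods can accommodate

print(round(total_it_load_kw / rack_count, 1))  # -> 22.9 kW per rack on average
```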

Since OHSU does education and research while also providing healthcare services, the data center is mission-critical, supporting HPC systems as well as hospital and university IT gear. Gliessman designed it to support a variety of equipment at various power densities. “I have a lot of legacy equipment,” he said. All infrastructure components in the facility are redundant, and the only thing that keeps it below Uptime Institute’s Tier IV standard is the lack of multiple electricity providers, he said.

It works in tandem with the university’s older data center in downtown Portland, and some mission-critical systems in the facility run in active-active configuration with systems in the second data center.

Challenging the concrete-box dogma

Because the design is so unusual, it took a lot of back-and-forth with the vendors that supplied equipment for the project and the contractors that built the facility. “Most people have embedded concepts about data center design and, like all of us folks, are fairly religious about those,” Gliessman said. Working with vendors was challenging, but Gliessman had done his homework (including CFD modeling) and had the numbers to convince people that his design would work.

He has been involved in two data center projects in the past, and his professional and educational background spans electronics, IT, engineering and biophysics. He does not have extensive data center experience, but, as often happens, to think outside the box you have to actually be outside of it.

Take a virtual tour of OHSU’s “Data Dome” on the university’s website. They have also posted a time-lapse video of the data center’s construction, from start to finish.
