Twitter’s Expansion Brings Capacity, Controversy

Data center expansions and migrations are often complex undertakings. The recent migration for Twitter has proven to be more complicated than most, as the microblogging service wound up adding an extra data center location, causing confusion about the reasons for the change.

Back in December we reported that Twitter had leased data center space in Sacramento, a surprise move that reflected a shift from its previously announced plans to operate a new facility in Salt Lake City. So what happened to the Salt Lake City project? The parties aren’t saying. But it appears that Twitter continues to lease space in Salt Lake City, even as it focuses much of its expansion energy in other places.

Reuters Report Cites Transition Troubles
Twitter’s Salt Lake City plans are back in the spotlight after a Reuters story reported that the expansion project had “spiraled out of control,” prompting the shift to the Sacramento site at RagingWire. Citing unnamed sources, Reuters reported that the new facility built by C7 Data Centers in Bluffdale, Utah, was “plagued with everything from leaky roofs to insufficient power capacity.”

Twitter isn’t offering details on its expansion. Citing customer confidentiality, C7 Data Centers won’t even confirm whether Twitter is a customer. But C7 disputes several of the Reuters story’s characterizations of its Bluffdale facility, and says it has experienced no major customer losses.

“We have not had any attrition of customers equaling more than 1 percent per annum, in the last 4 years,” said Wes Swenson, President of C7 Data Centers.

That suggests that despite its deployment in Sacramento, Twitter remains a customer at the C7 facility – and thus is paying for colocation space that apparently is lightly utilized. Reuters says Twitter signed a 4-year lease with a $24 million commitment for the Utah space. Colocation agreements typically provide limited escape clauses for violations of service level agreements (SLAs) such as extended downtime or thermal events, neither of which appears to be applicable in the case of the Bluffdale site.

Twitter: We’re Making Progress
We reached out to Twitter with questions about its data center expansion, but the company’s only response was a brief prepared statement from Michael Abbott, the company’s VP of engineering. “We’ve done more to upgrade our infrastructure in the last six months than we did in the previous 4.5 years,” said Abbott. “Twitter now has the team and infrastructure in place to capitalize on the tremendous interest in Twitter and continue our record growth.”

In a blog post announcing the completion of its migration, Abbott called the project “the most significant engineering challenge in the history of Twitter.” Abbott wrote that the migration involved testing and data replication between two data centers before migrating its production environment to a third, larger data center, which Abbott described as “our final nesting ground.”

Twitter has been managing its infrastructure through a managed hosting agreement with NTT America, which operates data centers in Silicon Valley and Ashburn, Virginia. Under that arrangement, NTT manages the servers, while Twitter deploys the applications. In its expansion, Twitter was moving into rack-ready colocation space in which it would own and deploy the hardware as well as the applications. This is somewhat different from migrations by other marquee names like Google, Yahoo and Facebook, which managed their own hardware in colocation space and then migrated into data centers they built themselves.

Last July Twitter announced plans to move its infrastructure to a “new, custom-built data center in the Salt Lake City area.” Subsequent reports identified C7 Data Centers as its new provider.

C7: We’ve Got Connectivity, Power
C7 Data Centers disputes the Reuters report’s assertion that the Bluffdale facility “lacked key features such as a second fiber network connection, and less than half of the electricity was actually available.”

Swenson says that XO, Qwest and Integra all have connectivity available at the new data center. “There are three 10G connections with each having dual paths into the building,” he said. “More carriers are on the way. This site is the single largest retail multi-tenant colocation site in the state of Utah, and it pays to be a carrier at such sites. C7 allows clients to connect into our redundant multi-carrier and bandwidth network, or they may choose to connect directly with the carrier.”

The site also has 5 megawatts of power capacity, Swenson says, supplied by two 2.5 megawatt transformers and backed by a pair of 2.5 megawatt generators. The Bluffdale site features 12,000 square feet of equipment space, and offers high-density hosting supported by free cooling, a cold aisle containment system and computer room air conditioners (CRACs) with variable frequency drives.
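The redundancy math behind those figures is worth spelling out: how much of the 5 megawatts delivered is concurrently usable for IT load depends on the power topology, which the article does not specify. A minimal sketch of the arithmetic, assuming the common “N”, “N+1” and “2N” designs (this calculation is illustrative, not a description of C7’s actual architecture):

```python
def usable_it_capacity_mw(per_unit_mw: float, units: int, topology: str) -> float:
    """Rough concurrently usable IT capacity for a simple power topology.

    topology: "N"  -- no redundancy; all installed capacity is usable
              "N+1" -- one spare unit held in reserve
              "2N"  -- fully duplicated path; either side must carry
                       the whole load, so only half is usable
    """
    total = per_unit_mw * units
    if topology == "N":
        return total
    if topology == "N+1":
        return per_unit_mw * (units - 1)
    if topology == "2N":
        return total / 2
    raise ValueError(f"unknown topology: {topology}")

# Two 2.5 MW transformers = 5 MW delivered to the facility.
# Run as a 2N pair, roughly 2.5 MW would be concurrently usable.
print(usable_it_capacity_mw(2.5, 2, "2N"))
```

Under a 2N assumption, “half of the electricity was actually available” and “5 megawatts of power capacity” are not necessarily in conflict; they can describe the same plant from different angles.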

Roof Leaks Seen During Construction
Swenson acknowledges that the facility experienced some roof leaks during its construction phase, when C7 was retrofitting an existing building. This was problematic because customer equipment was present during the construction.

“We are talking about a couple of drips, not a constant leak,” said Swenson. “We quickly had these patched by a professional roofing company while we continued construction. We did have a customer that had located equipment in the data center while it was being worked on, but the small drips did not hit the equipment, nothing was affected, and nothing had to be moved.”

How did the leaks happen? “To maximize ground space for a free cooling system and future generators, it was necessary to place the chilling system on the roof,” Swenson said. “To do so, we had to place a rather robust mezzanine system on the roof. When the mezzanine was installed, welders did not place the correct protection down, and there was some small solder splatter that penetrated the roof with a few very minute pinholes. During construction we had an occasional rainstorm that would expose very small pinholes.”

Swenson says C7 subsequently replaced the roof, which has had no issues since the data center entered production in December, despite more than 300 inches of snow during the winter. “We have no outages since being commissioned and in production,” he said. “We continue to take orders, and there are many customers in the facility already.”

Will Twitter eventually shift more of its operations to the Salt Lake City facility? None of the parties will say. But Twitter continues to experience strong growth in traffic. The company said yesterday that it is now handling 155 million tweets per day, up from 55 million at this time last year.

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

3 Comments

  1. Karl

    Only 3 10G connections for the whole facility? To me, that does sound like a connectivity issue, at least for someone at the scale of Twitter. Their own site (http://www.c7dc.com/facilities/bluffdale-utah.htm) says 65,000 sq. ft., 8MW, etc. And here it says 12,000 sq. ft. and with 5MW delivered to the facility, though much less than that concurrently usable for IT load with their 2N setup. To me, the "half of the electricity was actually available" claim seems to be correct, just going on public information at this point.

  2. @Karl you missed some additional "public information" on the C7 website: http://www.c7dc.com/news/80/89/C7-Data-Centers-Expansion.htm The PR piece states that the 12,000 sq. ft. is the first phase of the 65,000 sq. ft. building.