Google's Speedy Site Location Process

Nick Carr points to an interesting item in the Tulsa Free Press regarding Google’s data center site location process:

When a major Internet service wanted to find a site for a server farm, how did it do it? Google googled its way to a major Pryor engagement. As Lloyd Taylor, director of global operations for Google, tells it, the company knew it needed a lot of land, a lot of electricity and a lot of water. They wanted the facility to be more or less in the middle of the country. How did they find Pryor? They went on the Internet.

The story also includes an account from Sanders Mitchell, who runs the MidAmerica Industrial Park in Pryor, site of a $600 million Google data center project. Mitchell said he wasn’t certain whom he was dealing with until the papers were signed. Google’s cloak-and-dagger approach to data center secrecy is well established. But one of Mitchell’s other comments was revealing, and may have implications for any further buildout of Google’s data center network:

“They haven’t really discussed this, but I think one of the things that made us attractive to Google was that we were ready to move on the spur of the moment. We had an 86,000 square foot building already in place, which we had built on speculation. … In other locations so many agencies have to sign off on a project that it can be a year or more before any real movement can be made. I don’t think the team at Google was willing to wait that long.”

Google’s sequence of data center announcements over the past year reinforces the impression of a company in a hurry. What’s driving this need to scale so large, so fast? The combination of secrecy and speed is making some observers nervous about Google’s ambitions. But not in Oklahoma, where Google’s Taylor received a medal from the governor.

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.