IDS Readies Data Centers on Ships

In early 2008, startup International Data Security revealed plans to build a fleet of data centers on cargo ships docked at ports in the San Francisco Bay. After an initial flurry of publicity, the company receded from the spotlight amid industry chatter of funding challenges.

Now IDS is back, and the company says it has lined up funding and an anchor tenant for a proof-of-concept “maritime data center” that will dock at Redwood City, Calif. The first vessel is a former training ship for the California Maritime Academy that IDS has acquired and is prepping for renovation. IDS representatives say the company has secured $15 million for an initial deployment of 500 racks of servers.

“Coming out with a new business with major capital needs in 2008 was difficult,” said Barry Prince, who is leading the sales and marketing effort for IDS. “We are finally in position to start getting things running.”

Ship-Board Cooling Offers Lower Costs
Using cargo ships offers flexibility: capacity can expand based on the availability of ships and port space rather than real estate. IDS plans to develop the below-deck areas as data center space and use the surrounding water to support its cooling system, a key factor in its claim that it can build ship-based data centers for less than comparable land-based facilities.

IDS isn’t the only company to explore ocean-going data centers. In 2008 Google filed a patent for a “water-based data center” that uses the ocean to provide power and cooling. The announcement prompted considerable debate in the data center community, and Google hasn’t said whether it has attempted to build such a facility.

Concept Brings Curiosity, Skepticism
IDS said it has experienced the same mix of curiosity and skepticism. “A lot of the conversations we’ve had with data center operators have been around questions like ‘do you want to put this kind of equipment close to salt water’ and ‘is the rolling motion of the ship a problem,’” said Prince. “The reality is that the Navy has had data centers on war-fighting ships for 20 years or more.”

IDS President Richard Naughton is a former Navy admiral and superintendent of the Naval Academy, who also headed the Navy’s transportation command.

“There are ways of securing the equipment so that the motion is minimal,” said Prince. “The ship is a sealed, air conditioned facility that wouldn’t be any different from a building located two blocks from the ocean in San Francisco. Those problems are existing problems that are built into the solution.”

Will Dock at Redwood City
The first IDS DATAship is currently docked at Mare Island in the San Francisco Bay, waiting to complete an insurance inspection. It will then be moved to a shipyard in Alameda, where it will be retrofitted for data center use, and then to its new home in Redwood City. The ship will have access to 2 megawatts of power from Pacific Gas & Electric, with an option to later upgrade to a 15 megawatt feed.

The first DATAship has the capacity for up to 1,500 racks of servers, but will stick to 500 racks for the proof-of-concept installation. Prince said the customer wants to stay anonymous for the moment, but will go public if the data center is successful.
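The power and rack figures cited in the article imply steadily rising power density across the planned deployments. A quick back-of-envelope check (the feed sizes and rack counts are from the article; the per-rack averages are simple division, not IDS specifications):

```python
# Average power density implied by the article's figures.
# Each entry: (power feed in watts, rack count).
deployments = {
    "Redwood City proof of concept": (2_000_000, 500),     # 2 MW feed, 500 racks
    "Redwood City full build-out":   (15_000_000, 1_500),  # 15 MW option, 1,500 racks
    "Oakland container ship":        (50_000_000, 4_000),  # 50 MW feed, ~4,000 racks
}

for name, (watts, racks) in deployments.items():
    kw_per_rack = watts / racks / 1_000
    print(f"{name}: {kw_per_rack:.1f} kW per rack")
```

The proof of concept works out to an average of 4 kW per rack, a modest density, while the later deployments would average 10 kW or more, assuming the full feed is devoted to IT load.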

“The game plan is to test the systems and results to verify our claims,” he said. “Once that is proven out, the client is prepared to install up to 3,000 racks on a second ship.”

Options on Additional Ships
IDS has options to purchase up to six additional ships, which will be newer, larger container cargo ships that can each house more than 200,000 square feet of data center space, Prince said, enough for about 4,000 racks of equipment.

The second ship would be moored in Oakland, where PG&E can supply up to 50 megawatts of power to the ship. While the second vessel is being deployed in Oakland, IDS plans to retool the first ship for its full 1,500-rack capacity, while upgrading the power feed at the Redwood City dock.

Prince said CEO Ken Choi has lined up funding to support the proof of concept, with additional funding available if the company gets customer commitments. “We’re very excited about getting this going,” said Prince. “We have a lot of customer interest.”

IDS is also working with service providers interested in its concept. One of them is Silverback Migration Solutions, whose CEO Ken Jamaca has featured IDS’s maritime data center plans in posts on the company blog. Silverback is “involved with IDS on a few different fronts,” Jamaca writes.

Conventional Space, Unconventional Cooling
The DATAships will feature conventional data center space rather than containers (although future vessels will have the ability to house containers on-deck). IDS plans to create data center space in the cargo holds, installing subfloors to house the data halls.

The company’s design features overhead cooling with a solid floor, rather than a raised floor. IDS will isolate the cold and hot air within the data center using a hot-aisle containment system from APC. The cooling system will use an IDS innovation: the ship’s double hull will serve as a heat exchanger. The space between the hulls, which is often used to store fuel or ballast water, will support a system in which cool salt water chills a closed fresh-water loop.
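A rough sizing sketch suggests why seawater is attractive for this design. Assuming the 2 MW initial feed ends up entirely as heat, and assuming (illustratively; these are not IDS design figures) typical seawater properties and a 5 K temperature rise across the hull heat exchanger, the required seawater flow is modest:

```python
# Rough sizing of the seawater side of the hull heat exchanger.
# The 2 MW heat load matches the article's initial power feed; the seawater
# properties and the 5 K temperature rise are illustrative assumptions.
heat_load_w = 2_000_000   # W, assume all electrical power becomes heat
cp_seawater = 3_985       # J/(kg*K), approx. specific heat of seawater
density = 1_025           # kg/m^3, approx. density of seawater
delta_t = 5.0             # K, assumed seawater temperature rise

# Energy balance: Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)
mass_flow = heat_load_w / (cp_seawater * delta_t)   # kg/s
volume_flow_lps = mass_flow / density * 1_000       # liters per second

print(f"Required seawater flow: {mass_flow:.0f} kg/s (~{volume_flow_lps:.0f} L/s)")
```

Under these assumptions the load needs on the order of 100 kg/s (roughly 100 L/s) of seawater, well within reach of ordinary marine pumps, which is consistent with the article’s claim that water cooling is central to the cost advantage.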

Prince said the IDS DATAships offer a unique solution for disaster recovery. The leading disaster threats on the West Coast, he noted, are earthquakes, floods and fires. “A ship is nearly impervious to all three threats,” he said.

But Prince acknowledges that skepticism will remain until the company’s data centers are deployed and successful. “The main reservation we hear is ‘it’s never been done – prove it,’” he said. “We intend to, and have secured a major corporation that was willing to work with us to do this.”



About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Add Your Comments



  1. These guys are completely nuts. There's absolutely no way that putting a DC on a boat will make it less prone to disasters. What about flooding/storms? More importantly, they're going to require a tether to supply electricity and a pipe to the ship, and I can see horrible things happening to a tether quite easily.

  2. Skeptic

    If I remember, local and state governments don't like the concept from a revenue standpoint. They can tax/regulate/harass brick and mortar locations, but they whine like greedy babies when a mobile high revenue facility just disappears because of a changing business climate. Tough luck, cities. Mebbe spend less, and stop regulating everything that moves, and you might keep businesses.

  3. Shane

    Data Centers in the tub? On a humorous note..... Perchance a torpedo strike? How does one back up then?

  4. drew

    I'm a cadet at the Cal Maritime academy. I think it's great IDS plans to use our old training ship. Putting a data center on a ship is a really interesting idea. I imagine there will be many technicalities about maintaining the ship, as it is a floating platform. I bet there will have to be new laws and rules written to accommodate this idea. Technology and salt water certainly don't mix - but ships are huge! With the proper AC plant installed and running, it's simply a floating warehouse with dry cool air throughout. Remember this data center ship isn't going to sea, it's just meant to float at a dock.

  5. Rocky

    We ran PCs, servers, tape drives, switches, and routers in non-conditioned space at the Port of Redwood City for 20+ years. Based on our experience, the biggest problems were not heat, or humidity, or salt air. The biggest problem was gypsum dust, from a 100+ foot high pile a few hundred yards away. That dust got into everything. One router RAM upgrade was delayed while we chipped hardened gypsum dust off the four-year-old RAM and motherboard. That router ran for another three years, until we left in 2007. IT equipment is much tougher than most people realize.