
DataTank: Immersion Containers for Industrial Bitcoin Mining


As larger players enter the Bitcoin mining space, data center providers are tailoring solutions to deliver greater density and efficiency for custom mining infrastructure. The latest example is DataTank, a new offering from Allied Control that houses ultra-high-density Bitcoin hardware in immersion cooling tanks inside a data center container.

Allied Control, a Hong Kong-based engineering company, says DataTank can “future proof” Bitcoin infrastructure, allowing miners to quickly refresh their hardware as more powerful systems are unveiled in the fast-moving technology arms race in cryptocurrency mining.

The containerized solution, which was introduced at the Inside Bitcoins conference in New York, can hold 1.2 megawatts of mining gear, housed in tanks filled with Novec, a liquid coolant created by 3M. Each container holds six tanks, each supporting 200 kW of hardware.
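
Taken at face value, those numbers put a fully loaded container at a sizable, steady electrical draw. Below is a quick back-of-the-envelope sketch; the electricity price is an illustrative assumption, not a figure from Allied Control.

```python
# Back-of-the-envelope load and energy math for one DataTank container.
TANKS_PER_CONTAINER = 6
KW_PER_TANK = 200                                   # per the article
container_kw = TANKS_PER_CONTAINER * KW_PER_TANK    # 1,200 kW = 1.2 MW

HOURS_PER_YEAR = 8760
annual_mwh = container_kw * HOURS_PER_YEAR / 1000   # ~10,500 MWh at full load

PRICE_PER_MWH = 60.0   # assumed industrial rate in USD/MWh, purely illustrative
annual_power_cost = annual_mwh * PRICE_PER_MWH

print(f"Container IT load: {container_kw / 1000:.1f} MW")
print(f"Energy per year at full load: {annual_mwh:,.0f} MWh")
print(f"Power cost at ${PRICE_PER_MWH:.0f}/MWh: ${annual_power_cost:,.0f} per year")
```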

Inside each tank, densely-packed boards of ASICs (Application Specific Integrated Circuits) run constantly as they crunch data for creating and tracking bitcoins. As the chips generate heat, the Novec boils off, removing the heat as it changes from liquid to gas.
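
How much fluid has to boil off to carry that heat away follows directly from the latent heat of vaporization. Here is a minimal sketch, assuming a Novec 7000-class fluid with a heat of vaporization of roughly 142 kJ/kg; the exact fluid grade is not named in the article, so treat that value as an assumption.

```python
# Rough estimate of the fluid that must vaporize to remove the heat
# from one 200 kW tank. In practice the vapor condenses on coils above
# the bath and drips back down, so the fluid circulates in a closed loop.

HEAT_OF_VAPORIZATION_J_PER_KG = 142_000   # assumed Novec 7000-class value
TANK_HEAT_LOAD_W = 200_000                # 200 kW per tank, per the article

boiloff_kg_per_s = TANK_HEAT_LOAD_W / HEAT_OF_VAPORIZATION_J_PER_KG
print(f"Boil-off per tank: ~{boiloff_kg_per_s:.2f} kg/s")            # ~1.4 kg/s

container_boiloff = 6 * boiloff_kg_per_s
print(f"Boil-off per 1.2 MW container: ~{container_boiloff:.1f} kg/s")
```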

Modular: the Right Form Factor for Bitcoin Mega-Mines?

Allied Control says modular form factors are ideal for Bitcoin mining, limiting the amount of infrastructure needed. No raised-floor environment or room-level temperature control is required, allowing the system to run with extreme efficiency (Allied Control claims a PUE of 1.01) in virtually any geography, including warmer climates in Asia. Hardware can be reduced to chips on boards and easily swapped out as more powerful systems are released, for a “true wash-rinse-repeat experience” of refreshes.
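
For context, PUE (Power Usage Effectiveness) is total facility power divided by the power that actually reaches the IT hardware, so a PUE of 1.01 means cooling and other overhead add only about 1 percent on top of the mining load. The short sketch below shows what that implies for one container; the 1.5 comparison value is an assumed figure for a typical air-cooled facility, not from the article.

```python
# PUE = total facility power / IT equipment power.
# At PUE 1.01, only ~1% of extra power goes to cooling and other overhead.

IT_LOAD_KW = 1200   # one fully loaded DataTank container

def overhead_kw(pue: float, it_kw: float) -> float:
    """Power consumed by cooling, power conversion, etc. at a given PUE."""
    return (pue - 1.0) * it_kw

print(f"Overhead at PUE 1.01: {overhead_kw(1.01, IT_LOAD_KW):.0f} kW")  # ~12 kW
print(f"Overhead at PUE 1.50: {overhead_kw(1.50, IT_LOAD_KW):.0f} kW")  # 600 kW (assumed typical air-cooled site)
```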

“Imagine you can make computers that consist of not much more than chips on boards,” said Kar-Wing Lau, VP of Operations at Allied Control. “You don’t have to worry about heat dissipation, power delivery, fans, heatsinks, waterblocks, pumps, or the mechanical infrastructure to stitch all that together. Systems cost less to make and don’t produce more e-waste than the strict minimum. They basically make money faster for the business that uses them, and they run extremely efficient with almost no energy wasted for cooling.”

Allied Control says it is in talks with several large Bitcoin mining operations about setting up data centers in the U.S. that could scale to 10 megawatts of capacity. It says the 1.2 megawatt capacity of the containers allows them to be located in areas with modest power capacity and distributed across multiple markets.
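
At 1.2 megawatts per container, a 10 megawatt build-out is a straightforward multiple; a small sketch follows, where the three-market split is purely illustrative rather than a plan described by Allied Control.

```python
import math

CONTAINER_MW = 1.2
TARGET_MW = 10

containers = math.ceil(TARGET_MW / CONTAINER_MW)   # 9 containers
print(f"Containers needed for {TARGET_MW} MW: {containers}")

# Splitting the fleet across, say, three markets keeps each site's draw modest.
SITES = 3   # illustrative assumption
per_site_mw = containers / SITES * CONTAINER_MW
print(f"Approximate load per site with {SITES} sites: {per_site_mw:.1f} MW")
```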


Allied Control’s vision for what a multi-module Bitcoin mining center might look like. (Image: Allied Control)

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


2 Comments

  1. This technology is a game changer. Blunthammer has been watching this technology since last year. The practical applications:
     • Delivering 1,200 kW+ in a 1,000 square foot area is dense.
     • It cools the hottest computers in the world for free.
     • Chip manufacturers no longer have a thermal ceiling.
     • Hardware manufacturers no longer have to deliver fans and cases.
     • Enterprises can run a handful of smaller, hotter, powerful computers in a container at a third-party site or on their campus.
     This solves the major heat problem for miners AND for data centers in a palpable way, and will ultimately accelerate the ability to create Bitcoin for broader adoption as a virtual currency. If I am a mining operation, I am looking at this technology seriously - it cuts my power bill by 30-50% day one. And every day after. If I am a rig manufacturer, I am signing a distribution & hosting agreement to use these so I can stop paying for fans and cases and can increase my margins. If I am a data center company that boasts high-density capabilities, I am scared, because 1.2 MW in 1,000 square feet blows any claims out of the water. It means I can put the equivalent of a 10,000 square foot pod in a container. Much like the computing power of a smartphone today dwarfing the computing power of a room-size unit 30 years ago. If I am a high performance computing user or manufacturer, I just lowered my cooling costs (for my company or clients) to $0, and removed any limitation that heat places on the chip design process, allowing me to design new, hotter and faster chips knowing there is a way to cool them. The fact that there is an actual customer with a deployed system means this isn't design-ware; it exists. Game changer indeed...

  2. Using a standard working cell of 25 square feet per cabinet, you can support 40 very high density cabinets in 1,000 square feet at an average of 30 kW per cabinet, air cooled, for the same power density as this solution. This isn’t vaporware; the technology has been available for several years and is currently in operation. I recently completed a conceptual Tier III design for a customer that is almost identical to this footprint. It utilizes hot air containment at the cabinet level and 100% external, readily available COTS infrastructure components. The infrastructure ROM, including a fully redundant cooling plant (to include water-side economization), 2N emergency power generation and distribution, and cabinets & power distribution, came in at a fraction of what a comparable “containerized” or “modular” solution would cost.

     I am sure that there is a niche market for highly specialized liquid cooling systems such as this (e.g., skinless servers packed with ASICs), but for the vast majority it is simply too complex and costly. There is no mention of the level of redundancy in this system, and it doesn’t include cost estimates for comparison. Don’t forget that every “container” requires power infrastructure and in some cases a cooling plant. You can usually find those units just outside of the view of the picture.

     Mark, I have to disagree on several points. First, this solution isn’t “free” and the cost isn’t “0”. You still have to pay for 1.2 MW of power for the processing hardware as well as the cooling plant (pumps, etc.). Second, the thermal limitations for a processor don’t magically disappear because you submerge them in liquid. Third, I am not sure where your “10,000 square foot pod in a container” is going. A 10,000 square foot data center can theoretically support 400 cabinets. While I can’t think of a single customer that would require 400 cabinets at 30 kW each, I can think of several that are that size utilizing density zones to support a subset of very high density cabinets.
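
For what it’s worth, the second commenter’s arithmetic lines up with the article’s container figures. Here is a quick sketch of both sides; the roughly 1,000 square foot footprint for the container is the figure used in the comments, not a dimension published in the article.

```python
# Density comparison: conventional air-cooled build from the second comment
# versus one DataTank container, assuming a ~1,000 square foot footprint for both.

FLOOR_SQFT = 1000
SQFT_PER_CABINET = 25      # standard working cell, per the comment
KW_PER_CABINET = 30        # very high density, air cooled, per the comment

cabinets = FLOOR_SQFT // SQFT_PER_CABINET          # 40 cabinets
air_cooled_kw = cabinets * KW_PER_CABINET          # 1,200 kW

immersion_kw = 6 * 200                             # DataTank container, per the article

print(f"Air-cooled: {cabinets} cabinets x {KW_PER_CABINET} kW = {air_cooled_kw} kW")
print(f"Immersion container: {immersion_kw} kW")
print(f"Density in both cases: ~{air_cooled_kw / FLOOR_SQFT:.1f} kW per square foot")
```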