Blade Servers and the Density Dilemma
Are blade servers the answer? That depends on the question, and some data center operators should be asking more questions before turning to blades, according to Microsoft’s James Hamilton. High-density blade server installations can create as many problems as they solve, Hamilton argues in a thorough examination of server density on his Perspectives blog.
Hamilton points out that filling racks with blade servers can push rack power loads to 25kW and beyond, which usually forces a move to liquid cooling solutions – a cost that may not have been factored into the original cost/benefit analysis for the blades. It’s an informative look at power, space, cooling and PUE in evaluating the cost of optimizing your data center.
“I’m not saying that there aren’t good reasons to buy high density server designs,” Hamilton writes. “I’ve seen many. What I’m arguing is that many folks that purchase blades don’t need them. The arguments explaining the higher value often don’t stand scrutiny. Many experience cooling problems after purchasing blade racks. … In short, many data center purchases don’t really get the ‘work done per dollar’ scrutiny that they should get.”
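Hamilton’s “work done per dollar” framing can be illustrated with rough arithmetic. All figures below (PUE values, electricity price, rack loads) are assumed for illustration and are not taken from his post:

```python
# Back-of-the-envelope "work done per dollar" comparison.
# All numbers are hypothetical, chosen only to show the arithmetic.

def annual_power_cost(it_load_kw, pue, dollars_per_kwh=0.10):
    """Yearly electricity cost for a given IT load at a given PUE.

    PUE = total facility power / IT equipment power, so the facility
    draws it_load_kw * pue from the utility.
    """
    hours_per_year = 24 * 365
    return it_load_kw * pue * dollars_per_kwh * hours_per_year

# A dense blade rack may push the facility toward liquid cooling,
# which changes both PUE and capital cost; figures are assumed.
blade_rack = annual_power_cost(it_load_kw=25, pue=1.7)
sparse_rack = annual_power_cost(it_load_kw=8, pue=1.5)

print(f"25 kW blade rack:  ${blade_rack:,.0f}/year")
print(f"8 kW sparse rack:  ${sparse_rack:,.0f}/year")
```

The point of the sketch is that the density decision cannot be priced from the server invoice alone; the PUE and cooling consequences belong in the same calculation.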
A Ramirez, posted September 12th, 2008
Look at the power efficiency per blade compared to regular rackmount servers. You will find that blades are up to 40% more power-efficient. When you are deploying 1,000+ servers in a datacenter, this rapidly becomes a compelling argument.
The “blades are too dense” argument is not sensible. Put fewer blade chassis per rack if you cannot handle the density. Even at one chassis per rack (say 4-5kW), you are still deploying a very significant compute resource, and most likely getting more compute than you would otherwise deploy in the entire rack.
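The “fewer chassis per rack” trade-off above reduces to simple arithmetic. The per-blade wattage and rack budget here are assumed for illustration:

```python
# Sketch of the "just put fewer chassis per rack" arithmetic.
# All numbers are assumed, not measurements.

blades_per_chassis = 16     # a typical fully populated enclosure
watts_per_blade = 300       # assumed average draw per blade
rack_power_budget_w = 5000  # e.g. a rack limited to ~5 kW

chassis_power_w = blades_per_chassis * watts_per_blade
chassis_per_rack = rack_power_budget_w // chassis_power_w

print(f"One chassis draws ~{chassis_power_w} W")
print(f"Chassis per {rack_power_budget_w} W rack: {chassis_per_rack}")
print(f"Servers in that rack: {chassis_per_rack * blades_per_chassis}")
```

Under these assumptions a single chassis nearly fills a 5kW rack, which is exactly the one-chassis-per-rack scenario the comment describes.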
40% more efficient? Really? That is what the numbers say, but it is not what I have seen in testing. If you use reasonably efficient power supplies (80 Plus Gold), blade servers often use as much power, measured at the PDU, as rackmount servers do, and they are much cheaper and much more flexible. (Note, you do get a boost in efficiency using 208V power instead of 120V power, but nearly all hardware built in the last 10 years can handle both.)
Now, if you want to save power, start by looking at CPUs with better performance per watt. Consolidate your old servers; use virtualization if consolidating onto the same server is too complex.
Most blade enclosures hold at most 16 blades, and sometimes you have to drop to 8 if you want local disk. It’s trivial to fit more than 16 servers in a rack (assuming you have the power budget).
Also, blades nearly always require external disk for most applications (most blade servers I have seen max out at 2×2.5″ SAS drives), while rackmount servers give you the flexibility to use cheap local storage.
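The consolidation suggestion in this comment (better performance per watt, fewer boxes) can also be sketched as arithmetic. The server counts, wattages, and the 4x performance ratio are assumptions for illustration only:

```python
import math

# Illustrative consolidation math: replacing old servers with fewer
# new ones that have better performance per watt. All numbers assumed.

old_servers = 10
old_watts_each = 250
old_perf_each = 1.0   # normalized performance units per old server

new_watts_each = 350
new_perf_each = 4.0   # assumed 4x the per-server performance

total_perf_needed = old_servers * old_perf_each
new_servers = math.ceil(total_perf_needed / new_perf_each)

old_power = old_servers * old_watts_each
new_power = new_servers * new_watts_each
print(f"Servers before: {old_servers}, after: {new_servers}")
print(f"Power before: {old_power} W, after: {new_power} W")
```

Even though each new server draws more watts, the rack as a whole draws far less, which is the performance-per-watt point the comment is making.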
Hah. In my last comment I said:
“40% more efficient? Really? That is what the numbers say, but it is not what I have seen in testing. If you use reasonably efficient power supplies (80 Plus Gold), blade servers often use as much power, measured at the PDU, as rackmount servers do, and they are much cheaper and much more flexible. (Note, you do get a boost in efficiency using 208V power instead of 120V power, but nearly all hardware built in the last 10 years can handle both.)”
which is the opposite of what I meant to say: while blade servers and rackmount servers consume a similar amount of power, it is rackmount servers that are much cheaper and more flexible, not blades.