Public Debate About Private Clouds

There’s been more discussion in recent days about “private” clouds in enterprise data centers, as opposed to “public” clouds running on infrastructure from third-party service providers. Some relevant links:


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Comments



  1. A private cloud defeats the purpose for most applications. The idea here is to get rid of Tier 1 and Tier 2 internal data center models and focus your own capital (intellectual, human, and dollar) on Tier 3 to Tier 4 models. The colo space is in a much better position to optimize for the lower-tier models, which is where cloud computing lives today. Who would ever want to own the infrastructure to support thousands and thousands of servers that individually perform simple compute functions but, taken together, perform complex operations that give a business a competitive advantage? The kicker is that most compute farms (clouds, clusters, etc.) are designed to take a 25% to 40% hit and lose only minimal functionality. This model lends itself nicely to a Tier 2 data center facility and again points to a colo partner as your best fit for this application.

  2. As a software designer/architect who does in-house development for a large company, I'd say a private cloud would help things tremendously. There are two main reasons I make this statement: 1) My company is paranoid about allowing its data, intellectual property, applications, etc. to go outside its own data center and networks. As nice as it would be to change this, the resistance is just too strong, and it lies not with the technology but with the business units. 2) With many small projects starting up all the time, there is simply too much overhead in setting up storage, databases, and servers (even if they are VMs). This adds considerably to development and maintenance costs. Additionally, the more you can abstract yourself away from the filesystem, the better. As soon as your application has to know about the file system (as opposed to using a storage solution such as S3), its dependencies shoot through the roof. I had not seen Cisco VFrame yet, so I was excited to explore it; however, it seems there is still no S3-like storage model with it that eliminates an application's tie to a traditional filesystem.
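The filesystem-abstraction point in comment 2 can be illustrated with a small sketch. This is not code from S3 or VFrame; the `ObjectStore` interface and `InMemoryStore` backend are hypothetical names used to show the idea: application code addresses objects by key through an interface, so swapping the backend (in-memory, S3, a SAN volume) never touches application logic.

```python
# Hypothetical sketch of an S3-style storage abstraction.
# ObjectStore / InMemoryStore are illustrative names, not a real API.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Application code depends only on this key/value interface,
    never on filesystem paths."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in backend; a production deployment might back this
    with S3 or any other object store without changing callers."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def save_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # The application names objects by key; no directory layout,
    # mount point, or path separator ever leaks into this code.
    store.put(f"reports/{report_id}", body)


store = InMemoryStore()
save_report(store, "q3", b"quarterly numbers")
print(store.get("reports/q3"))
```

Because `save_report` sees only the interface, a new project can start against the in-memory backend and move to a shared object store later, which is exactly the per-project setup overhead the commenter wants to avoid.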