The Telecosm and the Data Center

Andrew Schmitt of the telecom-focused investment research firm Nyquist Capital is blogging the Gilder TELECOSM 2006 conference at Lake Tahoe, and has posted a summary of the panel on data center issues, including a discussion of “storewidth,” the interplay between bandwidth and storage as data moves through the data center, and its limitations. Panelists included Google engineer Luiz Barroso and Lane Patterson, chief technologist at Equinix. An excerpt:

This was a very interesting debate among some very heavy hitters who operate data centers about where the bottlenecks are in the data centers, and if the new model of massively distributed computing in one centralized data center is a sustainable model.

George Gilder wondered if “these massive data centers being built next to hydroelectric dams are going to be obsolete in the near future?”
If you’re interested in the technical aspects of data center design and deployment, you’ll find this an interesting read. This was my first encounter with Schmitt’s blog, and I was glad to find a summary of this panel. He also offered an intriguing prediction about Google’s interest in improving data center efficiency.

“It is clear that opportunities exist for companies to optimize hardware components for new distributed computing architectures,” Schmitt notes. “My prediction: Google will fund startup Memory and Disk companies to supply what they need.”

About the Author

Rich Miller is the founder and editor-in-chief of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.