The Next Energy Challenge of Computing

Sumit Sadana, SanDisk

Sumit Sadana is Executive Vice President, Chief Strategy Officer and General Manager of Enterprise Solutions for SanDisk.

Computing always seems to be facing an energy crisis.

In the 1940s, computers ran on energy-hungry (and fragile) vacuum tubes. If you tried to build a Google data center out of early machines like ENIAC, it would consume as much energy as all of Manhattan.

Back in the ’90s and early 2000s, chip designers warned that chips could begin to emit the same amount of heat—for their size—as rocket nozzles or nuclear power plants, a trend that was stemmed with the advent of multithreaded and multicore devices.

Virtualization, new data management strategies, and innovative cooling technologies implemented over the past decade, meanwhile, helped pave the way for hyperscale data centers. eBay, for instance, saved $2 million in data center energy costs by making small changes to the software code of some of its applications.

So, have these latest advances taken us to energy efficiency nirvana? Not by any means. We’re still using far more than we need. The Natural Resources Defense Council estimates that data center energy consumption in the U.S. alone could be cut by 40 percent with existing technologies and more effective monitoring, saving owners $3.8 billion a year and avoiding millions of tons of emissions.

Just as demand for data centers will continue to grow, so will energy needs. Rapid access to data is the lifeblood of the global economy. Businesses and organizations will soar or sink on their ability to leverage data to achieve new scientific breakthroughs, improve customer service or gain market share. With data center construction growing at 21 percent a year and more countries implementing carbon policies, taking a business-as-usual approach to energy will only create headaches down the road.

In the next wave of efficiency, expect to see a tremendous amount of focus on software-defined storage (SDS) and flash memory. Why storage? For one thing, many have already adopted virtualization for servers to raise utilization above the anemic 6 percent to 12 percent levels of the recent past. Storage is today’s low-hanging fruit.

Second, storage is in the midst of a once-in-a-generation transformation. Flash memory, the primary storage technology for digital cameras and cellular phones, has been moving into data centers over the past few years. Flash systems can deliver data at a faster rate and with far less energy. You’ll see new data center architectures that incorporate both technologies in a way that maximizes bits, bandwidth and electrons. The impact flash will have on data centers is analogous to the impact fiber optics had on communications: by improving performance and efficiency at the same time, you fundamentally change what’s possible.

A hard drive-based storage system for a 50TB database, for example, might require a power budget of 8,800 watts (4,000 watts to run the storage system and 4,800 watts for cooling). A similar system could be built with SSDs with a power budget of 1,250 watts (568 watts for the system and 682 watts for cooling), an 85 percent savings.
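The comparison above can be checked with a few lines of arithmetic. This is a minimal sketch using only the watt figures quoted in the text; the split between system draw and cooling overhead is taken as given, not measured.

```python
# Compare the power budgets quoted for a 50 TB database:
# a hard-drive array vs. an SSD-based system.

def total_power(system_w, cooling_w):
    """Total power budget: system draw plus cooling overhead, in watts."""
    return system_w + cooling_w

hdd_total = total_power(4000, 4800)   # hard-drive array: 8,800 W
ssd_total = total_power(568, 682)     # SSD array: 1,250 W

# Fraction of the HDD power budget that the SSD system avoids.
savings = 1 - ssd_total / hdd_total

print(f"HDD budget: {hdd_total} W")   # HDD budget: 8800 W
print(f"SSD budget: {ssd_total} W")   # SSD budget: 1250 W
print(f"Savings: {savings:.1%}")      # Savings: 85.8%
```

The exact ratio works out to roughly 86 percent, which the article rounds to 85 percent.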

Energy savings can be further increased by leveraging the higher data throughput to reduce the number of servers needed. Companies such as Pandora and AT Internet, in fact, have managed to reduce server count by 40 to 75 percent. More is accomplished with less.

Beyond the data center, flash will pave the way for the Internet of Things. McKinsey & Co. estimates that $5.5 trillion worth of economic value could be generated by integrating IoT technologies into heavy industry, with a substantial portion of the savings coming from efficiency. Industry consumes more than half of the energy in the world, even more than transportation. Experts estimate that industrial customers could cut their consumption by an additional 14 to 22 percent with technologies like intelligent HVAC and data-driven production control. Even if only a fraction of that potential were harvested through intelligent systems, the impact would be significant.

The impact will be even more profound in emerging nations like Nigeria, India and China, where the spread of technology can be hampered by blackouts, power theft and weak grid infrastructure. By consuming less energy, technology becomes more robust, economical and versatile. It’s that simple.

Energy concerns won’t stop the digital revolution. However, we will need to act so that energy doesn’t slow it down.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.




One Comment

  1. Sudhir Brahma

    A 50TB database array can be built from 25 2TB drives (enterprise-grade near-line). For 100 percent redundancy with RAID 1, you need 50 drives. Power consumed by each is 6.14 watts, so total power is 6.14 × 50 = 307 watts. Double that for cooling requirements and you have 614 watts. By your calculations, how do you arrive at a figure of 8,800 watts instead of only 614 watts for a 50TB database array with 100 percent redundancy? Going by your numbers, the SSD 50TB array, which consumes 1,250 watts, is two times worse power-consumption-wise too... something is wrong.