March 15, 2009
Putting idle servers to sleep when they're not in use is part of University of Michigan researchers' plan to save up to 75 percent of the energy that power-hungry computer data centers consume.
Data centers, central to the nation's cyberinfrastructure, house computing, networking and storage equipment. Each time you make an ATM withdrawal, search the Internet or make a cell phone call, your request is routed through a data center.
Thomas Wenisch, assistant professor in the Department of Electrical Engineering and Computer Science, and graduate students David Meisner and Brian Gold presented a paper on improving the energy efficiency of data center computer systems on March 10 at the International Conference on Architectural Support for Programming Languages and Operating Systems in Washington, D.C.
Wenisch and the students analyzed data center workloads and power consumption and used mathematical modeling to develop their approach.
The approach includes PowerNap, the plan to put idle servers to sleep, and RAILS, a more efficient technique for delivering power to servers. (RAILS stands for Redundant Array for Inexpensive Load Sharing.)
The Environmental Protection Agency expects the energy consumption of the nation's data centers to exceed 100 billion kWh by 2011, for an annual electricity cost of $7.4 billion. Those figures are about twice what they were in 2006, when data centers already drew as much electricity as 5.8 million average U.S. households.
Data centers waste most of the energy they draw. The facilities are inefficient because they must be ready for peak processing demands much higher than the average demand.
"For the typical industrial data center, the average utilization is 20 to 30 percent. The computers are spending about four-fifths of their time doing nothing," Wenisch said. "And the way we build these computers today, they're still using 60 percent of peak power even when they're doing nothing."
Techniques employed today, such as dynamic voltage and frequency scaling, don't do enough to conserve power, the researchers say. Instead, servers could nap through their frequent idle periods, much as ordinary laptops sleep when they sit unused.
They would have to slumber and wake exceedingly fast, Wenisch says. His detailed analysis of 600 servers illustrates how sporadic and sparse the demands on data center servers are. The average idle period lasts mere hundreds of milliseconds, and the average busy period is even shorter, at tens of milliseconds. A millisecond is one-thousandth of a second.
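Those period lengths explain both the low utilization and the need for near-instant transitions. The short sketch below is illustrative only: the 20-millisecond and 200-millisecond values stand in for "tens" and "hundreds" of milliseconds, and the wake-up times are hypothetical.

```python
# Rough sketch (not the authors' model): how busy and idle period lengths
# relate to utilization, and why wake-up speed matters so much.

busy_ms = 20.0     # stand-in for "tens of milliseconds" busy
idle_ms = 200.0    # stand-in for "hundreds of milliseconds" idle

utilization = busy_ms / (busy_ms + idle_ms)
print(f"implied utilization: {utilization:.0%}")   # ~9% for these values

# If waking the server eats a sizeable fraction of a typical idle period,
# most of the potential sleep time (and its savings) disappears.
for wake_ms in (1.0, 10.0, 100.0):
    usable_sleep = max(idle_ms - wake_ms, 0.0)
    print(f"wake in {wake_ms:>5.1f} ms -> {usable_sleep / idle_ms:.0%} "
          f"of a typical idle period actually spent asleep")
```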
While PowerNap would require a new operating system to coordinate the instantaneous sleeping and waking, most of the other technologies that would make this possible already exist, Wenisch says.
"There aren't really technological barriers to achieving this," Wenisch said. "The individual components know how to go to sleep fast. Engineers have developed that technology for laptops and smart phones. But the pieces haven't been used in servers where you don't have a user closing the lid. The components are out there, but the system needs to be redesigned."
While the computer parts might not be hard to find, the power supply would need to be overhauled for PowerNap to work properly, the researchers say. Their new RAILS technique addresses this problem.
In today's "blade-based" server enclosures, about 16 computers share a handful of 2,250-watt power supplies. The arrangement is inefficient unless the machines are running at full steam.
To cut down on that loss, RAILS would replace the large 2,250-watt supplies with an array of smaller, 500-watt power supplies. RAILS would be a necessary complement to PowerNap because without it, even sleeping servers would waste energy.
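One way to picture the RAILS idea, with the details assumed rather than taken from the paper: activate only as many small supplies as the current load requires, so each active unit runs at a healthy fraction of its rating instead of idling far below the range where power supplies are efficient.

```python
# Illustrative sketch of load sharing across small supplies (assumptions,
# not the paper's design): activate the fewest 500 W units that can carry
# the load, and compare their loading to a single 2,250 W supply.

import math

SMALL_UNIT_WATTS = 500      # rating of each small supply in the array

def supplies_needed(load_watts, unit=SMALL_UNIT_WATTS):
    """Minimum number of small supplies needed to carry the load."""
    return max(1, math.ceil(load_watts / unit))

for load in (150, 600, 1800):        # e.g. napping, light, and busy enclosures
    n = supplies_needed(load)
    per_unit = load / (n * SMALL_UNIT_WATTS)
    single_big = load / 2250         # same load on one 2,250-watt supply
    print(f"{load:>5} W load: {n} x 500 W units at {per_unit:.0%} each "
          f"vs. one 2,250 W unit at {single_big:.0%}")
```

Because power supplies typically waste a larger share of energy when lightly loaded, keeping each active unit near its rated load is what makes the array pay off when servers are napping.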
"Together, these approaches can help make data centers green and solve these big energy efficiency challenges," Wenisch said.
This research is funded by the National Science Foundation and Intel. The paper is called "PowerNap: Eliminating Server Idle Power." David Meisner, first author of the paper, is a doctoral student in the U-M division of Computer Science and Engineering. Brian Gold, a co-author of the paper, is a doctoral student in electrical and computer engineering at Carnegie Mellon University.
U-M has filed for patent protection on the technology and is seeking an industry partner to help bring it to market.
Simple data center and server initiatives underway at the University of Michigan are reducing computing energy use by 10 percent, a savings of $500,000 annually, says Tim Slottow, U-M executive vice president and chief financial officer.
"Green computing is a wide-open environmental frontier and through Climate Savers Computing Initiative, the University is implementing data center and server green computing best practices. More sophisticated solutions such as PowerNap and RAILS could exponentially increase our energy savings," Slottow said.