Much of the talk surrounding power efficiency and energy conservation in the data center focuses on the big things: virtualization, consolidation, alternative heating and cooling.

This article was originally published on Thursday, September 22, 2011.
Increasingly, though, some of the most significant gains are taking place on a smaller scale: the component level and even raw silicon. Individually, the reductions happening here are tiny. But like a colony of army ants, they can wreak tremendous havoc on high energy bills when deployed at even a moderate scale.
Take DRAM. According to research by IBM, Samsung and Dell, DRAM contributes 22 percent to the power draw at a typical data center. But as the technology migrates from 50 nm designs to 30 nm, energy consumption drops by as much as 60 percent and the devices are able to withstand higher ambient temperatures, allowing IT to raise the thermostat in the entire server room.
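The cited figures translate into a sizable facility-wide saving. The sketch below runs the arithmetic for a hypothetical 1 MW data center; the 1 MW figure is an illustrative assumption, while the 22 percent share and 60 percent reduction come from the numbers above.

```python
# Back-of-the-envelope estimate of the DRAM savings described above.
# total_facility_kw is a hypothetical example figure, not from the article.
total_facility_kw = 1000.0   # assumed 1 MW data center
dram_share = 0.22            # DRAM's share of total power draw (cited above)
dram_reduction = 0.60        # cited drop moving from 50 nm to 30 nm parts

dram_kw_before = total_facility_kw * dram_share           # DRAM draw today
dram_kw_saved = dram_kw_before * dram_reduction           # saved by migration
facility_savings_pct = dram_share * dram_reduction * 100  # share of total draw

print(f"DRAM draw before migration: {dram_kw_before:.0f} kW")
print(f"Saved by 30 nm migration:   {dram_kw_saved:.0f} kW")
print(f"Facility-wide reduction:    {facility_savings_pct:.1f}%")
```

In other words, a 60 percent cut in a component that draws 22 percent of the total shaves roughly 13 percent off the facility's entire power bill, before counting the cooling headroom from higher allowable ambient temperatures.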
On the processor front, new low-power notebooks and desktops are driving more efficient designs. AMD, for instance, has trimmed its integrated CPU/GPU architecture down to 18 watts in the new E-300 and E-450 models. The company says the chips fill a gap between Intel's lower-power but less capable Atom processors and the 35 watt Celeron line.
Intel, of course, isn't sitting still when it comes to the power envelope. The company recently unveiled a prototype of an ultra-low-power device known as a near-threshold voltage processor. The "Claremont" operates at nearly the base voltage needed to switch transistors on and begin channeling electrical current. The chip is so efficient that, paired with a highly efficient memory interface, Intel was able to power a Linux PC with just a small solar cell. It could be that we're not far off from a potato-powered PC.
At the same time, a UC-Berkeley team has developed a new breed of micro-capacitor, which could be used to build a new generation of heat-free processors. According to eWeek, the device is made from ferroelectric materials and uses a concept called "negative capacitance" to deliver power with very little voltage. The group says the design can be applied to everything from DRAM to electric cars and RF devices, although actual performance and speed levels have yet to be determined.
All of this activity is destined to lower energy bills at data centers whether or not they formally adopt green strategies. Low-power operation is becoming a standard feature on server, storage and network devices across the board, which means you are already on your way to becoming more energy efficient even if you weren't planning on it.