Friday, February 20, 2009

Intel Eyes Cloud Computing With New Hardware, Software

Intel is making a push into cloud computing with forthcoming changes in its Nehalem server line aimed at large data-center deployments.

As part of that initiative, the company earlier this week outlined hardware and software updates that it said will lead to energy savings and offer the scalability necessary for cloud-computing services.

Intel hopes to provide technology for low-end and midrange servers that can share workloads effectively if demand for a cloud application spikes, said Jason Waxman, general manager of high-density computing at Intel. Server deployments would depend on the resources each cloud needs, with some requiring faster network connections or more memory. For example, the hardware needs of a multimedia-intensive service like Google Earth would differ from those of an e-mail service like Gmail, Waxman said.

In addition to providing servers that deliver cloud services efficiently, Intel wants the servers themselves to be power-efficient. Power consumption and cooling account for up to 23 percent of the cost of a server deployment, Waxman said, so the company is designing motherboards that cool systems more efficiently while reducing energy costs.

Intel is developing a new motherboard, designed for servers used in cloud computing, that cuts idle power draw to 85 watts, compared with 115 watts for standard Nehalem-based boards. A reduction of 30 watts per server could save up to US$8 million over three years in a deployment of 50,000 servers, Intel said.
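
That figure is roughly consistent with a back-of-the-envelope check. The Python sketch below assumes an electricity price of US$0.10 per kWh and a cooling overhead factor of about 2; neither number comes from Intel, so treat it only as a sanity check of the arithmetic.

    # Rough sanity check of the savings figure above (assumed prices, not Intel's numbers).
    servers = 50_000
    watts_saved = 30                      # 115 W idle -> 85 W idle
    hours = 3 * 365 * 24                  # three years of continuous operation

    kwh_saved = servers * watts_saved * hours / 1000
    price_per_kwh = 0.10                  # assumed electricity price
    cooling_factor = 2.0                  # assumed: each watt saved also saves ~1 W of cooling

    savings = kwh_saved * price_per_kwh * cooling_factor
    print(f"Estimated savings: ${savings:,.0f}")   # roughly $7.9 million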

The upcoming Nehalem-based boards will use Xeon processors due for release later this quarter. Intel will provide the motherboards through partners like Dell, Hewlett-Packard and IBM.

Some of the redesigned motherboards remove slots to discourage the use of power-hungry components and peripherals like graphics cards and hard drives; users can instead access centralized storage over a network. Intel is also grouping hot components together so that less energy is needed to cool a system.

"We've actually worked with certain cloud service providers to ... change the fundamental settings to come up with something in the silicon -- whether it's the chipset or CPU -- to meet a particular optimized need," Waxman said.

The motherboards will include voltage regulators and work with software tools that monitor power consumption. One such tool, called Dynamic Power Node Manager, caps and balances power consumption among servers to cut energy costs. Intel tested Node Manager with Chinese search engine Baidu, which saved 40 watts per server in a cloud deployment.
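
Node Manager itself is firmware exposed through a management interface, and the article doesn't describe how it is driven. As a loose illustration of the capping-and-balancing idea only, the sketch below splits a fixed group power budget across servers in proportion to their recent draw; the budget, bounds, and function are hypothetical, not Intel's interface.

    # Hypothetical sketch of group power balancing, not Intel's Node Manager API.
    GROUP_BUDGET_WATTS = 380              # assumed budget for a four-server example
    MIN_CAP, MAX_CAP = 85, 115            # loosely based on the idle/standard figures above

    def balance_caps(recent_draw_watts):
        """Give each server a power cap proportional to its recent demand."""
        total = sum(recent_draw_watts)
        caps = []
        for draw in recent_draw_watts:
            share = GROUP_BUDGET_WATTS * draw / total
            caps.append(round(max(MIN_CAP, min(MAX_CAP, share)), 1))
        return caps

    # Busy servers keep headroom; idle servers are capped close to 85 W.
    print(balance_caps([112, 95, 88, 86]))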

Intel is also providing software tools like compilers and debuggers to improve performance and analyze software code. Optimizing code helps tasks execute more quickly while using fewer system resources. That could save up to $20 million over three years in a 50,000-server deployment, Waxman said.

The company has worked on optimizing search code for most of the major search providers, Waxman said.

"We actually have people ... on site with these large cloud service providers doing hands-on tuning -- looking at their workloads ... to get more performance out of it," Waxman said.

One thing Intel can't control is the data-throughput bottleneck caused by slow network connections. Intel hopes to ease that with the VMDQ (Virtual Machine Device Queues) feature, which speeds up throughput for virtual machines by intelligently queueing server traffic. Hypervisors on servers with the queueing feature split traffic -- storage traffic and Web traffic, for example -- and balance it across multiple virtual machines. The feature cuts the bottlenecks that typically affect a 1Gbps network, Waxman said.

"In the past one virtual machine could hog up all the traffic. What you really want to be able to do is put things in a queue," Waxman said.

Taking advantage of virtualization technology, Intel also hopes to standardize the deployment of the DCMI (Data Center Management Interface) protocol across virtualized hardware and software environments to ease data-center management. The specification includes features to measure power consumption so that resources can be shared effectively across a large-scale server deployment. For example, DCMI can cap power consumption on servers and monitor temperature to prevent servers from overheating.
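
DCMI is a published management specification; the sketch below is not its command set, only a hypothetical management loop showing the kind of policy the article describes: read power and temperature, enforce a cap, and throttle a server that runs hot. The sensor and control functions are stubs invented for illustration.

    import random

    # Hypothetical management loop; read_power/read_temp/set_power_cap are stubs,
    # not real DCMI commands.
    POWER_CAP_WATTS = 110
    TEMP_LIMIT_C = 75

    def read_power(server):  return random.uniform(80, 130)   # stub sensor reading
    def read_temp(server):   return random.uniform(50, 85)    # stub sensor reading
    def set_power_cap(server, watts): print(f"{server}: cap set to {watts} W")

    def manage(servers):
        for s in servers:
            watts, temp = read_power(s), read_temp(s)
            if temp > TEMP_LIMIT_C:
                set_power_cap(s, POWER_CAP_WATTS - 20)   # overheating: clamp harder
            elif watts > POWER_CAP_WATTS:
                set_power_cap(s, POWER_CAP_WATTS)

    manage(["node-01", "node-02", "node-03"])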

About 14 percent of servers purchased today go into cloud deployments, Waxman said, and that figure will rise to 25 percent by 2012, with more cloud deployments going into large data centers of 50,000 servers or more.
