DreamWorks Goes Extreme with Scale-Out Storage System
Famed animation studio DreamWorks in April 2009 added Hewlett-Packard's fanciest new storage system, StorageWorks 9100 Extreme Data Storage, to its shops. The scale-out ExDS9100 system acts as an online reference library for DreamWorks' popular 3D films, and DreamWorks has big plans for the system.
Digital video quality is getting richer all the time, as are some lucky producers who hit the jackpot with movies that are box-office smashes.
As video continues to be rendered with ever more image data and as more bits per second are jammed onto disks, storage and accurate recall of all that data become an increasingly strategic part of the overall production picture—especially when it comes to stereoscopic 3D movies, which are enjoying a rebirth right now.
Stereoscopic three-dimensional movies that required two analog projectors and red-and-blue glasses to view them were a fad in the 1950s that eventually petered out due to lack of standards, quality controls and other factors.
But now 3D movies are back in digital form, and they come with a much higher quality quotient. They're also taking up much more capacity in studio data centers; studio IT administrators are well aware of the insatiable nature of the content monster.
DreamWorks is continually buying new storage. "Storage isn't a buying decision anymore," DreamWorks Senior Technologist Skottie Miller told eWEEK in 2008. "It's a way of life."
Here's a stark example of this dilemma: When DreamWorks' first "Shrek" movie debuted in May 2001, it required about 6TB of capacity in DreamWorks' data centers. Eight years later, the studio's most recent release, "Monsters vs. Aliens," requires a bit more elbow room—as in 93TB of capacity.
Both movies took more than four years to create and produce. Both have about the same running time: "Shrek" is 90 minutes, "Monsters vs. Aliens" is 94 minutes. There's simply a lot more depth of field, colors, action and special effects as the movies get increasingly sophisticated.
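For a sense of what that jump implies, here is a quick back-of-the-envelope calculation (a minimal sketch; the 6TB, 93TB and eight-year figures come from the comparison above, and the rest is simple compound-growth arithmetic):

    # Back-of-the-envelope growth rate from the "Shrek" vs. "Monsters vs. Aliens" figures above
    shrek_tb = 6        # "Shrek" (May 2001)
    mva_tb = 93         # "Monsters vs. Aliens" (2009)
    years = 8

    multiple = mva_tb / shrek_tb                  # about 15.5x more capacity per film
    annual_growth = multiple ** (1 / years) - 1   # about 41% compound growth per year
    print(f"{multiple:.1f}x over {years} years, roughly {annual_growth:.0%} per year")

That works out to roughly a 15-fold increase per film, or about 40 percent more capacity needed every year.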
The bottom line: If you're going to have a quality product, you have to make a home for it. With all the new content pouring into its coffers on a 24/7 basis from its artists, DreamWorks had to figure out how to classify and store all those terabytes of video—and in an easily accessible archiving system.
DreamWorks' storage systems, located in data centers in Northern and Southern California and in Bangalore, India, use products from Hewlett-Packard, NetApp and Ibrix for different duties. HP has supplied the studio's high-powered workstations for the last eight years; the current machines are built around dual-core Intel "Woodcrest" processors.
In April 2009, the studio—which has a longstanding relationship with HP—added the company's newest package, the HP StorageWorks 9100 Extreme Data Storage System. This scale-out system acts as an online reference library for "Monsters vs. Aliens" and previous films, such as "Madagascar," "Bee Movie" and "Kung Fu Panda."
"Scale-out" is a relatively recent data center industry buzzword referring to architectures for systems running thousands of servers that are required to scale nearly ad infinitum in order to comfortably handle massive workloads.
Production isn't going to be slowing down any time soon, with all the potential profits to be made. As of June 15, "Monsters vs. Aliens" had banked $195,246,609, according to industry researcher Box Office Mojo.
"Not only are we making more 3D-type movies, but we're ramping up our production schedule from four movies every two years to five movies in two years," Derek Chan, head of digital operations for DreamWorks Animation, told eWEEK.
Wednesday, June 17, 2009
Quantum CEO Offers His Take on EMC's Pursuit of Data Domain
Veteran IT executive Rick Belluzzo tells eWEEK he believes EMC's motivation is more about keeping Data Domain out of NetApp's hands than about the acquisition of highly regarded deduplication software.
A lot of people in the data storage world are watching the skirmish between EMC and NetApp as they contest who is going to acquire deduplication specialist Data Domain.
The prevailing wisdom is that EMC and NetApp both want to acquire Data Domain for its highly regarded, midrange-market storage deduplication appliances, which are fast, cost-effective and easy to use.
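As an aside for readers new to the technology: deduplication boils down to splitting data into chunks, fingerprinting each chunk, and storing each unique chunk only once, with repeats replaced by references. The sketch below uses fixed-size chunks and SHA-256 hashes purely for illustration; production appliances such as Data Domain's rely on variable-size chunking and a great deal more engineering.

    import hashlib

    CHUNK_SIZE = 4096  # fixed-size chunks keep the example simple

    def dedupe(data):
        """Return (unique chunk store, ordered list of chunk fingerprints)."""
        store, refs = {}, []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # store each unique chunk once
            refs.append(digest)               # references rebuild the original stream
        return store, refs

    # A highly repetitive stream dedupes dramatically
    stream = (b"A" * CHUNK_SIZE) * 100 + (b"B" * CHUNK_SIZE) * 100
    store, refs = dedupe(stream)
    stored = sum(len(c) for c in store.values())
    print(f"logical {len(stream)} bytes, physical {stored} bytes, {len(stream) // stored}:1 reduction")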
But at least one key figure, Quantum CEO Rick Belluzzo, thinks there may be more to it than that.
A little background: Data Domain's board of directors accepted a revised offer of $1.9 billion in cash and stock from NetApp on June 3 and intends to stick with it. EMC, swimming in cash, has offered a cool, unsolicited $1.8 billion in straight dollars to complete the transaction.
It's also no secret that EMC would like to extend its reach into the midrange and SMB storage markets during the next few years, and Data Domain would be one way to improve that standing immediately.
But there is a difference of opinion about what exactly EMC's motives are.
All these merger and acquisition shenanigans don't play all that well with Quantum, a longtime EMC partner that supplies both deduplication software and virtually all the tape storage that EMC sells.
Even though Data Domain has accepted the NetApp offer, there are many people in the business who will not yet write off EMC. EMC is an acquisition-oriented company accustomed to getting what it wants. The Hopkinton, Mass.-based company has taken over about 50 storage- and security-related firms in the last six years.
If EMC were to acquire Data Domain and its prized deduplication wares, that would give EMC no fewer than three different deduplication products. EMC started by buying Avamar Technologies in 2006 for $165 million, then signed a partnership with Quantum and now is trying to add Data Domain.
How many dedupe choices does one company need?
Belluzzo is quite aware of all this and is convinced that EMC's dance with Data Domain is more about keeping Data Domain out of NetApp's grasp than it is about acquiring any additional technology—no matter how good it may be.
"EMC already has a great relationship with us for enterprise dedupe, and it is working well for both companies," Belluzzo told eWEEK. "I can't see that adding Data Domain's products really will add that much value in the broader scope of their product line.
"I do think that NetApp, which is a growing company, would benefit greatly by adding Data Domain because it has products that NetApp really needs. I believe that EMC just doesn't want to see that deal happen."
Belluzzo pointed out that Quantum's deduplication software is much better suited for the high-end enterprise market because it scales further than Data Domain's.
"Our DXi [storage system line] can start at less than 1TB and go all the way up to 220TB of usable capacity with a single software architecture. Data Domain tops out at 32TB," Belluzzo said.
"Quantum also provides tight integration with tape machines, and features policy-based deduplication and replication capacities for both VTLs [virtual tape libraries] and NAS [network-attached disk storage], and it's all centrally managed."
Last week, Quantum introduced its DXi2500-D, a high-performance deduplication appliance for remote and branch offices that is optimized for replication back to a central data center.
"This has about four to five times the capacity of Data Domain's appliance at about the same price," Belluzzo said.
EMC CEO, President and Chairman Joe Tucci has said publicly that he admires Data Domain for its accomplishments and that the smaller company often reminds him of EMC itself.
Tucci also has said that if EMC does acquire Data Domain, EMC will expand development of one of its flagship products that uses Quantum's software.
Another interesting bond between Quantum and EMC, Belluzzo said, is the fact that EMC recently loaned $75 million to Quantum to improve Quantum's capital structure.
If EMC can persuade Data Domain's board and stockholders to reject the NetApp deal, it will gain a stronger presence in the midrange storage market and gain a lot of loyal Data Domain customers. The midmarket, where Data Domain is a growing supplier, is by far the fastest-developing segment of the overall market.
HP, VMware Team Up on Virtualization Management
HP and VMware are growing their partnership to make it easier for businesses to manage their virtualized and physical environments using a single offering. VMware is incorporating HP’s Discovery and Dependency Mapping application into its upcoming vCenter ConfigControl software. In addition, HP is supporting VMware’s ThinApp software with its own Client Automation management software.
Hewlett-Packard and VMware are looking to make it easier for businesses to manage their virtual and physical client and server environments.
The two companies at the HP Software Universe 2009 show June 16 in Las Vegas announced that VMware in 2010 will integrate HP’s Discovery and Dependency Mapping application into its vCenter ConfigControl software.
The combination of the management software offerings will give users of VMware’s vSphere virtualization platform greater visibility into their environments and a better way to map business services in the virtual environment into their physical systems.
Businesses will have a single window through which to also automate such management tasks as change detection, provisioning, patching and compliance and security enforcement, according to officials with HP and VMware.
In addition, HP’s Client Automation management platform will now support VMware’s ThinApp application virtualization technology. The support means that users will have an easier time keeping track of virtualized and physical applications.
VMware ThinApp users can take advantage of the preconfigured templates in HP’s Client Automation software and use reports generated by the HP offering to track virtual and physical applications for enhanced asset management.
The two companies also will develop joint go-to-market and sales programs.
Businesses will get multiple benefits from the HP-VMware collaboration, according to Ramin Sayar, vice president of products, software and solutions at HP.
"Customers are looking for a dramatically better approach to IT management in order to reduce costs and risks, while achieving integrated seamless management of the physical and virtual datacenter," Sayar said in a statement. "The combination of HP software and VMware solutions will provide customers with an end-to-end automated solution for building and managing next-generation datacenters."
Dell Expands Enterprise Technology Portfolio
Dell focuses on virtualization and high-performance computing for its next round of updates to its lines, which include new PowerEdge servers and the EqualLogic PS4000 storage array, as well as new consulting services. The new Dell products have been tailored to focus on the needs of both enterprise and SMBs.
Dell announced a variety of business-centric products and services on June 17, primarily servers and consulting offerings designed to make the creation of data centers, and the implementation of virtualization, a more efficient task for the enterprise and SMBs alike.
The company’s small-to-midsize-business offerings include the EqualLogic PS4000 storage array and the PowerVault NX3000 NAS (network-attached storage) device. The former is designed to offer enterprise-class storage virtualization, thin provisioning and management capabilities, while the latter reduces duplicate files with SIS (Single-Instance Storage) technology and shares files across Windows and non-Windows clients. The NX3000 can also double as an optional iSCSI target in support of application data.
Dell is also introducing the PowerEdge T410 and T710 tower servers, which are aimed specifically at both SMBs wrestling with small amounts of space and businesses across the size range looking to boost their productivity.
At 24 inches deep, the short chassis of the PowerEdge T410 is designed to fit into tight spaces and shallow racks. Both the T410 and the T710 feature one-button deployment via the Lifecycle Controller, and the T710 can include up to 16 drives for large local storage capacity.
On the high-performance computing end, Dell is also introducing the PowerEdge R410, designed for compute-intensive workloads. The company claims the new servers deliver a 73 percent performance improvement over the last generation while also saving power. The device includes the DMC (Dell Management Console) and Dell Lifecycle Controller for simplified management.
In addition, Dell is pairing its new hardware lines with a variety of consulting and other services designed to streamline implementation into both SMBs and the enterprise. The Dell ProConsult offerings include five consulting options designed to optimize the data center, each focusing on a specific area: Platform Optimization & Virtualization, Data Center Planning & Management, Disaster Recovery, Data Management, and Facilities Efficiency.
Dell is also offering two “business-ready” virtualization configurations to introduce a virtual enterprise infrastructure into a business. These two infrastructures include Data Center Virtualization Configuration, which combines Dell PowerEdge M-series blades and EqualLogic PS600 iSCSI storage technology, Cisco Catalyst networking switches, VMware vSphere 4 and PlateSpin Migrate from Novell; and Small and Medium Business Virtualization Configuration, which combines a variety of hardware, including the PowerEdge R710 and the Dell PowerVault MD3000i, with Microsoft’s virtualization suite.
Other additions to Dell’s virtual solutions line include enhanced hypervisors, with support for VMware vSphere 4, Windows Server 2008 R2 Hyper-V and Citrix Essentials for XenServer 5.0; enhanced disaster recovery with reduced application downtime and managed data protection; and application virtualization. Dell ProManage Virtual Server Remote Monitoring and Reporting gives IT administrators improved visibility into VM performance, letting them determine the average utilization of everything from processor to disk at the VM level and, ultimately, save time.
Friday, June 12, 2009
RingCube Takes On VMware, Citrix in Desktop Virtualization
RingCube is unveiling vDesk 2.0, the latest version of its desktop virtualization product. A key new offering within vDesk 2.0 is the Workspace Virtualization Engine, which is designed to make it easier for enterprises to manage, deploy and secure their desktop virtualization environments. It also is a key differentiator for RingCube in a competitive space that includes VMware and Citrix, RingCube officials say.
RingCube Technologies is rolling out the next generation of its vDesk desktop virtualization technology, including a new feature designed to improve the manageability and security around the offering.
RingCube’s vDesk 2.0, announced May 1, includes the company’s WVE (Workspace Virtualization Engine), which company officials say is a key differentiator in a highly competitive field that includes such companies as VMware and Citrix Systems.
It also comes the same week that Quest Software, at the Microsoft Management Summit in Las Vegas, announced it was integrating its Quest vWorkspace virtual desktop management offering with Microsoft System Center Virtual Machine Manager and Microsoft App-V (Application Virtualization) technology.
Doug Dooley, vice president of product management at RingCube, said the company is looking to separate itself from other vendors in the desktop virtualization space by coming out with solutions that don’t require a lot of upfront costs or require a lot of duplicate Windows licenses.
VDI (virtual desktop infrastructure) solutions require high upfront costs—sometimes in the millions of dollars—and they bring with them more storage and power and cooling expenses, Dooley said. By comparison, a vDesk solution for 2,500 users runs around $500,000, he said.
In addition, mobility is an issue with VDI, Dooley said.
An eWEEK Labs analyst says there's no need to rush into VDI.
RingCube’s vDesk offering is designed to enable enterprise users to put the technology on their work PCs or on unmanaged systems, such as their home computers. When they turn on vDesk, it gives them a personalized virtual workspace, complete with their own settings, files, applications and desktop, Dooley said. The company’s MobileSync technology then lets users synchronize their vDesk workspace between PCs, USB drives or other portable media, a network file share or VDI environments.
RingCube’s WVE in vDesk 2.0 offers what Dooley called a lightweight virtual desktop, with an isolated network stack and support for such applications as endpoint security, databases and PC management software, which require drivers and security services.
Among the components of WVE are vDeskNet, which enables virtual networking by separating and isolating network traffic from the host PC, and virtual user management, which gives the virtual workspace a unique set of user accounts separate from the host PC.
The Virtual Security Store offers a separate storage area within the virtual workspace for such items as certificates, and Virtual Windows Services offer improved application isolation from the host machine.
Other security and isolation controls in vDesk 2.0 come through virtual workspace encryption via integration with third-party software, as well as a virtual networking stack that isolates all network traffic inside the virtual workspace from the host system.
The goal is to give users an easier and more secure way to run a virtual desktop environment, Dooley said.
“This thing is not the hardest thing to get your arms around as far as deployment is concerned,” he said.
The vDesk solution also offers improved management, enabling enterprises to create a single master workspace and then give employees their own version of that master copy. There is also a more streamlined log-in process.
Dooley said businesses are beginning to take a hard look at desktop virtualization solutions, driven in large part by the need to reduce operating and capital costs and to improve business continuity.
“It’s so early in the [desktop virtualization space],” he said. “We are where we were with server virtualization five years ago.”
Dooley said he expects interest in desktop virtualization to grow, and sees Microsoft’s upcoming introduction of Windows 7 as a driver to get enterprises thinking more about their desktop environments.
“I don’t think people are going to stay on the status quo forever,” he said.
RingCube’s vDesk 2.0 is available immediately, starting at $200 per user. RingCube also will be showcasing the new offering at the Citrix Synergy show May 5-6 in Las Vegas.
Hyperformix Eases Virtualization Capacity Planning
Hyperformix is expanding the capabilities of its Capacity Manager and Data Manager software products to better support virtualized environments. The software offerings are designed to help IT administrators map out the performance and capacity needs of their virtualization initiatives, and also can enable them to better figure out such business problems as server consolidation and application upgrades. The software supports a wide range of virtualization technologies from such vendors as VMware, Microsoft, Citrix, Sun, HP and IBM.
Hyperformix wants to make it easier for businesses to plan out their virtual environments.
Hyperformix is making enhancements to its Capacity Manager and Data Manager offerings that are designed to not only enable enterprises to more effectively map out the performance and capacity needs of their virtualization initiatives, but also to budget for the IT support that will be needed, find ways to reduce costs and extend the value of their current infrastructure.
“Our customers look to us to help them accurately plan and communicate what it will take to support business services in IT, and where cost-saving opportunities exist,” Bruce Milne, vice president of products and marketing for Hyperformix, said in a statement.
Capacity Manager 4.0 and Data Manager 3.1, announced May 5, can automatically identify underutilized virtual machines and systems that can be safely consolidated, according to Hyperformix officials.
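Hyperformix doesn't spell out its method, but the general shape of an "underutilized VM" check is easy to picture: examine utilization samples over a window and flag machines that stay below a threshold. A hypothetical sketch follows; the threshold, window and sample data are all illustrative, not Hyperformix's algorithm.

    # Illustrative only -- not Hyperformix's actual algorithm.
    def is_underutilized(cpu_samples, threshold_pct=10.0, required_fraction=0.95):
        """Flag a VM whose CPU stays under threshold_pct in at least required_fraction of samples."""
        below = sum(1 for pct in cpu_samples if pct < threshold_pct)
        return below / len(cpu_samples) >= required_fraction

    samples = {                       # hypothetical hourly CPU percentages
        "build-vm": [4, 6, 3, 2, 5, 7, 4, 3],
        "db-vm":    [55, 60, 72, 48, 66, 70, 58, 61],
    }
    candidates = [vm for vm, cpu in samples.items() if is_underutilized(cpu)]
    print("Consolidation candidates:", candidates)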
The software also offers automated dashboards and reporting capabilities that enable IT administrators to more easily convey complex data to business users and identify cost-saving opportunities through the collection of data on such areas as hardware costs and power consumption.
In addition, the software products support virtualization technology from such vendors as VMware, Microsoft, Citrix Systems, Sun Microsystems, Hewlett-Packard and IBM, as well as modeling hardware, operating systems and other components.
Hyperformix also offers solution kits designed to help IT administrators and business users figure out such issues as server consolidation and application upgrades.
HyTrust Looks to Build Community Around Virtualization
HyTrust, which launched as a company in early April with the Enterprise Edition of its namesake virtualization management appliance, is rolling out a free Community Edition aimed at SMBs. At the same time, HyTrust also is making a push to create a community around its technology to enable information sharing among users and to speed up its own product development. HyTrust’s technology currently manages VMware environments, though support of Xen and Microsoft’s Hyper-V is on the way.
A month after launching the company with a policy-based management appliance for virtualized environments, officials at HyTrust are now looking to build a community among its customers.
HyTrust May 5 announced that it is releasing a free community edition of its namesake appliance, aimed at giving SMBs a cost-effective way to get into virtualization and cloud computing. The HyTrust Appliance Community Edition, which is designed to give users a central control point for managing and monitoring virtualized environments, also is a tool that enterprises can use to get started, according to HyTrust officials.
The free community version offers the same functionality and features as HyTrust’s Enterprise Edition, but with limitations. For example, users can only have three protected hypervisor hosts.
At the same time, HyTrust is kicking off an online community designed to support its vision of a more easily managed virtualized environment, to create a repository that enables users to share information and give feedback to the company, and to help HyTrust direct its R&D efforts.
The Community Edition lets larger enterprises easily evaluate the capabilities of HyTrust’s technology, and give feedback on their findings. In turn, HyTrust will be able to speed up product development and innovations, officials said.
“The potential for the HyTrust Community is unbounded,” HyTrust CEO Eric Chiu said in a statement. “We see this not only as a terrific opportunity for HyTrust to meet currently unmet needs of the market, but also as a great way for HyTrust to harness the powers of distributed peer review.”
The Community Edition is available to members of the HyTrust Community.
HyTrust launched the company April 7 with the Enterprise Edition, which can be bought as a 1U appliance or as software that can run on the customer’s hardware.
HyTrust currently can manage VMware environments, though it will expand its reach to the Xen hypervisor from Citrix Systems later in the year, officials said. The company also is working on products to support infrastructures using Microsoft's Hyper-V technology.
Vizioncore Creates Self-Service Virtualization
Vizioncore’s vControl offers a Web interface and customizable templates that let users build and deploy virtual machines, freeing up IT administrators from such tasks and giving them more time to manage the VMs. The product also supports multiple virtualization platforms, including those from Microsoft, VMware, Citrix and Sun.
Vizioncore is looking to bring self-service capabilities to virtualization.
Vizioncore’s new vControl management tool, announced May 5, is designed to let end users provision virtual machines themselves, offloading such tasks from IT administrators.
With end users handling the provisioning and deployment of virtual machines, IT administrators are free to manage the VMs through a single interface provided by Vizioncore.
The product features a Web interface and customizable templates for end users to build and deploy virtual machines for themselves. Another interface lets administrators control multiple virtualization platforms, including VMware ESX and ESXi, Microsoft Hyper-V, Citrix Systems’ XenServer and Sun Microsystems’ Solaris Zones.
vControl also offers out-of-the-box workflows, a visual workflow editor to build new workflows, a Web services interface and an SDK (software development kit) for integration with third-party systems, all aimed at driving down administrative costs.
IT administrators can also bring high availability to unlimited numbers of virtual machines, further driving down operational costs.
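Vizioncore hasn't published API details here, so the sketch below is a purely generic illustration of the template-plus-overrides pattern that self-service provisioning portals such as vControl are built around; every name in it is hypothetical.

    # Generic self-service pattern, NOT Vizioncore's actual API: merge a template
    # with user overrides into a provisioning request for the back end to execute.
    TEMPLATES = {
        "small-web":  {"vcpus": 1, "memory_mb": 2048, "disk_gb": 20},
        "medium-app": {"vcpus": 2, "memory_mb": 4096, "disk_gb": 40},
    }

    def build_request(template_name, owner, **overrides):
        spec = dict(TEMPLATES[template_name])   # start from the admin-defined template
        spec.update(overrides)                  # apply the user's customizations
        return {"owner": owner, "template": template_name, "spec": spec}

    print(build_request("small-web", owner="jdoe", memory_mb=3072))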
vControl is available immediately, starting at $399 per socket.
Oracle's Virtual Iron Buyout Will Provide Essential VM Tool Set
Oracle has a number of reasons to want to own a mature virtualization tool set, and acquiring Virtual Iron contributes to that goal. To become the full-service IT infrastructure company it envisions, Oracle needs more control of virtualized software and hardware for all its deployments. Oracle doesn't want to keep paying a so-called virtualization tax to third-party providers such as VMware.
Oracle, a company with its own permanent mergers and acquisitions office, is adding an important ingredient to its product catalog in a quest to become the newest all-purpose IT systems company: a new-generation tool box that will administer both Windows and Linux virtualization deployments.
When it closes a deal to acquire Virtual Iron announced May 13, Oracle will join EMC (owner of VMware), Microsoft (Hyper-V), Citrix Systems (XenServer) and Sun Microsystems (Sun Containers, xVM Ops Center and VirtualBox software) as one of the only IT systems providers that own server virtualization products.
After the summer of 2009, that number of companies will shrink by one, because Sun also will have become property of Oracle in the widely reported $7.4 billion acquisition deal announced April 20.
VMware products are installed on about 85 percent of all enterprise IT systems, with the others all claiming much smaller pieces of the virtualization pie.
Oracle has a number of reasons to want to own a mature virtualization tool set.
First, to become the full-service IT infrastructure company it envisions, it needs more control of virtualized software and hardware for all its deployments. Oracle doesn't want to keep paying a "virtualization tax" to third-party providers like VMware or any other company.
Second, Oracle needs a more complete set of tools for its home-developed Xen-based hypervisor, Oracle VM. It's not an accident that Virtual Iron's platform also is Xen-based, built on open-source code. Oracle's virtual machine controls currently do not have management features as good as Virtual Iron's LivePower, which offers much greater control of server power consumption. So the acquisition also is a green IT move for Oracle.
Oracle intends to bundle Virtual Iron's tools with its own VM layer to give users a full-stack management console for both virtual and physical systems. Virtual Iron also features better capacity utilization and virtual server configuration tools than Oracle offers today.
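LivePower's internals aren't detailed in this article, but the general idea behind power-aware VM management is simple: pack workloads onto as few hosts as will hold them, then power down the idle hosts. A simplified first-fit sketch follows; all numbers are invented for illustration and this is not Virtual Iron's algorithm.

    # Simplified power-aware consolidation (first-fit decreasing bin packing);
    # illustrative only, not Virtual Iron's LivePower algorithm.
    HOST_CAPACITY = 100   # arbitrary load units per host

    def consolidate(vm_loads, host_count):
        hosts = [0] * host_count
        for load in sorted(vm_loads, reverse=True):
            for i, used in enumerate(hosts):
                if used + load <= HOST_CAPACITY:
                    hosts[i] += load
                    break
        return hosts

    hosts = consolidate([35, 20, 15, 10, 10, 5, 5], host_count=4)
    idle = sum(1 for used in hosts if used == 0)
    print(f"host loads: {hosts}; {idle} host(s) could be powered down")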
Few Independent Virtualization Companies Survive
With Virtual Iron leaving the ranks of independent virtualization providers, only a small number of them remain in the market, including Parallels, the open-source OpenVZ project and Ubuntu Linux.
"Market consolidation seems to be upon us," Galen Schreck, an analyst with Forrester Research, told eWEEK. "Plus, Citrix's move to give away a full-featured version of XenServer makes it pretty hard to charge for this kind of functionality.
"What's a company like Virtual Iron to do? Both are Xen-based, and have pretty similar capabilities. Sure, Citrix charges extra for its most advanced management, but you get a lot of functionality for no money whatsoever. Meanwhile, VMware is the clear market leader with Microsoft being the next most popular platform in a distant second place."
Virtual Iron aimed its wares mostly at the small and midsize business markets. Is Oracle making a play for the smaller markets with this acquisition?
"I don't think this acquisition is about smaller markets—it's more of an upgrade to the management capabilities of Oracle's own Xen-based hypervisor," Schreck said. "They get a better UI [user interface] as well as dynamic workload management and power management."
Schreck said it is still unclear how Oracle will handle the integration of both Sun and Virtual Iron into its catalog.
"There is definitely some overlap here," Schreck said. "Neither product has a lot of customers, so it's not a question of which has more market traction. Sun's xVM Ops Center is a nice product, but Virtual Iron is more Windows-friendly—which gives Oracle immediate access to the largest virtualization market."
'Interesting dynamic' with VMware
The Virtual Iron acquisition creates an interesting competitive dynamic with VMware, Zeus Kerravala of The Yankee Group told eWEEK.
"They're not the best of partners, but they do some work together," Kerravala said. "As for Sun, it [Virtual Iron] is a parallel offering. Oracle didn't have any way to virtualize Windows or Linux environments."
Katherine Egbert, an analyst with Jefferies & Co., said she believes the acquisition is a clear sign that Oracle wants to move deeper into the midmarket, a place it has hardly penetrated in the past.
"It is a midmarket play. Virtual Iron has lot of government and education [customers] in their installed base," Egbert said. "Oracle gets the full stack now, everything from the bare-metal hypervisor up to the highest-level user application."
Windows 7's XP Mode Will Be a Desktop Virtualization Boost
Windows 7's XP Mode combines the company's desktop and presentation virtualization technologies to serve up applications that won't run properly on Windows 7 from a virtual XP SP3 instance. By tapping desktop-based virtualization as a bridge for Windows software compatibility gaps, organizations could achieve a smooth transition from Windows to a competing platform.
Last month, Microsoft announced that Windows 7 will include an XP Mode, which combines the company's desktop and presentation virtualization technologies to serve up applications that won't run properly on Windows 7 from a virtual XP SP3 instance.
When I heard about XP Mode, I was immediately struck by the marketing benefits that the feature can provide for non-Windows platforms. That's because tapping desktop-based virtualization as a bridge for Windows software compatibility gaps is one of the keys to achieving a smooth transition from Windows to a competing platform.
When someone asks me about moving away from Windows to Linux or the Mac, I tell them that they'll most likely find native Mac or Linux replacements for their Windows applications, but that it may be necessary to run a copy of Windows in a virtual machine for certain applications.
I keep a Windows VM on my Linux notebook for things like product testing and attending GoToMeeting conferences. (Microsoft's own Live Meeting is, by comparison, very Linux-friendly.) The Windows VM approach to platform-switching can work pretty well, but this tactic does have various wrinkles.
First, you need a licensed copy of Windows and enough RAM to devote to the Windows guest without starving your host OS. Also, you'll need the same sort of security software and patching policies you would apply to a regular Windows instance. Finally, depending on the type of application you're dealing with, performance might be an issue, and applications that require direct access to hardware resources might not work at all.
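To make the RAM point concrete, here is a small Linux-only sketch that reads /proc/meminfo and picks a guest allocation that leaves headroom for the host; the 4GB guest target and 2GB host reserve are arbitrary example values, not recommendations.

    # Linux-only sketch: size a Windows guest's RAM without starving the host.
    # The guest target and host reserve are arbitrary example values.
    def meminfo_kb(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise KeyError(field)

    total_mb = meminfo_kb("MemTotal") // 1024
    host_reserve_mb = 2048       # keep this much for the Linux host
    desired_guest_mb = 4096      # what we'd like the Windows guest to have

    guest_mb = min(desired_guest_mb, max(0, total_mb - host_reserve_mb))
    print(f"Host reports {total_mb} MB; give the Windows guest about {guest_mb} MB")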
Now that Microsoft is pushing virtualization as a crutch for migrating from XP to Windows 7, it may occur to many that upgrading from XP to 7 wouldn't prove significantly more painful than moving from XP to OS X or Linux—particularly since XP Mode on Windows 7 shares most of the same wrinkles that mar XP on Linux or Mac setups.
More importantly, though, XP Mode will introduce the idea and the practice of running multiple, reasonably isolated OS instances on a single machine to a broader pool of users. As more people embrace the practice, I expect to see Microsoft and other vendors work out more of its kinks and, eventually, offer new classes of products aimed specifically at enabling these Russian doll desktop scenarios.
Despite the possibly beneficial side effects of XP Mode for alternative platforms, I believe that Microsoft and Windows are best-positioned to take advantage of the rise of the virtual desktop machines.
As eWEEK Labs has discussed recently, the lines between personal and company devices and computing environments are now more blurry than ever. As I see it, the best way to provide both individual users and large organizations with the control they require to satisfy their needs is to provide multiple virtualized environments on a single piece of hardware.
Given its advantages around available applications, integrated identity and desktop management capabilities, and mind and market share among businesses, Windows seems to be the clear option for delivering the managed corporate desktop element of these mixed environments.
XP Mode could be a first step toward colonizing the virtual desktop territories, but for something like this to really take off, Microsoft will have to begin approaching VMs as a first-class "hardware" platform and look toward stripping out bits that aren't required in these environments. Also, we'll have to see more advances in bare-metal desktop and notebook hypervisor technologies, like those demonstrated by Citrix in the form of its Project Independence.
Maybe desktop platform diversity and Microsoft monoculture can live side by side, after all. If nothing else, Microsoft would probably be less touchy about mounting "I'm a Mac" choruses if managed Windows instances lurked beneath more of Apple's matte aluminum covers.
Cisco's Nexus 1000v Virtual Switch Is Poised to Push Virtualization Further, Faster
Virtualization in the enterprise is about to open up, and it's not because of VMware's new vSphere, Microsoft's Hyper-V or Cisco's Unified Computing System. The tipping point will come with the release of Cisco's 1000v virtual switch, which will open up virtualization to companies' networking groups, lowering barriers and opening new possibilities.
While I accept that x86-based server virtualization is a growing fact of life in the data center, it wasn't until I took a troubleshooting class at Interop Las Vegas in May that I fully understood why server virtualization is about to go further, faster.
The trigger isn’t virtualization giant VMware's recent release of vSphere 4, although this major platform release is fundamental to further virtualization adoption. The trigger isn’t the recognition of the improvements that Microsoft's Hyper-V and the upcoming release of Windows Server 2008 R2 will bring.
No, server virtualization is poised to go further and faster because of something Cisco is about to do—but it has almost nothing to do with that company's release of its Unified Computing System.
Cisco is wrapping up the beta tests of its Nexus 1000v virtual switch. With the release of VMware's vSphere 4, third-party switches including the Nexus 1000v can be incorporated into the virtualized data center infrastructure. The significance of this news is hard to overstate.
Until now, switching in VMware virtualized environments has been handled by the same people who were creating the virtual machines: the systems group. The network group was often left out of the equation of creating new systems for a number of reasons, not least of which is that there was little or no physical switching work required to bring a new virtual system online. This has meant that a fair number of systems people have been getting a crash course in switching and networking.
As long as the virtualization project was limited to test and development, this wasn't such a big deal. However, the presenters at this tuning and tweaking workshop at Interop quoted analyst figures that said virtualization has penetrated about 10 percent to 15 percent of the data center. This was borne out in an informal audience poll at the session.
With the advent of the Cisco Nexus 1000v switch, which is a fully operational switch realized entirely in software, network staffers who may have raised concerns about, or put up barriers to, further server virtualization projects will be able to use the familiar Cisco command line, management tools and scripts to help push those projects forward.
By reducing the friction between the system and network groups--both of which have highly specialized, differentiated and essential skills--VMware has set the stage for a wave of data center virtualization.
I believe that other network switch makers are preparing software-only versions of their wares, but none to my knowledge has been announced. And even Cisco's switch is not commercially available yet. However, making room for best-of-breed, third-party components is a step in the right direction.
For one thing, using Cisco networking infrastructure means that the trained work force ready to tune and tweak the virtual infrastructure just got a lot bigger. Networking staff with architecting and operational experience--even in the purely physical world--will be tremendously useful in creating workable virtualized data centers. And this additional expertise couldn't come a moment too soon if the content from the Interop session is on target.
According to Barb Goldworm, president and chief analyst at FOCUS, storage performance and capacity management are the No. 2 and No. 3 limiting factors in virtualization projects. Adding networking experts already familiar with Cisco tools, and using a Cisco switch that can be slotted into an existing network management system, means that IT managers can focus on those storage and capacity management concerns.
VMware vSphere 4 Raises the Virtualization Bar
REVIEW: VMware vSphere 4--the renamed and upgraded VMware Infrastructure--will allow IT departments to place application workloads on the most cost-effective compute resource. With its new vNetwork Distributed Switch and support for third-party, integrated network switches, vSphere 4 removes barriers that made it difficult to implement and manage virtual machine infrastructure on a large scale. Advances made in this version of VMware's infrastructure platform also include new linked management consoles, host profiles that ease ESX Server creation and maintenance operations, and enhanced virtual machine performance monitors.
VMware has changed the name of its flagship VMware Infrastructure to VMware vSphere 4, and in the process has added new switching and management features that raise the bar for x86 data center virtualization technology.
The VMware marketing team has been working overtime to promote vSphere 4 as the first cloud operating system. IT managers can safely set aside this breathless chatter and focus on the fact that vSphere will allow IT departments to place application workloads on the most cost-effective compute resource.
vSphere 4 puts VMware well ahead of the virtualization pack. Click here for images.
With its new vNetwork Distributed Switch and support for third-party, integrated network switches—including the forthcoming Cisco Nexus 1000v—vSphere 4 removes barriers that made it difficult to implement and manage virtual machine infrastructure on a large scale.
The advances made in this version of VMware's infrastructure platform also include new linked management consoles, host profiles that ease ESX Server creation and maintenance operations, and enhanced virtual machine performance monitors. These new capabilities place vSphere 4 well ahead of Microsoft's Hyper-V platform and open-source projects based on the Xen hypervisor, and earn the new VMware platform an eWEEK Labs Analyst's Choice award.
Eyes and Ears of the Platform
The eyes and ears of vSphere 4 are the significantly updated VirtualCenter, now called vCenter Server 4.0. vCenter Server still runs on a Windows-based system, which can be either a physical or virtual machine. Large installations will also need to provide access to either a Microsoft SQL Server system or an Oracle database system to store and organize server data.
vCenter Server provides a very handy search-based navigation function that enabled me during tests to quickly find virtual machines, physical hosts and other inventory objects based on a wide variety of criteria. For example, I was able to find physical hosts using more than 10 different characteristics, including power state and virtual machine properties. This is a good tool for quickly locating unused virtual machines and, for IT managers in large networks, is in itself a compelling reason to consider vSphere 4.
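As a rough illustration of the kind of inventory query that search function performs under the covers, the sketch below uses the open-source pyVmomi Python bindings to the vSphere API to list powered-off virtual machines. The host name and credentials are placeholders, and the example is mine rather than anything shipped with vCenter Server.

import ssl
from pyVim.connect import SmartConnect, Disconnect  # pyVmomi bindings to the vSphere API
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut: skip certificate validation
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)   # placeholder credentials
content = si.RetrieveContent()

# Walk the full inventory and report virtual machines that are powered off.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
        print(vm.name)

Disconnect(si)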
In addition to making it significantly easier to monitor and manage virtual machines, vSphere 4—with the vNetwork Distributed Switch—has taken a big step forward in easing the management burden of virtual networks.
Until now, a standard virtual network switch was created and managed on each ESX Server system. Using the vNetwork Distributed Switch, I was able to create virtual switches that spanned multiple ESX hosts.
For large VMware installations, it is hard to overstate the importance of this advance. The time savings in avoiding per-ESX switch configuration changes alone will likely be significant.
vSphere 4 also allows the integration of third-party distributed switches.
The Cisco Nexus 1000v, which is at the end of its beta cycle, is the first announced switch in this category. If the Nexus 1000v fulfills its promise, it will usher a significant talent pool of Cisco-trained network engineers into the world of server virtualization. This would likely relieve system engineers who have been doing double duty with virtual machine and virtual network tasks, while adding some much-needed network architecture experts to the data center virtualization mix.
IT managers can access multiple vCenter Servers from the vSphere Client interface. During tests, this allowed me to see and manage virtual machines and network switches on all of my vCenter Servers installed in the lab. I also linked these vCenter Servers together, another new function, which enabled me to share administrative roles.
This is a good example of the management features included in vSphere 4 that should help preserve the cost-savings that have been realized from server consolidation projects.
Performance Monitors
Tests at eWEEK Labs showed that VMware has succeeded in bolstering physical and virtual machine performance monitors. Some of these changes are as simple as the addition of an "overview" button to the host performance tab that shows a variety of system measures—such as CPU, memory, disk and network utilization—in charts simultaneously.
It's now much easier to move among performance charts by clicking on thumbnails to get detailed information about components on individual data centers, clusters or hosts. And another nice touch is the addition of context-sensitive information that is a button-click away from each data chart.
It's good to see VMware exposing the performance data in this way. IT managers who have extensive nonvirtualized systems may want to look at third-party tools from companies such as BMC that integrate virtual and physical-only system management to get a complete picture of data center performance.
In my tests, I was able to spend only a few minutes with the vCenter Orchestrator, which is a workflow automation tool. As I build out the vSphere test infrastructure, I'll be reporting on how Orchestrator works in managing the deployment and configuration of systems in the VMware infrastructure.
VMware Virtual Desktops Find Homes in Hospitals
VMware sees the health care industry as a key growth area for its VMware View virtualized desktop environment. Several hospitals say they’ve been able to save money, reduce power consumption, and make it easier for doctors and nurses to move between workstations by using VMware View. The company also faces growing competition, with vendors like Microsoft and Citrix looking to grab larger shares of the desktop virtualization space.
VMware officials are touting the attention their desktop virtualization technology is getting from hospitals, which are using it to enable health care providers to more easily move from one patient to another and to streamline the tasks of upgrading applications and protecting data.
VMware June 9 highlighted the work administrators at Norton Healthcare, St. Vincent's Catholic Hospital and Riverside HealthCare are doing with VMware’s View, which enables users to run virtual desktops on central servers in the data center that can be accessed from any thick- or thin-client device.
The health care providers can see their own desktop environment, complete with all the clinical applications they need, and can quickly access medical and patient information.
"With VMware View, our physicians can go to a thin client, log in, access a patient list and then walk down the hall to another thin client, and their patient list would be right where they left it," Brian Cox, director of customer service for Norton Healthcare, said in a statement. "The staff recognized the benefit of that capability immediately." Resource Library:
VMware launched the latest version of its VDI (virtual desktop infrastructure) in December 2008, and included a number of new tools, including View Composer, which enables better management of storage resources in a VDI environment. Another new feature is Offline Desktop, which lets users work on their virtual desktops while offline and then synchronizes the new information when the user goes back online.
VMware is in an increasingly competitive space, with rivals like Citrix Systems and Microsoft looking for better traction in the virtualized desktop arena.
The health care field, with its need for doctor mobility and to protect sensitive patient data, is an area that VMware is targeting. The mobility aspect was a key issue for Riverside Medical Center, in suburban Chicago, according to Wayne Kelsheimer, corporate director of information services.
Kelsheimer said the facility initially looked at a virtualized desktop environment to avoid a hardware refresh and to make it easier to roll out applications. Making it easier for health care providers to move around has also been an advantage.
“Our nurses are able to go up to any workstation or mobile medical cart and get their same desktop on any device,” he said in a statement. “We were also able to repurpose some of our existing desktop devices into thin clients, leveraging the investments we had already made in this equipment."
St. Vincent's Catholic Medical Center in New York saw a way to save money and reduce power consumption by going to a virtualized environment.
"With VMware View, we are able to move to a 'zero footprint' device, reduce power consumption and provide our emergency staff department an always on and available desktop," Kane Edupuganti, director of IT operations and communications for the hospital, said in a statement. "We plan on continuing desktop virtualization across nearly 5,000 endpoints in order to maximize ROI in areas outside of IT."
HP Launches New, Smaller-Size ProLiant 'Scale-Out' Servers
The ProLiant SL6000 product line includes a smaller, physically lightweight, power-efficient modular systems architecture -- the first major rebuild of the ProLiant server since 2001. The servers can be deployed with up to 672 processor cores and 10 terabytes of storage capacity per standard 42U rack.
Hewlett-Packard on June 10 launched a new line of ProLiant servers, called the ProLiant SL Extreme Scale-Out portfolio, engineered specifically for the growing Web 2.0, financial services and high-performance computing markets.
"Scale-out" is a relatively recent data center industry buzzword referring to architectures for systems running thousands of servers that are required to scale nearly ad infinitum in order to comfortably handle a massive number of online users. Amazon, Facebook, eBay and Google are Web 2.0 companies specializing in both the deployment and the optimization of scale-out architecture.
The ProLiant SL6000 product line -- which HP is also calling ExSO -- includes a smaller, physically lightweight, power-draw-efficient modular systems architecture -- the first major rebuild of the ProLiant server since 2001, John Gromala, director of product marketing for HP's industry-standard server group, told eWEEK.
They are also powerful. These new servers can be deployed with up to 672 processor cores and 10 terabytes of storage capacity per standard 42U rack, HP said. Like all HP data center products, the ProLiant SLs are built on industry standards, so they are designed to work in a mix-and-match, storage-and-computing data center environment.
"This is a high-level launch, purpose-built for extreme-scale users with 1,000-plus [data center] nodes," Gromala said.
"We're talking about a cross-section of high-performance computing in Web 2.0 companies, scientific modeling, financials, and health care -- and at a second level, gaming. To a certain extent, this is almost like comparing a restaurant versus your home kitchen. What occurs in those two places is very different; your home is like a small business, the restaurant is an enterprise."
The ProLiant SLs use a new, smaller-size architecture that replaces the traditional chassis- and rack-form factors with a lightweight rail-and-tray design. They utilize new, cooler-running Intel quad-core processors.
The servers, which slide into place on a regular-width (19-inch) rack, are physically smaller and lighter -- about two-thirds the weight of a regular ProLiant server -- and take up less room and are cooler-running than previous models.
"The HP ProLiant SL offers pioneering customers the most significant design innovation since the blade form factor," said Christine Reischl, HP's senior vice president and general manager, Industry Standard Servers.
The ProLiant SL servers use less power from the wall due to a consolidated power/cooling infrastructure and a unique airflow design; the savings is estimated at about 28 percent less power per server than standard rack-based servers, Gromala said.
The new servers are designed to work in modular configurations to enable fast installation and deployment through hot-swappable "compute trays."
"[Extreme-scale] customers have very distinct and unique data center requirements, specifically around energy efficiency, cost and time to market," Michelle Bailey, research vice president at IDC, said.
"The introduction of the ExSO portfolio specifically addresses customer requirements for optimizing capitol expenditures while lowering ongoing operating costs. As a result, these solutions are helping to redefine data center economics."
HP also announced new data center control software, called Datacenter Environmental Edge, that provides visual mapping of environmental variables, so administrators can identify and take action on data center issues.
Environmental Edge uses a system of wireless sensors placed throughout a data center to monitor a variety of variables, including temperature, humidity, air pressure and power utilization. The system provides real-time visualization of environmental variables so administrators can perform root cause analysis.
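HP has not published the internals of Environmental Edge, but the underlying pattern is straightforward; the hypothetical sketch below shows the general idea of rolling up sensor readings and flagging a zone for investigation, using made-up data and an assumed threshold rather than anything from HP.

from statistics import mean

# Hypothetical readings: zone name -> recent temperature samples in degrees C.
readings = {
    "cold-aisle-1": [21.8, 22.0, 22.3],
    "hot-aisle-3": [29.4, 30.1, 31.0],
}

ALERT_THRESHOLD_C = 28.0  # assumed alert threshold, not an HP figure

for zone, temps in readings.items():
    average = mean(temps)
    status = "investigate" if average > ALERT_THRESHOLD_C else "normal"
    print(f"{zone}: {average:.1f} C ({status})")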
HP's existing scale-out computing portfolio includes the ProLiant DL1000 Multi Node servers, introduced on June 2; the HP POD (Performance Optimized Datacenter), HP StorageWorks 9100 Extreme Data Storage System and the HP ProLiant 2x220c double-density blade server introduced last year.
VMware Marketplace Is Important Piece of Virtualization Puzzle
VMware has significantly improved its VMware Marketplace, but there are many improvements that could be made to ease/drive virtual appliance acquisition, implementation and support.
In the first of a series of reviews I'm writing about VMware vSphere 4, I focused on important new features such as the vNetwork Distributed Switch and improved management tools.
As a product reviewer, that's my job—to focus on the product. But as an industry analyst, one of the big changes at VMware that caught my eye was the drastic improvements made to the VMware Marketplace for virtual appliances.
One of the beautiful things about virtualization is the ability to create virtual appliances that wrap the operating system, application, disk and other configuration choices into a neat, isolated bundle.
I like virtual appliances because they significantly reduce application installation and distribution costs: Because the virtual appliance is already installed when it gets to you, you don't have to go through the expensive, one-time setup process. And thanks to some standardization work that I'll touch on in a moment, virtual appliances are relatively cheap to move around in a virtual data center, even among VMware, Hyper-V and Xen-based virtualization environments.
The change in the VMware virtual appliance marketplace is twofold.
First, it has been integrated into the vCenter Server interface, which makes it simple to access the virtual appliances. Second, the marketplace has been almost completely revamped. Links to products actually lead to a virtual appliance that can be downloaded within the VMware domain. In the previous version of the marketplace, I was more often than not taken to dead ends on the appliance makers' Websites. For those of you who were soured on the idea of exploring virtual appliances, this new marketplace should change your attitude.
How about implementation? Virtual appliances are like starter kits; they are really meant to be used as a low-cost (to you, the IT manager) way to quickly get a taste for what this or that virtual appliance can do in your environment. To actually deploy production-level versions of these products—which range from firewalls and intrusion detection systems to capacity planning and VM performance management tools—you'll be spending some time creating your own virtual appliances that are tweaked to work perfectly in your data center.
Installing your new virtual appliance will, in most cases, be much easier because of standards work by the DMTF (Distributed Management Task Force).
All of the virtual appliances I looked at in the VMware marketplace are provided in an OVF (Open Virtualization Format) package. In this method, all aspects of the virtual machine, or multiple virtual machines running together, are described. This means that the CPU, memory and disk, along with all other virtual hardware requirements, are provided with the virtual appliance. With a little practice, most IT managers will be able to deploy virtual appliances in OVF packages with little or no manual intervention.
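To give a sense of what an OVF descriptor carries, the short sketch below reads the declared hardware items out of a descriptor using Python's standard XML library. The file name is a placeholder, and the element names (VirtualHardwareSection, Item, ElementName, VirtualQuantity) come from the DMTF OVF schema, not from any VMware tool.

import xml.etree.ElementTree as ET

def local_name(tag):
    # Strip the XML namespace so OVF elements can be matched by local name.
    return tag.rsplit('}', 1)[-1]

tree = ET.parse("appliance.ovf")  # placeholder descriptor file name
for section in tree.iter():
    if local_name(section.tag) != "VirtualHardwareSection":
        continue
    for item in section:
        if local_name(item.tag) != "Item":
            continue
        fields = {local_name(child.tag): (child.text or "").strip() for child in item}
        # ElementName describes the resource; VirtualQuantity carries counts such as vCPUs or MB of memory.
        print(fields.get("ElementName", "unnamed"), fields.get("VirtualQuantity", ""))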
To keep the marketplace interesting for IT managers, VMware needs to make sure that product offerings are kept up-to-date. Further, it would be nice to see support, maintenance, advice and user group links added to each of the products. Even if these links lead off to vendor or community-supported sites, it would be convenient for potential customers to see these service links right next to the offered product.
And it's not too soon for VMware to add product lifecycle management to the marketplace. A "new products" highlight area for recently added virtual appliances could be joined by a "staying power" category that features tried-and-true performers. This kind of product differentiation could be provided by a third party. But for premium data center products, I'd rather get this kind of information from the company that is demonstrating the operational ability to make it happen, and that's VMware.
AMD Introduces Phenom II, Athlon II Dual-Core Processors
To its Athlon and Phenom processor lines AMD has added the Athlon II X2, its fastest Athlon processor, and the Phenom II X2 550 Black Edition, its fastest dual-core chip yet. Both focus on energy efficiency and pricing and offer benefits with Microsoft Windows 7, which AMD suggests puts it a step ahead of competitor Intel.
As the Computex 2009 show kicked off in Taipei, Advanced Micro Devices announced it is expanding its Athlon and Phenom processor lines with the introduction of the Athlon II X2 250 and Phenom II X2 550 Black Edition dual-core processors.
Built on 45-nanometer technology, Athlon II X2 is AMD’s fastest Athlon processor yet, with a core speed of 3.0GHz and a 65-watt thermal envelope. An AM3 package enables it to support DDR2 (double data rate 2) as well as DDR3 memory.
Recommended pricing for the Athlon II X2 will be $87, and AMD expects this to be “the bulk of its infantry,” very much appealing to the mainstream.
“Consumers are really trying to get the best deals possible right now, make smarter decisions, and one of the things on the top of their minds right now is value,” Brent Barry, an AMD brand manager, told eWEEK.
“Not everyone has their use case suited for triple- or quad-core processors. For a lot of people, dual-core can get it done, and especially with the Athlon II we’re introducing, it’s going to give an incredible boost of performance and efficiency to that price point with dual-core products.”
The 3.1GHz Phenom II X2 550 Black Edition, by contrast, is aimed at enthusiasts who want high-end performance but still expect value. It has a 2.0GHz HyperTransport link, a 7MB cache and an AM3 package that is also compatible with both DDR2 and DDR3 memory.
The Phenom II X2 is priced at $103. “It’s our fastest ever dual-core processor,” said Barry. “If you look at how we position our X4, we say, ‘It’s the power to do it all.’ This really is making that power more affordable.”
Both processors are Energy Star-compliant with PowerNow 3.0, which includes Cool’n’Quiet 3.0 technology. With PowerNow, the system adjusts its energy use to the tasks at hand, enabling the Athlon II, for example, to run at 65 watts during a demanding application and drop to 3.5 watts at its lowest idle state.
Barry said AMD is particularly excited about the upcoming release of Microsoft Windows 7, explaining that all the Athlon and Phenom processors have a virtualization technology that Windows 7 will highlight.
“Windows 7 has something called Windows XP Mode, and basically this virtualization technology enables you to create a virtualized PC inside of another PC. So in order to maintain compatibility with really old hardware or old applications, or things that wouldn’t run on Windows Vista or 7 … this gives you the ability [to run Windows 7, virtualized].”
This capability, along with the AMD processors’ backward compatibility, enables AMD to offer businesses and consumers the flexibility, said Barry, to upgrade only when the time is right for them. He also offered it as an example of how AMD differentiates itself from rival Intel.
“It’s hard to know if your Intel CPU is going to be compatible with virtualization, whereas you can buy with confidence, knowing an AMD processor is going to provide that backward compatibility,” he explained.
Intel has said it offers XP Mode in products targeted toward business customers, and that it works closely with Microsoft to ensure compatibility. For example, Intel says it has already shared information with Microsoft about “Sandy Bridge,” the architecture that will replace Nehalem in 2010.
CA Acquires Cassatt Assets, Bulks Up Cloud Capabilities
CA is expanding its cloud computing capabilities by acquiring assets from struggling Cassatt, a six-year-old company started by former BEA CEO Bill Coleman that was an early pioneer in what would become cloud computing, but which had fallen on hard times in recent years. The Cassatt deal will add to CA’s Lean IT initiative and give it more expertise in data center infrastructure management and automation.
CA is expanding its cloud computing capabilities by buying assets from troubled Cassatt, a company started by high-profile Silicon Valley executive Bill Coleman about six years ago to build software to help enterprises manage distributed computing environments.
Terms of the deal, announced June 2, were not disclosed.
Ajei Gopal, executive vice president of CA’s Products and Technology Group, said the addition of Cassatt’s technologies—as well as several executives, engineers, developers and patents—will add to the company’s portfolio of data center management software and its Lean IT initiative. Lean IT is designed to help businesses lower their IT costs and improve efficiencies by increasing data center automation and optimization capabilities.
“With the addition of Cassatt’s engineering team and advanced data center automation assets, CA will accelerate its development of software that helps customers make more intelligent, business policy-based decisions,” Gopal said in a statement.
Coleman’s vision was a precursor of the current trends in cloud computing and data center convergence, which are becoming key areas of competition for such top-tier companies as IBM, Dell, Hewlett-Packard, Cisco Systems, Sun Microsystems, Novell and VMware.
Oracle also could become a major player in this area if its proposed $7.4 billion acquisition of Sun goes through. That deal is expected to close this summer.
Meanwhile, many of those players, as well as companies such as Amazon.com and Google, are pushing cloud computing—both public and internal environments—as a way of helping businesses reduce data center capital and operating costs while increasing flexibility and agility.
A key part of these trends is management software initiatives from various vendors—including CA and Cassatt—to handle the rising complexity created by such computing environments.
Coleman, a former Sun executive and founder of BEA Systems, was able to attract some top-line talent to Cassatt, including Richard Green, another longtime Sun executive who had risen to vice president of Sun developer platforms and Java software before joining Cassatt in 2004.
However, Cassatt apparently ran through more than $100 million over those six years, and Coleman, Cassatt’s CEO, said in an interview with Forbes.com in April that while some companies showed interest in the company’s Cassatt Active Response software, few had actually followed through on buying it.
In that interview, Coleman said Cassatt had reached a point where it had to be sold or would go into bankruptcy. He said he had been looking for a buyer for several months and that there had been interest, though he declined to say from which companies. Reports had mentioned Google and Amazon.com as having early interest, though that interest eventually waned.
Now Cassatt is part of CA. Rob Gingell, Cassatt’s executive vice president of product development and CTO, and Steve Oberlin, a co-founder of Cassatt and chief scientist, both will join CA.
In a statement, Coleman applauded the deal with CA.
“Cassatt has long been a champion for using a cloud-style architecture to manage data centers like a ‘compute utility,’” Coleman said. “This is a great move for both organizations because of the vision we share—delivering a new, dramatically more efficient way to run data centers.”
CA Chief Architect Donald Ferguson said the combination of Cassatt’s analysis and optimization technologies with CA’s automation capabilities will give CA a more comprehensive infrastructure management offering.
Sun Launches Cloud Services Portfolio
Sun Microsystems, which is in the process of creating its own public cloud offering, is rolling out a host of services designed to help businesses evaluate their readiness for cloud computing and then to help them build a road map to reach that goal. The cloud computing services are part of Sun’s $1 billion professional services business. The moves come as Oracle prepares to buy Sun for $7.4 billion in a deal expected to close this summer.
As Sun Microsystems continues to work on its upcoming public cloud computing platform, the company is beginning to roll out services around the burgeoning technology trend.
At the 2009 CommunityWest conference June 1, Sun is unveiling Sun Cloud Strategic Planning Service, a set of service offerings designed to help businesses make the move into cloud computing.
While the new services are complementary to the company’s planned Sun Cloud offering—which is scheduled for launch later this year—Sun will work with whatever technology is best for its customers, Amy O’Connor, vice president of services marketing for Sun, said in an interview.
“Sun Cloud is one option in a number of options customers will face,” O’Connor said.
The offerings are designed to help businesses evaluate their readiness for cloud computing, both public and private, and to create a road map for making that happen.
Businesses seem to understand what cloud computing can do for them, but need help in figuring out how to get there and what is involved, O’Connor said. That is where Sun’s new services offerings come in, she said.
“You want to … jump on the [cloud computing] bandwagon and hope it takes you with it, but there’s a lot of hard work that goes on underneath,” she said.
Everything from virtualization to applications to hardware must be evaluated before a business can start laying out its plans for cloud computing, O’Connor said.
The new cloud services are part of Sun’s $1 billion professional services offerings.
Sun is making an aggressive push into cloud computing with its upcoming Sun Cloud, which officials said will be based on open-source technology, including MySQL, Glassfish and the ZFS file system. It also will be built atop technology Sun acquired in January when it bought Q-layer, an infrastructure management company that has technology that automates the deployment and management of public and private cloud environments.
In a blog post in March, just as Sun announced the Sun Cloud idea, President and CEO Jonathan Schwartz said that the APIs and file formats also will be open, and that Sun’s offering will not only operate as a public cloud but also can be used by enterprises as an internal cloud behind their firewalls.
“We recognize that workloads subject to fiduciary duty or regulatory scrutiny won't move to public clouds,” Schwartz wrote. “If you can't move to the cloud, we'll move the cloud to you.”
However, that was before Oracle announced in April its intention to buy Sun for $7.4 billion, and it’s unclear how the acquisition will impact Sun’s cloud plans.
Industry observers are expecting big things from cloud computing. Gartner analysts in March said global cloud services revenue could move beyond $56.3 billion this year—from $46.4 billion in 2008—and grow to $150.1 billion in 2013. IDC was more tempered in its projections, calling for worldwide spending on cloud services to reach $42 billion by 2012.
HP Rolls Out a Bevy of New Storage Systems for SMBs
UPDATED: The latest rollout includes the new HP StorageWorks X1000 and X3000 Network Storage Systems, which combine file and application storage. The other new storage systems in the launch are the StorageWorks 2000i and 2000sa G2 Modular Smart Arrays, which feature 2.5-inch, small form-factor drives that increase storage capacity and reduce power draw, thanks to cooler-running, multicore processors.
Hewlett-Packard, continuing to brand itself as the go-to supplier of data storage systems for small and midrange businesses, on May 28 launched a bevy of new unified computing and storage-related products aimed at those growing markets.
Unified storage systems combine both storage and application deployment through a single set of controls. Cisco Systems made news back on March 16 by announcing its own unified computing initiative, which is scheduled for product release later this year.
In April 2007, when HP realized it was losing market share to IBM, EMC and other storage makers in the high-end enterprise ECB (external controller-based) disk storage market, it made the decision to focus more sharply on the SMB and midrange markets. Its response at that time was the All-in-One storage system for SMBs, which did well to boost HP's overall storage reputation.
HP's newest strategy, called Total Care for SMBs, includes not only storage but infrastructure offerings for virtualization, remote access and consolidation system packages.
The latest rollout includes the new HP StorageWorks X1000 and X3000 Network Storage Systems, which combine file and application storage.
"With a single, unified system, SMBs do not need to invest in siloed storage systems for file and database data," Lee Johns, HP director of marketing for unified computing, told eWEEK. "The X1000 and X3000 are self-contained systems; they have iSCSI and file serving with an automated storage manager, which makes it very easy for an SMB to configure.
"A user can basically just request extra capacity—for example, it can add an extra 100MB for an application like [Microsoft] Exchange. The system will just go off and configure what's needed, carve up the LUNs automatically, and get it deployed."
The network-attached X1000 and X3000 systems also come with HP's file deduplication, which is provided by its OEM partner, Sepaton. This produces up to 35 percent more usable capacity, Johns said.
That's a conservative estimate. Data deduplication, when used correctly, has been known to provide anywhere from 30 percent to 80 percent more usable capacity, depending on the type of data stored.
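The spread in those savings comes down to how much of the data repeats. Here is a minimal sketch of the underlying idea, assuming fixed-size chunking and SHA-256 fingerprints; shipping products use far more sophisticated variable-length chunking and metadata handling.

    import hashlib

    def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
        """Fraction of capacity saved by storing each unique chunk only once."""
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        unique = {hashlib.sha256(c).hexdigest() for c in chunks}
        return 1 - len(unique) / len(chunks)

    # Synthetic example: a 4KB block repeated 10 times dedupes to one stored copy.
    sample = b"A" * 4096 * 10
    print(f"capacity saved: {dedup_ratio(sample):.0%}")   # 90% on this contrived input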
The other new storage systems in the launch are the StorageWorks 2000i and 2000sa G2 Modular Smart Arrays, which feature 2.5-inch, small form-factor drives that increase storage capacity and reduce power draw, thanks to cooler-running, multicore processors, Johns said.
Pricing has been lowered, also. HP is offering a starter system consisting of a ProLiant server, 1.2TB of StorageWorks storage, VMware ESX hypervisor and all the other software licenses for about $6,000. A year ago, the cost for that system would have been 50 to 60 percent higher.
HP also launched a new SMB package called Virtualization Bundle, which recasts existing data center server disk drives into more efficient shared virtualized storage. The package includes an HP ProLiant server, storage and networking software, the VMware ESX hypervisor, and a simplified interface that doesn't require a highly trained technical staff member to operate.
HP announced a new service for SMBs and remote corporate offices called Secure Remote Access, which uses Citrix XenApp Fundamentals to enable mobile employees to stay connected with necessary business applications, regardless of their location or connectivity device.
As part of this new initiative, HP is offering SMB channel partners discounted servers and storage through its SmartBuy program. The company also is offering customers and partners a free trial of Citrix XenApp Fundamentals.
AMD, Intel Eye High End with New Server Chips
With AMD almost ready to launch the six-core Istanbul chip and Intel talking about its upcoming eight-core Nehalem EX processor, the two rivals are on a collision course as they compete for the high end of the x86 server market. AMD officials say Istanbul will touch on the two-, four- and eight-socket spaces, and is coming out six months ahead of schedule. Nehalem EX will start appearing in systems in early 2010. But the high end could prove to be fertile ground for Intel and AMD, as many enterprises are looking to move away from the RISC/Itanium/mainframe space, according to one analyst.
Intel and Advanced Micro Devices are both aiming at the high end of the server space with their upcoming processors.
Intel officials on May 26 outlined details of the company’s eight-core “Nehalem EX” Xeon MP processor aimed at servers with four or more sockets. Boyd Davis, general manager of Intel’s server platforms marketing group, said during a press conference that the chip—which will start shipping to OEMs later this year and appear in systems in early 2010—will give enterprises an alternative to RISC-based environments.
Now comes AMD with the launch of its six-core “Istanbul” Opteron chip, which officials say will compete not only with Intel’s Xeon 5500 Series “Nehalem EP” in the two-socket space, but also with Nehalem EX in the four- and eight-socket arena.
And, they said, it is about ready to go now, a good half-year before Nehalem EX and months before it was initially scheduled to ship. The chip is expected to launch the week of June 1, and most top-tier OEMs are expected to roll out Istanbul-powered systems.
That’s a big deal not only to OEMs and end users, but to AMD itself, Pat Patla, vice president and general manager of AMD’s server and workstation division, said in an interview.
“Yes, AMD did not execute, and we had some issues bringing it to market,” Patla said.
“It” was “Barcelona,” the company’s first quad-core Opteron, which was hampered by technical problems and delays. However, AMD changed the processes it used to develop chips—for example, putting one engineer in charge of the entire process and creating Centers of Excellence focused on particular areas of engineering expertise—and the result was that the next Opteron chip, “Shanghai,” arrived months ahead of schedule.
Raghuram Tupuri was the lead architect for Shanghai, and Steven Hesley was the lead for Istanbul.
AMD officials first decided in March 2008 to put Istanbul on the product road map to meet demand from OEMs and end users, and within 15 months the chip was ready to ship, Patla said.
EMC Adds Server Management Controls with Configuresoft Acquisition
Storage and data protection infrastructure giant EMC completes a trifecta of data center management with the acquisition of OEM partner Configuresoft, a provider of server configuration, change and compliance management software.
EMC on May 27 demonstrated once again that automation is the definitive trend in data centers in 2009.
The storage and data protection infrastructure giant added a key component to its data center control software catalog when it announced the acquisition of OEM partner Configuresoft, a provider of server configuration, change and compliance management software.
Financial details of the transaction were not made public. EMC said the deal for the privately held company is expected to close in June and will not materially affect its multibillion-dollar balance sheet.
Configuresoft, based in Colorado Springs, Colo., claims to have about 400 customers worldwide, including 13 of the world's 25 largest companies. The company was founded in 1999 by E. Alexander Goldstein, Dennis Moreau, Louis Woodhill and Alan Sage and named Fundamental Software. It was renamed in 2001.
Configuresoft provides automated and optimized server management software that speeds up the adoption of virtualization, monitors policy and security compliance, and aids GRC (governance, risk management and compliance) across IT system infrastructures.
EMC has contracted with Configuresoft since mid-2008 to provide most of these features under the labels of EMC Server Configuration Manager and EMC Configuration Analytics Manager.
EMC has now reached its goal of providing a complete data center automation package, Bob Quillin, EMC's senior director of product development, told eWEEK.
"We have been building out a pretty formidable array of data center automation tools," Quillin said. "We've had excellent automated storage management products for a long time, and we just announced Storage Configuration Advisor as a new product. Two years ago we acquired Voyence, which provides automated network configuration. Configuresoft focuses on the server, which was the one big puzzle piece that was missing.
"We now have completed that trifecta of storage, network and server for a whole data center automation package."
Configuresoft's own Enterprise Configuration Manager and Configuration Intelligence Analytics will continue to be known as EMC Server Configuration Manager and Configuration Analytics Manager, based upon the OEM agreement, Quillin said.
These tools help IT administrators detect, prioritize and correct configuration compliance issues, Quillin said, in keeping with such mandates as the Sarbanes-Oxley Act, HIPAA (Health Insurance Portability and Accountability Act) and PCI (payment card industry) regulations. Rich analytics are provided in one dashboard for viewing KPIs (Key Performance Indicators).
Cisco Beta Tests Its UCS Box
Cisco Systems has been running a Unified Computing System machine in its own data center since before the March announcement of the heavily touted converged data center product. The system runs several applications, including the Website for CEO John Chambers. The UCS boxes also are being planned as the backbone for not only Cisco’s current data centers, but also a new 10,000-square-foot facility being planned, as well as Cisco’s CITIES internal cloud computing initiative.
Cisco Systems has been one of the best beta customers for its own widely hyped Unified Computing System data center technology, deploying the technology in one of its own data centers to run several applications, according to company officials.
Cisco also plans to replace the bulk of the x86 servers currently in its 52 data centers worldwide with the UCS technology—codenamed “California”—over the next two years, according to the officials.
The UCS, announced by Cisco in March, is an all-in-one offering that brings together computing, networking, storage and software management in a single package. It includes not only Cisco server and networking products, but also offerings from partners such as VMware and its vSphere platform for virtualization, EMC for storage and BMC Software for management software.
The product, which is not yet shipping, includes Cisco blade servers powered by Intel’s quad-core Xeon 5500 Series “Nehalem EP” processors and networking technologies such as FCoE (Fibre Channel over Ethernet) and 10 Gigabit Ethernet. Cisco offers the FCoE fabric in its Nexus 7000 switches.
Cisco’s strategy is part of an overall trend in the data center—fueled by such technologies as virtualization, cloud computing and Web 2.0 applications—to converge resources. For example, Hewlett-Packard in April rolled out such an offering with its BladeSystem Matrix that integrates server, storage, networking and software into a single product.
Speaking on a Webcast May 27, two Cisco data center officials said their goal is not only to incorporate the UCS into their own data centers, but—like any beta tester—to use their experiences with the technology to give informed feedback to Cisco’s engineers.
The UCS will be a key building block for their own data centers going forward, enabling them to scale their infrastructures while driving down power and operating costs, they said.
“We’ve very much integrated it into our current models,” said Chris Hynes, director of IS for Cisco’s Network and Data Center Services group.
Cisco has been running the UCS product in its Mountain View, Calif., data center since mid-March, before the company announced the offering. The systems currently are running the site for Chairman and CEO John Chambers, Cisco’s public relations site and a host of legal and financial applications, according to John Manville, vice president of IT at Cisco.
The UCS product is running a host of virtual machines as well as an Oracle database that has not been virtualized.
Cisco also is planning a new 10,000-square-foot data center as a platform for UCS, Hynes said. The 1-megawatt facility will run UCS boxes, saving the company money in a number of areas. A traditional data center of that size would need 135 server racks, with 4,320 Ethernet cables and 2,160 copper cables, he said. With the UCS, all that will be reduced to 72 racks, 1,008 Ethernet links and 300 copper links.
Hynes said the reduction in cabling is important because it frees up space and resources to add more computing power to the data center. It also saves money, he said. Cabling in the traditional data center would cost $2.7 million; with the UCS, that drops to $1.6 million, he said.
The number of physical servers that can fit into the facility grows from 720, hosting up to 7,500 virtual machines, in the traditional data center to as many as 1,400 servers hosting 12,000 to 14,000 VMs with the UCS technology.
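To put those reductions in perspective, the few lines below simply redo the arithmetic on the figures Hynes cited; the only assumption is using the top of the quoted ranges (7,500 and 14,000 VMs) for the comparison.

    traditional = {"racks": 135, "ethernet": 4320, "copper": 2160,
                   "cabling_cost": 2_700_000, "servers": 720, "vms": 7500}
    ucs         = {"racks": 72,  "ethernet": 1008, "copper": 300,
                   "cabling_cost": 1_600_000, "servers": 1400, "vms": 14000}

    for key in traditional:
        before, after = traditional[key], ucs[key]
        change = (after - before) / before * 100
        print(f"{key:12s}: {before:>9,} -> {after:>9,}  ({change:+.0f}%)")
    # Racks, cabling and cabling cost all drop sharply; server and VM counts nearly double.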
Virtualization technology will continue to be a key part of what Cisco has planned for all of its data centers, which cover about 215,000 square feet throughout the 52 facilities, Manville said. Currently, about 30 percent of all the servers are virtualized, he said. Over the next two years, the goal is to increase that to 70 to 80 percent.
With all these capabilities coming into the data centers, Cisco officials also are in the process of creating an internal cloud computing environment that will be based in large part on UCS. Dubbed CITIES—for Cisco IT Elastic Infrastructure Services—the project aims to bring as many applications as possible onto the internal cloud, Manville said.
Eventually Cisco will move to a hybrid model, moving applications and workloads between the internal cloud and public clouds depending on need, he said. The first phase—called “Mist”—will launch in August or September, he said.
HP, IBM Atop Declining Server Market: IDC
The global server market saw revenue and shipment declines across all major segments during the first quarter as enterprises delayed system purchases in light of the crushing worldwide recession, according to IDC. The x86 server space was particularly hard hit, though analysts don’t expect that to continue much longer. HP and IBM tied for the lead in server revenue market share, though all the top vendors saw double-digit revenue declines.
The worldwide server market continued to get hammered in the first quarter of 2009, with the volume x86 space bearing the brunt of the global recession.
According to numbers released by research firm IDC May 28, overall server revenue in the quarter fell 24.5 percent compared with the same period in 2008, while shipments declined 26.5 percent.
All of the top server vendors also saw double-digit server revenue drops, and each segment within the market also declined. IDC analysts attributed the declines to businesses pulling away from normal server refresh cycles and new IT projects, opting instead to hang on to the systems they already have.
Hewlett-Packard and IBM shared the lead, each with 29.3 percent of market share. Dell was third with 11 percent share, followed by Sun Microsystems and Fujitsu, at 10.3 percent and 6.7 percent, respectively. All experienced revenue declines of between 18.8 percent and 31.2 percent.
“Market conditions worsened in all geographic regions during the first quarter as customers of all types pulled back on both new strategic IT projects and ongoing infrastructure refresh initiatives,” IDC analyst Matt Eastwood said in a statement. “Most enterprise organizations are deferring new IT procurements and instead focusing on extending server lifecycles and improving existing asset utilization.”
Eastwood said such a strategy was smart in the short term, but predicted that server demand would pick up in the second half of 2009, with businesses buying systems in anticipation of the expected recovery beginning in 2010.
The quarter was particularly hard on the x86 server market, where revenues declined 28.8 percent, to $5.1 billion—the lowest since the third quarter of 2003—and shipments dropped 26.3 percent, to 1.4 million servers. Analyst Dan Harrington said it was easier for businesses to delay purchases of the x86 volume server than of RISC- or CISC-based systems—which tend to run more mission-critical workloads—but he didn’t expect the trend to last.
“IDC expects x86 systems to rebound faster than the overall market in the coming quarters,” Harrington said in a statement.
Similarly, blade server revenue and shipments also declined, by 14.4 percent and 18.1 percent, respectively, though the segment grew its share of overall server revenue as businesses continued to look for technologies that help reduce costs and increase efficiencies.
While non-x86 segments also experienced revenue declines, their shares of the overall server market rose. For example, while Unix server revenue declined 17.5 percent, the segment's $3.3 billion accounted for 33.1 percent of overall server revenue, compared with 30.2 percent in the same quarter last year.
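Those percentages hang together. A quick back-of-the-envelope check, using only the figures IDC cited here, recovers an overall market of roughly $10 billion for the quarter and a year-over-year decline close to the 24.5 percent reported above.

    unix_now = 3.3e9                        # Unix server revenue this quarter
    overall_now = unix_now / 0.331          # Unix was 33.1% of the total -> about $10.0B overall
    unix_last = unix_now / (1 - 0.175)      # back out the 17.5% Unix decline
    overall_last = unix_last / 0.302        # Unix was 30.2% of last year's total
    decline = 1 - overall_now / overall_last
    print(f"implied overall market: ${overall_now/1e9:.1f}B, down {decline:.1%} year over year")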
Analyst Jean Bozman said the delaying of server purchases hurt the Unix server space as well, but the increase in market share was due in part to the presence of midrange and high-end Unix servers, which tend to have higher average sales prices than their x86 counterparts.
IBM’s mainframe business also got a boost. According to IDC’s numbers, IBM’s System z servers running the z/OS operating system outperformed the overall market for the fifth consecutive quarter. Revenues declined 18.9 percent, but accounted for 9 percent of all server revenue, the largest percentage for the System z in five years.
IBM and other vendors—such as CA, BMC Software and Unisys—have been working to modernize the mainframe platforms by making them easier to deploy and manage, and enabling them to handle workloads such as Linux and Java applications.
Dell, Microsoft Integrate Infrastructure Management Products
Dell and Microsoft are integrating their infrastructure management suites as a way of giving Dell customers easier management and deployments capabilities for both physical and virtualized environments. In addition, the integrations of Dell’s OpenManage offerings and Microsoft’s System Center suite will help enterprises and SMBs control power consumption and cooling costs in their data centers, according to Dell officials.
Dell is integrating its OpenManage systems management offerings with Microsoft’s System Center suite in a move designed to help enterprises and SMBs more easily handle their infrastructures.
The integrated offerings, announced May 27, touch on everything from client systems to servers to virtualization capabilities, and give users—particularly small and midsize businesses—management capabilities they have been pushing for, according to Dell officials.
Enrico Bracalente, senior strategist for product marketing at Dell, said the company’s SMB customers—those with up to 500 client systems and 20 servers—are looking for simplicity.
“They want something that’s easy to use, they want something that’s easy to install, and they want something that’s easy to manage,” Bracalente said in an interview. “They want to get up and running easily and without any complications.”
They also want a single console through which they can monitor, deploy and update their infrastructure resources, he said.
To that end, Dell is offering midsized enterprises Microsoft’s System Center Essentials management suite—which includes Microsoft System Center Essentials 2007 and Microsoft System Center Virtual Machine Manager 2008—as a packaged solution with Dell’s PRO-Pack (Performance and Resource Optimization Pack) and its own Management Packs that will make it easier for businesses to centrally manage both physical and virtualized environments.
Dell also is offering consulting services around Microsoft’s Hyper-V virtualization technology. The company’s Hyper-V Technology Introduction and FastTrack Design services with Advanced Management options are aimed at helping businesses more quickly deploy and configure Hyper-V infrastructures. In addition, Dell and Microsoft offer reference configurations to help businesses speed up the implementation of Windows Server 2008, Hyper-V and System Center offerings.
Dell’s Infrastructure Consulting team helps with the design and implementation of the full Microsoft infrastructure.
In addition, the integration of Dell Management Packs with Microsoft’s System Center Operations Manager 2007 and System Center Essentials 2007 enables users to monitor and manage Dell products—from servers and client systems to storage and printers—in multivendor hardware and software environments.
Dell’s Server and Business Client Hardware Update Catalog tools also integrate with Microsoft’s management software—including System Center Configuration Manager 2007, Essentials 2007 and Windows Server Update Service—to make sure that drivers, BIOS and firmware are automatically updated.
The two vendors also are addressing power and cooling costs by giving users the ability to monitor and control the energy efficiency of hardware and software, and Dell’s Server Deployment Pack is integrated with Microsoft’s System Center Configuration Manager to automate the configuration and deployment of Dell’s PowerEdge servers and blade systems.
“We wanted to make this a one-stop shop” for customers, Bracalente said.
Cisco's Nexus 1000v Virtual Switch Is Poised to Push Virtualization Further, Faster
Virtualization in the enterprise is about to open up, and it's not because of VMware's new vSphere, Microsoft's Hyper-V or Cisco's Unified Computing System. The tipping point will come with the release of Cisco's 1000v virtual switch, which will open up virtualization to companies' networking groups, lowering barriers and opening new possibilities.
While I accept that x86-based server virtualization is a growing fact of life in the data center, it wasn't until I took a troubleshooting class at Interop Las Vegas in May that I fully understood why server virtualization is about to go further, faster.
The trigger isn’t virtualization giant VMware's recent release of vSphere 4, although this major platform release is fundamental to further virtualization adoption. The trigger isn’t the recognition of the improvements that Microsoft's Hyper-V and the upcoming release of Windows Server 2008 R2 will bring.
No, server virtualization is poised to go further and faster because of something Cisco is about to do—but it has almost nothing to do with that company's release of its Unified Computing System.
Cisco is wrapping up the beta tests of its Nexus 1000v virtual switch. With the release of VMware's vSphere 4, third-party switches including the Nexus 1000v can be incorporated into the virtualized data center infrastructure. The significance of this news is hard to overstate.
Until now, switching in VMware virtualized environments has been handled by the same people who were creating the virtual machines: the systems group. The network group was often left out of the equation of creating new systems for a number of reasons, not least of which is that there was little or no physical switching work required to bring a new virtual system online. This has meant that a fair number of systems people have been getting a crash course in switching and networking.
As long as the virtualization project was limited to test and development, this wasn't such a big deal. However, the presenters at this tuning and tweaking workshop at Interop quoted analyst figures that said virtualization has penetrated about 10 percent to 15 percent of the data center. This was borne out in an informal audience poll at the session.
With the advent of the Cisco Nexus 1000v switch, which is a fully operational switch realized entirely in software, network staffers who may have raised concerns about, or put up implementation barriers to, further server virtualization projects will be able to use the familiar Cisco command line, management tools and scripts to help push those projects forward.
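In practice, that means network staff can treat virtual access ports the way they treat physical ones, with the same kind of scripting they already use. The sketch below generates an NX-OS-style port-profile snippet of the sort the 1000v works with; the profile name, VLAN number and exact command set are illustrative assumptions rather than anything taken from Cisco documentation.

    def port_profile(name: str, vlan: int) -> str:
        """Render an illustrative NX-OS-style port profile for a group of VM NICs."""
        lines = [
            f"port-profile type vethernet {name}",
            "  vmware port-group",                 # expose the profile to vCenter as a port group
            "  switchport mode access",
            f"  switchport access vlan {vlan}",
            "  no shutdown",
            "  state enabled",
        ]
        return "\n".join(lines)

    print(port_profile("WebServers", 100))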
By reducing the friction between the system and network groups--both of which have highly specialized, differentiated and essential skills--VMware has set the stage for a wave of data center virtualization.
I believe that other network switch makers are preparing software-only versions of their wares, but none to my knowledge has been announced. And even Cisco's switch is not commercially available yet. However, making room for best-of-breed, third-party components is a step in the right direction.
For one thing, using Cisco networking infrastructure means that the trained work force ready to tune and tweak the virtual infrastructure just got a lot bigger. Networking staff with architecting and operational experience--even in the purely physical world--will be tremendously useful in creating workable virtualized data centers. And this additional expertise couldn't come a moment too soon if the content from the Interop session is on target.
According to Barb Goldworm, president and chief analyst at FOCUS, storage performance and capacity management are the No. 2 and No. 3 limiting factors in virtualization projects. Adding networking experts who are already familiar with Cisco tools, and using a Cisco switch that can be slotted into an existing network management system, means IT managers can concentrate on those storage and capacity management concerns.