Dell is turning to Via Technologies to power a new server designed to run in dense Web hosting environments. Dell chose the Via chips, rather than processors from Intel or AMD, because not only do they offer a smaller power footprint than their counterparts, but they also can handle 64-bit applications and include hardware-based virtualization capabilities. Dell is the latest system maker looking to such low-power chips for servers. Super Micro Computer is rolling out servers based on Intel's Atom processor.
Dell engineers, tasked with creating a server that offered full functionality but that could fit into highly dense Web hosting environments, are turning to Via Technologies to power the systems.
Dell is preparing to roll out the XS11-VX8, code-named Fortuna, which is powered by Via's Nano processor and aimed at Web hosting companies that tend to buy thousands of servers at a time.
Dell officials said the first prototypes of the new system will start reaching about 15 businesses in about three weeks.
The Fortuna system not only shows Dell to be an innovator, rather than simply a company content to follow the market, but also is a significant step for Via in an x86 market dominated by Intel and Advanced Micro Devices.
"This is definitely a very significant step for us," said Epan Wu, senior director of chip marketing at Via. "Dell is a very well-known player, but they're also a new opportunity for us … because they're a demonstration [of] how our [technology] can be used."
The system, which in most configurations will sell for about $400 per server, is coming out of Dell's Data Center Solutions group, a 2-year-old unit that particularly addresses issues in what Drew Schulke, product marketing manager for the group, called the "hyperscale market," the businesses that buy thousands, rather than 10 or 20, of servers at a time.
A couple of large Web hosting companies came to Dell saying they had large workloads that, while not complicated, needed systems that were very power-efficient and could fit into a dense environment, Schulke said.
Monday, May 18, 2009
EPA Sanctions Energy Star Specification for Servers
Starting May 15, CTOs and data center managers evaluating various brands of servers for purchase will have another important factor to consider: whether or not the server has passed the qualifications to wear the EPA's Energy Star label as being energy-efficient and environmentally friendly.
After more than two years of work that started in Santa Clara, Calif., in February 2007, the U.S. Environmental Protection Agency on May 15 officially sanctioned and made public specifications that will determine whether a piece of server hardware qualifies to carry the EPA's well-known blue-and-white Energy Star label.
Starting today, CTOs and data center managers evaluating various brands of servers for purchase will have another important factor to consider: whether or not the server has passed the qualifications to wear the EPA's Energy Star label as being energy-efficient and environmentally friendly.
The new draft specification for servers specifies three main criteria to earn the label: accurate power-supply management capabilities, virtualization functionality, and energy-efficiency benchmarks and standards for measuring and reporting energy use.
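The specification itself defines the official test methods; purely as an illustration of the kind of measuring and reporting involved, the short Python sketch below turns a few hypothetical load-point measurements into idle-power and performance-per-watt figures. The field names and sample numbers are invented and are not taken from the Energy Star server specification.

# Illustrative only: computes the sort of figures an energy-use report might
# contain (idle draw, performance per watt). The load points and field names
# are hypothetical and are not the EPA's actual Energy Star test method.

def performance_per_watt(throughput_ops, avg_power_watts):
    """Work delivered per watt at a given load point."""
    return throughput_ops / avg_power_watts

def efficiency_report(load_points):
    """load_points: list of (load_pct, throughput_ops, avg_power_watts)."""
    rows = []
    for load_pct, ops, watts in load_points:
        rows.append({
            "load_pct": load_pct,
            "power_w": watts,
            "ops_per_watt": round(performance_per_watt(ops, watts), 1),
        })
    return rows

if __name__ == "__main__":
    # Hypothetical measurements for one server: idle, half load, full load.
    samples = [(0, 0.0, 180.0), (50, 90000.0, 260.0), (100, 170000.0, 340.0)]
    for row in efficiency_report(samples):
        print(row)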
Systems makers Dell, IBM, Hewlett-Packard, Sun Microsystems, Pillar Data Systems, BlueArc, Rackable, and Fujitsu—among others—already are producing servers with reduced power draw, virtualization capability, and cooler-running processors that are expected to pass the new specifications to earn the Energy Star label.
The new specifications do not pertain to blade servers, which will be included in a separate initiative. The Energy Star program decided to separate standard x86-type servers from the denser, more powerful and hotter-running blades due to industry doubts about the accuracy of the metric used to measure server energy use when the machines are idle.
Andrew Fanara, director of the Energy Star Product Specifications Development Team, said in a letter to IT systems makers last month that the EPA will continue to work on developing an appropriate test method for measuring blade system idle power.
Storage array energy-usage specifications are next up on the EPA's agenda. That process will start immediately but isn't expected to be completed until the end of the year, at the earliest. Details on the storage specification development process will be released in the next few weeks.
Fanara, who has been on the road for most of the last two years drumming up support for this initiative, is the person who brought the multinational industry together behind this result. eWEEK published a Q&A with Fanara about the goals of the Energy Star program in October 2007.
The Energy Star label initiative has proved highly successful for sales of desktop and laptop computers, refrigerators, clothes dryers and a number of other household appliances, because it increases customer awareness of energy efficiency.
Public awareness of the Energy Star label is strong, reaching more than 75 percent of U.S. households, according to a recent nationwide survey. Last year, more than 35 percent of U.S. households sought out an Energy Star-labeled product to purchase, with 80 percent of buyers reporting they are likely to recommend those products to others.
Power Usage Continues to Rise
Climbing power and cooling costs have become key issues for companies trying to rein in data center expenses.
Despite increased awareness of energy savings, overall power consumption continues to rise, due largely to the sheer number of servers being put into production.
Fanara said that open-systems volume servers "are probably the least efficient in terms of power consumption, and are the easiest to convert or replace than other servers."
Those inefficient and prolific servers are in large part responsible for the stunning fact that servers and supporting infrastructure represent 1.5 percent of all electricity used in the United States, a figure that doubled between 2000 and 2005 and is expected to double again by 2011, Fanara said.
"Over the next few years, power failures and supply limitations will halt operations at some point in 90 percent of all data center operations. Right now, half of all data centers have insufficient power supplies," Fanara told eWEEK.
Hitachi, CommVault Upgrade Data Protection Suite
Hitachi Data Protection Suite 8.0 features CommVault's Simpana 8.0 storage software suite inside HDS hardware. Simpana 7.0 was the 2008 winner of the eWEEK Excellence Award for storage software.
Japan's Hitachi Data Systems and U.S. storage provider CommVault, which have had a longstanding and successful partnership, on May 12 launched an upgraded storage and data protection package aimed at remote offices with virtualized systems.
Hitachi Data Protection Suite 8.0 features CommVault's Simpana 8.0 storage software suite inside HDS hardware. Simpana 7.0 was the 2008 winner of the eWEEK Excellence Award for storage software.
Data Protection Suite 8.0 includes improvements in recovery operations, license management and remote office protection in addition to better virtual server protection, data deduplication, recovery management and content organization, HDS said.
Key features include:
Virtual client support for VMware Infrastructure and Microsoft Hyper-V
Global embedded data deduplication: Block-level deduplication across an entire system's backup and archive copies from disk to tape (a rough sketch of the block-level approach follows this list)
Remote office protection: Provides multiple options for centralized, automated policies applied to data residing on workstations and laptops; helps facilitate compliance and search for rapid e-discovery of orphaned, backup and archived data.
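Block-level deduplication of the kind listed above generally works by fingerprinting fixed-size chunks of data and storing each unique chunk only once, wherever it appears across backup and archive copies. The Python sketch below shows the general idea only; it is not CommVault's implementation, and the block size and names are invented.

# Toy block-level deduplication: fingerprint fixed-size blocks and keep one
# copy of each unique block. Illustrative only; it does not reflect Simpana's
# actual on-disk format or block size.
import hashlib

BLOCK_SIZE = 4096  # example block size; real products choose their own

class DedupStore:
    """Stores one copy of each unique block, keyed by its fingerprint."""
    def __init__(self):
        self.blocks = {}  # fingerprint -> block bytes, stored once

    def ingest(self, data):
        """Split data into blocks, keep unique blocks, return a recipe."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)  # store only if not seen before
            recipe.append(fp)
        return recipe

    def restore(self, recipe):
        """Rebuild the original data from its list of fingerprints."""
        return b"".join(self.blocks[fp] for fp in recipe)

if __name__ == "__main__":
    store = DedupStore()
    backup1 = b"A" * 8192 + b"B" * 4096
    backup2 = b"A" * 8192 + b"C" * 4096  # shares two blocks with backup1
    r1, r2 = store.ingest(backup1), store.ingest(backup2)
    print(len(store.blocks), "unique blocks stored for", len(r1) + len(r2), "logical blocks")
    assert store.restore(r1) == backup1 and store.restore(r2) == backup2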
HDS' Data Protection Suite 8.0 became available on May 12.
UltraBac Software Expands Line of Server Products
UltraBac Software, based in Bellevue, Wash., announced an expansion of its line of server products aimed at cost-conscious businesses. The UltraBac Standard SBS Bundle and UltraBac Premium SBS Bundle are aimed at small to medium-size businesses (SMBs) and provide standard file-by-file backup along with the ability to perform image bare metal disaster recovery to dissimilar hardware and virtual machines.
Pricing is $695 for the standard edition and $895 for the premium edition; a full year of technical support and update protection is included with each product. UltraBac says that by bundling file-by-file backup with bare metal disaster recovery, users can recover a complete system in a matter of minutes. Both products also include the ability to restore the image of a failed SBS server to a variety of drives, controllers and virtual machines such as VMware or Microsoft Hyper-V.
The UltraBac Standard SBS Bundle includes protection for Exchange, SharePoint and exclusively locked files. The solution protects the server as well as up to 100 Windows clients. The Premium SBS Bundle expands on this offering by providing the SQL component many small businesses require in current network environments. Both products have been specifically tailored to Microsoft Small Business Server users.
Company CEO Morgan Edwards says it is not financially feasible for every small business to have a full IT staff, or in many cases even one full-time network administrator. Edwards says the tech industry should not expect midmarket companies to have the manpower to invest hours in learning best practices of data protection; it is UltraBac's job to provide it.
“Our software is designed to easily install and recover reliably right out of the box,” Edwards said. “Unlike some of our competitors who require the purchase of support and maintenance with their small business products, UltraBac Software does not have this requirement.”
Edwards said the company doesn’t believe in the illusion of low pricing and then tacking on extra fees on the back end, and he said small businesses don't need those types of headaches. “UltraBac Software believes in being as straightforward and upfront as our products are,” he said. "We recognize the strength of the small business community and continually strive to provide them with the features their unique environments require."
Windows 7's XP Mode Will Be a Desktop Virtualization Boost
Windows 7's XP Mode combines the company's desktop and presentation virtualization technologies to serve up applications that won't run properly on Windows 7 from a virtual XP SP3 instance. By tapping desktop-based virtualization as a bridge for Windows software compatibility gaps, organizations could achieve a smooth transition from Windows to a competing platform.
Last month, Microsoft announced that Windows 7 will include an XP Mode, which combines the company's desktop and presentation virtualization technologies to serve up applications that won't run properly on Windows 7 from a virtual XP SP3 instance.
When I heard about XP Mode, I was immediately struck by the marketing benefits that the feature can provide for non-Windows platforms. That's because tapping desktop-based virtualization as a bridge for Windows software compatibility gaps is one of the keys to achieving a smooth transition from Windows to a competing platform.
When someone asks me about moving away from Windows to Linux or the Mac, I tell them that they'll most likely find native Mac or Linux replacements for their Windows applications, but that it may be necessary to run a copy of Windows in a virtual machine for certain applications.
I keep a Windows VM on my Linux notebook for things like product testing and attending GoToMeeting conferences. (Microsoft's own Live Meeting is, by comparison, very Linux-friendly.) The Windows VM approach to platform-switching can work pretty well, but this tactic does have various wrinkles.
First, you need a licensed copy of Windows and enough RAM to devote to the Windows guest without starving your host OS. Also, you'll need the same sort of security software and patching policies you would apply to a regular Windows instance. Finally, depending on the type of application you're dealing with, performance might be an issue, and applications that require direct access to hardware resources might not work at all.
Now that Microsoft is pushing virtualization as a crutch for migrating from XP to Windows 7, it may occur to many that upgrading from XP to 7 wouldn't prove significantly more painful than moving from XP to OS X or Linux—particularly since XP Mode on Windows 7 shares most of the same wrinkles that mar XP on Linux or Mac setups.
More importantly, though, XP Mode will introduce the idea and the practice of running multiple, reasonably isolated OS instances on a single machine to a broader pool of users. As more people embrace the practice, I expect to see Microsoft and other vendors work out more of its kinks and, eventually, offer new classes of products aimed specifically at enabling these Russian doll desktop scenarios.
Despite the possibly beneficial side effects of XP Mode for alternative platforms, I believe that Microsoft and Windows are best-positioned to take advantage of the rise of virtual desktop machines.
As eWEEK Labs has discussed recently, the lines between personal and company devices and computing environments are now more blurry than ever. As I see it, the best way to provide both individual users and large organizations with the control they require to satisfy their needs is to provide multiple virtualized environments on a single piece of hardware.
Given its advantages around available applications, integrated identity and desktop management capabilities, and mind and market share among businesses, Windows seems to be the clear option for delivering the managed corporate desktop element of these mixed environments.
XP Mode could be a first step toward colonizing the virtual desktop territories, but for something like this to really take off, Microsoft will have to begin approaching VMs as a first-class "hardware" platform and look toward stripping out bits that aren't required in these environments. Also, we'll have to see more advances in bare-metal desktop and notebook hypervisor technologies, like those demonstrated by Citrix in the form of its Project Independence.
Maybe desktop platform diversity and Microsoft monoculture can live side by side, after all. If nothing else, Microsoft would probably be less touchy about mounting "I'm a Mac" choruses if managed Windows instances lurked beneath more of Apple's matte aluminum covers.
IBM Debuts System S Stream Computing Platform
At IBM's annual investor meeting on May 13, the IT infrastructure and software company announces the commercialization of System S, IBM's stream computing software that advances parallelism to deliver real-time business analytics.
At its annual investor meeting on May 13, IBM announced the commercialization of System S, the company's stream computing software that advances parallelism to deliver real-time business analytics capability.
IBM also announced the opening of the IBM European Stream Computing Center, headquartered in Dublin, Ireland. The center will "serve as a hub of research, customer support and advanced testing for what is expected to be a growing base of European clients who wish to apply stream computing to their most challenging business problems," IBM said in a news release.
Nagui Halim, chief scientist for IBM's System S project, said the effort started as a project in IBM Research at the end of 2003 that became one of the largest software research projects ever conducted inside IBM Research. Halim said with System S and stream computing the focus is on delivering insight and foresight, not hindsight. According to the IBM release:
System S is built for perpetual analytics—utilizing a new streaming architecture and breakthrough mathematical algorithms to create a forward-looking analysis of data from any source—narrowing down precisely what people are looking for and continuously refining the answer as additional data is made available.
For example, System S can analyze hundreds or thousands of simultaneous data streams—stock prices, retail sales, weather reports, etc.—and deliver nearly instantaneous analysis to business leaders who need to make split-second decisions. The software can help all organizations that need to react to changing conditions in real time, such as government and law enforcement agencies, financial institutions, retailers, transportation companies, healthcare organizations, and more.
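The pattern IBM describes, continuously refining an answer as new data arrives rather than re-querying a stored data set, can be sketched in a few lines. The Python below is only an illustration of that idea, using a windowed running average and a deviation alert; it is not System S or SPADE code, and the symbols and thresholds are invented.

# Toy "perpetual analytics": each incoming event updates a running result
# immediately, instead of being stored and queried later. Not IBM code.
from collections import defaultdict, deque

class RunningStats:
    """Keeps a windowed mean per key and flags values that deviate sharply."""
    def __init__(self, window=50, threshold=2.5):
        self.threshold = threshold
        self.values = defaultdict(lambda: deque(maxlen=window))

    def update(self, key, value):
        vals = self.values[key]
        vals.append(value)
        mean = sum(vals) / len(vals)
        spread = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        if spread and abs(value - mean) > self.threshold * spread:
            return "ALERT %s: %.2f deviates from running mean %.2f" % (key, value, mean)
        return None

if __name__ == "__main__":
    stats = RunningStats()
    # A stream of quotes arrives one event at a time; the answer is refined
    # with every event rather than computed afterward from storage.
    stream = [("ACME", 10.0 + 0.05 * (i % 5)) for i in range(40)] + [("ACME", 14.0)]
    for symbol, price in stream:
        alert = stats.update(symbol, price)
        if alert:
            print(alert)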
Moreover, IBM is commercializing the technology at a time when clients need it most—during the global economic crisis. "Using computers to rapidly analyze multiple streams of diverse, unstructured and incompatible data sources in real time, enabling fast, accurate and insightful decisions," as IBM described the potential of System S, can be a competitive advantage for companies.
For instance, global market data is growing at a rapid rate and "needs to be ingested, decoded, processed and responded to in short order," and System S enables users to do that, IBM contended.
Indeed, Halim said TD Securities is using System S to ingest more than 5 million bits of trading data per microsecond to make faster financial trading decisions. To match the capacity of the system, a trader would have to be able to read the entire works of Shakespeare 10 times in less than 1 second and then identify and execute a stock trade faster than a hummingbird flaps its wings, he said.
"System S software is another example of IBM helping clients through our long-term investments in business analytics and advanced mathematics," John Kelly III, IBM senior vice president and director of IBM Research, said in a statement. "The ability to manage and analyze incoming data in real time, and use it to make smarter decisions, can help businesses and other enterprises differentiate themselves."
According to the release:
IBM is making System S trial code available at no cost to help clients better understand the software's capabilities and how they can take advantage of it for their business. This trial code includes developer tools, adapters and software to test applications.
Halim said the System S software can be configured to run on a supercomputer, a cluster of blades or even a single computer. Its first iteration is aimed at commodity hardware, he said. And it can be configured to attack a broad set of problems across a wide range of industries, he said.
Moreover, Halim said to make the System S concept work, IBM had to come up with a new language. However, as a computer scientist experienced in using available tools, he said, "I was reluctant to embark on creating a new language, but stream processing is a significant development and warrants a new language."
That language is SPADE, which stands for Stream Processing Application Declarative Engine. "SPADE allows you to describe the topology of what you're working on," Halim said.
A description of SPADE on an IBM Research Web page reads, "SPADE ... is a programming language and a compilation infrastructure, specifically built for streaming systems. It is designed to facilitate the programming of large streaming applications, as well as their efficient and effective mapping to a wide variety of target architectures, including clusters, multicore architectures and special processors such as the Cell processor. The SPADE programming language allows stream processing applications to be written with the finest granularity of operators that is meaningful to the application, and the SPADE compiler appropriately fuses operators and generates a stream processing graph to be run on the Stream Processing Core of System S."
Halim said in addition to the SPADE language, developers can use the SPADE compiler and Eclipse-based IDE (integrated development environment) along with administration, configuration, and installation tools and adapters to build and deploy System S applications.
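SPADE's own syntax is not reproduced here. Purely as a conceptual sketch, the Python below shows what declaring a topology of fine-grained operators and then fusing them can look like: small operators are defined separately, then composed into a single processing function. It is not SPADE and not IBM code, and the trade-filtering example is invented.

# Conceptual sketch only: fine-grained stream operators declared separately
# and then "fused" into one composed function, loosely echoing what the SPADE
# compiler is described as doing.

def op_map(fn):
    """Operator that applies fn to every item in a stream."""
    def run(stream):
        return (fn(item) for item in stream)
    return run

def op_filter(pred):
    """Operator that keeps only the items satisfying pred."""
    def run(stream):
        return (item for item in stream if pred(item))
    return run

def fuse(*ops):
    """Compose a chain of operators into one operator (a tiny 'fusion')."""
    def run(stream):
        for op in ops:
            stream = op(stream)
        return stream
    return run

if __name__ == "__main__":
    # Topology: parse trade lines -> keep large orders -> tag them.
    pipeline = fuse(
        op_map(lambda line: dict(zip(("symbol", "qty"), line.split(",")))),
        op_filter(lambda trade: int(trade["qty"]) >= 1000),
        op_map(lambda trade: dict(trade, large_order=True)),
    )
    for out in pipeline(iter(["ACME,500", "ACME,2500", "INITECH,1200"])):
        print(out)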
"Traditional computing models retrospectively analyze stored data and cannot continuously process massive amounts of incoming data streams that affect critical decision making. System S is designed to help clients become more 'real-world aware,' seeing and responding to changes across complex systems," IBM said in the release.
According to IBM's release, other early uses of System S include:
Uppsala University and the Swedish Institute of Space Physics are using System S to better understand "space weather," which can influence energy transmission over power lines, communications via radio and TV signals, airline and space travel, and satellites. By using the LOIS Space Center radio facility in Sweden to analyze radio emissions from space in three dimensions, scientists use this technology to compile endless amounts of data and extract predictions on activities in space. Since researchers need to measure signals from space over large time spans, the raw data generated by even one antenna quickly becomes too large to handle or store. System S analyzes the data immediately as it streams from sensors. Over the next year or so the project is expected to perform analytics on at least 6 gigabytes per second or 21,600 gigabytes per hour – the equivalent of all the Web pages on the Internet.
The Marine Institute of Ireland is using System S to better understand fragile marine ecosystems. As a core component of this collaboration, a real-time distributed stream analytical fabric for environmental monitoring and management is under development. Acting on large volumes of underwater acoustic data and processing it in real-time, the Institute extracts useful information such as species identification of marine life, population count and location. [...]
IBM and the University of Ontario Institute of Technology (UOIT) are using System S to help doctors detect subtle changes in the condition of critically ill premature babies. The software ingests a constant stream of biomedical data, such as heart rate and respiration, along with clinical information about the babies. Monitoring "preemies" as a patient group is especially important as certain life-threatening conditions such as infection may be detected up to 24 hours in advance by observing changes in physiological data streams. The type of information that will come out of the use of System S is not available today. Currently, physicians monitoring preemies rely on a paper-based process that involves manually looking at the readings from various monitors and getting feedback from the nurses providing care.
Oracle's Virtual Iron Buyout Will Provide Essential VM Tool Set
Oracle has a number of reasons to want to own a mature virtualization tool set, and acquiring Virtual Iron contributes to that goal. To become the full-service IT infrastructure company it envisions, Oracle needs more control of virtualized software and hardware for all its deployments. Oracle doesn't want to keep paying a so-called virtualization tax to third-party providers such as VMware.
Oracle, a company with its own permanent mergers and acquisitions office, is adding an important ingredient to its product catalog in a quest to become the newest all-purpose IT systems company: a new-generation tool box that will administer both Windows and Linux virtualization deployments.
When it closes a deal to acquire Virtual Iron announced May 13, Oracle will join EMC (owner of VMware), Microsoft (Hyper-V), Citrix Systems (XenServer) and Sun Microsystems (Sun Containers, xVM Ops Center and VirtualBox software) as one of the only IT systems providers that own server virtualization products.
After the summer of 2009, that number of companies will shrink by one, because Sun also will have become property of Oracle in the widely reported $7.4 billion acquisition deal announced April 20.
VMware products are installed on about 85 percent of all enterprise IT systems, with the others all claiming much smaller pieces of the virtualization pie.
Oracle has a number of reasons to want to own a mature virtualization tool set.
First, to become the full-service IT infrastructure company it envisions, it needs more control of virtualized software and hardware for all its deployments. Oracle doesn't want to keep paying a "virtualization tax" to third-party providers like VMware or any other company.
Secondly, Oracle needs a more complete set of tools for its home-developed Xen-based hypervisor, Oracle VM. It's not an accident that Virtual Iron's platform also is Xen-based, built on open-source code. Oracle's virtual machine controls currently do not have management features as good as Virtual Iron's LivePower, which offers much greater control of server power consumption. So the acquisition also is a green IT move for Oracle.
Oracle intends to bundle Virtual Iron's tools with its own VM layer to give users a full-stack management console for both virtual and physical systems. Virtual Iron also features better capacity utilization and virtual server configuration tools than Oracle offers today.
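Dynamic power management of the LivePower variety typically works by consolidating virtual machines onto fewer hosts when aggregate load is low and powering down the hosts that are emptied. The Python sketch below shows that decision logic in highly simplified form; it is not Virtual Iron's or Oracle's algorithm, and the 40 percent watermark and data shapes are arbitrary examples.

# Simplified illustration of a consolidation/power-down policy. Not Virtual
# Iron's or Oracle's actual logic; threshold and inputs are invented.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float                              # normalized CPU capacity
    vms: dict = field(default_factory=dict)      # vm name -> demand

    @property
    def load(self):
        return sum(self.vms.values())

def consolidation_plan(hosts, low_watermark=0.40):
    """Suggest migrations that empty lightly loaded hosts so they can power off."""
    plan = []
    donors = sorted((h for h in hosts if h.load / h.capacity < low_watermark),
                    key=lambda h: h.load)
    for donor in donors:
        for vm, demand in list(donor.vms.items()):
            target = next((h for h in hosts
                           if h is not donor and h not in donors
                           and h.load + demand <= h.capacity), None)
            if target is None:
                break                            # nowhere to place this VM; host stays up
            plan.append(("MIGRATE", vm, donor.name, target.name))
            target.vms[vm] = donor.vms.pop(vm)
        if not donor.vms:
            plan.append(("POWER_OFF", donor.name, None, None))
    return plan

if __name__ == "__main__":
    hosts = [Host("h1", 1.0, {"db": 0.5, "web": 0.3}),
             Host("h2", 1.0, {"batch": 0.15}),
             Host("h3", 1.0, {"test": 0.10})]
    for step in consolidation_plan(hosts):
        print(step)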
With Virtual Iron leaving the ranks of providers of independent virtualization options, only a small number of them remain in the market, including Parallels, OpenVZ and Ubuntu Linux.
"Market consolidation seems to be upon us," Galen Schreck, an analyst with Forrester Research, told eWEEK. "Plus, Citrix's move to give away a full-featured version of XenServer makes it pretty hard to charge for this kind of functionality.
"What's a company like Virtual Iron to do? Both are Xen-based, and have pretty similar capabilities. Sure, Citrix charges extra for its most advanced management, but you get a lot of functionality for no money whatsoever. Meanwhile, VMware is the clear market leader with Microsoft being the next most popular platform in a distant second place."
Virtual Iron aimed its wares mostly at the small and midsize business markets. Is Oracle making a play for the smaller markets with this acquisition?
"I don't think this acquisition is about smaller markets—it's more of an upgrade to the management capabilities of Oracle's own Xen-based hypervisor," Schreck said. "They get a better UI [user interface] as well as dynamic workload management and power management."
Schreck said it is still unclear how Oracle will handle the integration of both Sun and Virtual Iron into its catalog.
"There is definitely some overlap here," Schreck said. "Neither product has a lot of customers, so it's not a question of which has more market traction. Sun's xVM Ops Center is a nice product, but Virtual Iron is more Windows-friendly—which gives Oracle immediate access to the largest virtualization market."
'Interesting dynamic' with VMware
The Virtual Iron acquisition creates an interesting competitive dynamic with VMware, Zeus Kerravala of The Yankee Group told eWEEK.
"They're not the best of partners, but they do some work together," Kerravala said. "As for Sun, it [Virtual Iron] is a parallel offering. Oracle didn't have any way to virtualize Windows or Linux environments."
Katherine Egbert, an analyst with Jefferies & Co., said she believes the acquisition is a clear sign that Oracle wants to move deeper into the midmarket, a place it has hardly penetrated in the past.
"It is a midmarket play. Virtual Iron has lot of government and education [customers] in their installed base," Egbert said. "Oracle gets the full stack now, everything from the bare-metal hypervisor up to the highest-level user application."
Oracle's Latest Acquisition Is Virtual Iron
Oracle purchased Virtual Iron, a designer of server virtualization software, after months of rumors. The acquisition will allow Oracle to compete more fully within the virtualization market against VMware, Microsoft and Citrix Systems, by offering a more robust Oracle VM that incorporates Virtual Iron’s technology. Oracle has been on something of a buying spree in 2009, including its recent $7.4 billion buyout of Sun Microsystems.
Oracle has purchased Virtual Iron, designer of server virtualization software for cost-conscious businesses.
The latest Oracle acquisition should come as a surprise to few. In March 2009, rumors circulated that Oracle was close to acquiring Virtual Iron, following a research report by Katherine Egbert, an analyst with Jefferies & Co., who wrote that Oracle would likely purchase the company to strengthen its server virtualization management capabilities.
Virtual Iron, which started in 2003, specializes in low-cost virtualization products for a wide range of businesses, from small mom-and-pop operations to the enterprise. Previously, it attempted to challenge VMware and Citrix Systems, mostly on price point. Considered a relatively minor player with middling market share in the virtualization market, the company had roughly 2,000 customers and a reported $65 million from its last round of funding.
In a statement, Oracle suggested that incorporating Virtual Iron’s technology would allow it to "provide more comprehensive and dynamic resource management across the full software stack." The acquisition of Virtual Iron, along with the additional virtualization technology Oracle now owns in the wake of the Sun Microsystems deal, should allow the software company to better compete against VMware, as well as Citrix Systems and Microsoft with its Hyper-V technology.
Like Oracle, Virtual Iron utilizes the open-source Xen hypervisor, and its products could potentially be used by Oracle to strengthen the Oracle VM, presenting customers with the option to virtualize within the Oracle ecosystem as opposed to relying on a product from VMware or other players in the space.
Financial details of the agreement, which is expected to close this summer, were not disclosed.
"With the addition of Virtual Iron, Oracle expects to enable customers to more dynamically manage their server capacity and optimize their power consumption," Wim Coekaerts, vice president of Linux and Virtualization Engineering for Oracle, said in a statement. "The acquisition is consistent with Oracle’s strategy to provide comprehensive enterprise software management and will facilitate more efficient management of application service levels."
Even in the face of the global recession, Oracle has continued to acquire companies in 2009, following on the 11 purchased in 2008. Its first buyout of the year was mValent, a small company that offered configuration management solutions.
Oracle landed a much bigger fish, however, in April 2009, when it announced plans to acquire Sun in a deal worth roughly $7.4 billion, or $9.50 a share. The Sun acquisition allows Oracle to more fully leverage Java and Solaris for many of its products, and to compete more aggressively against IBM and its DB2 database.
Tuesday, May 12, 2009
Cisco Offers Cloud Computing Infrastructure for Service Providers
Cisco’s Unified Service Delivery initiative combines its Nexus 7000 networking switch offering, Unified Computing System converged data center offering, CRS-1 carrier network platform and new CRS-1 modules to give service providers the building blocks for creating cloud computing environments. The plan also would extend the reach of virtualization beyond the data center to connections between data centers and next-generation Web-based networks. Driving the need for Unified Service Delivery is the growing demand from businesses to get more services out of their networks, Cisco officials said.
Cisco Systems is pulling together key pieces of its data center and networking portfolios to create a blueprint for building a cloud computing infrastructure for service providers.
Dubbed Unified Service Delivery, the initiative combines such Cisco technologies as its high-end enterprise-level Nexus 7000 Switch Series, CRS-1 carrier network platform, Unified Computing System converged data center offering and a new carrier routing system with the capabilities offered through IP NGNs (IP next-generation networks).
Cisco officials said the new initiative, announced May 12, extends the company's data center portfolio and is part of its Data Center 3.0 strategy, which looks to unify the various parts of data centers and simplify the management and operation of the facilities.
The Unified Service Delivery initiative is driven in large part by the demands of businesses for more services from their data center networks—Cisco is estimating 46 percent annual growth in global Internet traffic—and the promise of cloud computing.
"The unification of the data center and the IP Next Generation Network is a natural progression not just in the evolution of networking,” Kelly Ahuja, senior vice president and general manager of Cisco’s service provider routing technology group, said in a statement. “It also builds the foundation for innovative service providers … to enable them to optimize their networks toward delivering new revenue-generating cloud-based services."
Cisco’s new CRS-1 Carrier Routing System includes two new 10 Gigabit modules and a 40 Gigabit forwarding processor for the CRS-1 platform. The system is designed to extend virtualization technology from the data center to the IP NGN core. It also addresses the needs of peering and interconnect applications for a service provider's data centers.
Using the CRS-1 platform and new modules, service providers can virtualize traffic and network operations on a per-service or per-customer basis. They are combined with the Nexus 7000 Series Switch and Cisco’s UCS, which converges blade servers, storage, networking, virtualization and management software into a single data center entity.
Cisco officials said the Unified Service Delivery initiative is designed to enable virtualization in the data center, between data centers and across IP NGNs.
Juniper Ethernet Switch Optimized for Cloud Computing
Juniper Networks is rolling out a new Ethernet switch that is aimed at data centers with 10 Gigabit Ethernet networking technologies and cloud computing environments. The EX8216 switch is part of Juniper’s EX8200 family of Ethernet switches that are designed to offer high performance, reliability and energy efficiency at a lower cost than rival products. Like other vendors, Juniper is looking to take away market share from Cisco by offering networking products with better performance at a lower cost.
Juniper Networks is unveiling a new network switch optimized for the growing presence of high-density 10 Gigabit Ethernet technology in data centers and for cloud computing environments.
The EX8216 Ethernet switch, announced May 12, is a 16-slot high-performance platform with fabric capacity of up to 12.4 terabits per second.
Juniper said the new switch is part of the EX8200 family of modular switches that deliver the wire-rate performance, low latency and carrier-class reliability enterprises need to consolidate network layers, which reduces complexity and capital and operational expenses throughout the data center.
Overall, the EX8200 switches—with a per-slot capacity of 320 Gigabits per second and the ability to deliver up to 2 billion packets of data per second—are designed to give enterprises an easy migration path to future 100GbE networks. Currently, 10GbE is rapidly becoming a strong presence in data centers, driven by increased use of virtualization technology, greater infrastructure consolidation and the spread of network-intensive applications, such as Web 2.0 technologies and video streaming.
Hitesh Sheth, executive vice president and general manager of Juniper’s Ethernet Platforms Business Group, said the EX8216 switch delivers twice the performance and consumes a third less power than competing products.
"With the rest of Juniper's portfolio, the EX8216 enables new data center and cloud computing architectures that lower complexity, deliver increased functionality and reduce overall total cost of ownership through innovative system designs that can lower both capital and operating expenses," Sheth said in a statement.
Like others in the networking sector, Juniper is looking to chip away at Cisco Systems’ dominance in part by offering high-performance switches that cost less and use less power than those from Cisco.
In addition, Juniper is relying on its strategy of having a single operating system—Junos Software—for its networking portfolio, arguing that it helps reduce complexity in increasingly complex data centers.
Extreme Challenges Cisco, Juniper with Ethernet Modules
Extreme Networks is rolling out its BlackDiamond 8900-Series modules, which offer scalability and flexibility in 1GbE and 10GbE data center environments. Extreme officials say the modules offer better density, cost and energy efficiency than competing products from vendors such as Cisco and Juniper. Like companies such as HP, Juniper and 3Com, Extreme sees changes in the data center and economic pressures forcing enterprises to consider alternatives to Cisco.
Extreme Networks is putting the final touches on its Ethernet data center strategy with the rollout of the BlackDiamond 8900-Series modules.
The BlackDiamond 8900-Series modules for Extreme’s BlackDiamond Series 8800 switches include a 24-port 10 Gigabit Ethernet card, 96-port 1GbE card and 128/80 Gigabit-per-slot fabric.
The products were announced May 11 and will be demonstrated by Extreme at the Interop 2009 conference in Las Vegas May 17-21.
Company officials said the products arrive as changes continue in the data center and as IT departments, under pressure from the global recession and increasing business demands, begin looking for alternatives to dominant networking vendor Cisco Systems.
In the data center, virtualization, multicore processing and blade servers are ramping up the density of the computing environment, said Kevin Ryan, senior director of data center solutions for Extreme.
“Data center and enterprise customers are looking to get a lot more density in the data center,” Ryan said.
Where once IT administrators were putting anywhere from three to 10 virtual machines on a single physical server, now that number is climbing to as high as 50 VMs, he said. That type of density, plus the rapidly increasing amount of data that needs to be stored and the growth of resource-intensive applications such as high-definition video, unified communications and Web 2.0 social platforms, is driving the demand for 10GbE in the data center.
Several analyst firms agree with that view. In March, Infonetics Research and Dell’Oro Group both issued reports that essentially said while the overall Ethernet switch market was down, the 10GbE switch space would still see positive revenue and port growth in 2009.
However, Ryan said the move to 10GbE was still in a transition phase, which is fueling demand for products that offer flexibility.
“Ten Gigabit Ethernet is taking off,” he said. “But there’s still a mix of 1 and 10G.”
Extreme is delivering that flexibility, Ryan said. The BlackDiamond 8900-Series modules offer 10GbE and 1GbE ports, and can scale up to 582 10GbE ports in a single rack. Overall, the 8800 Series switches offer a single, modular operating system, ExtremeXOS, along with quick integration and automation of network environments through the company’s Universal Port feature.
In addition, the vendor has enabled XML, CLI (command-line interface) and SNMP (Simple Network Management Protocol) interfaces through its EPICenter management tool.
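To give a sense of what programmatic access to such a switch can look like, the following sketch polls a device's standard MIB-2 system description over SNMP. It is purely illustrative: it assumes the open-source pysnmp library is installed, and the management address 192.0.2.10 and the "public" community string are placeholders rather than values drawn from Extreme's documentation.

# Minimal SNMPv2c GET sketch (assumes pysnmp 4.x is installed).
# The address 192.0.2.10 and the "public" community string are placeholders,
# not values taken from Extreme's EPICenter documentation.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

SWITCH = "192.0.2.10"              # hypothetical management IP of the switch
SYS_DESCR = "1.3.6.1.2.1.1.1.0"    # standard MIB-2 sysDescr object

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),    # SNMPv2c read community
           UdpTransportTarget((SWITCH, 161)),
           ContextData(),
           ObjectType(ObjectIdentity(SYS_DESCR)))
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")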
The 8900-Series modules, which can fit into Extreme’s 8806 or 8810 chassis, also offer energy efficiency through dynamic energy management features. For example, the modules can go into hibernation mode during off-peak hours, reducing power consumption by up to 70 percent.
Such capabilities are key differentiators for Extreme from its competitors, in particular Cisco, Ryan said. Cisco has multiple OSes for its myriad offerings, which “creates a lot of complexity that leads to a lot of problems,” he said.
Extreme isn’t the only networking vendor to sense some vulnerability in Cisco’s dominance, thanks in large part to the recession, as well as Cisco’s push into other areas of the data center, such as servers, through its Unified Computing System initiative.
Juniper Networks and Hewlett-Packard have been making strong plays in the space, and one-time network giant 3Com announced May 11 that it was making a push back into the global enterprise networking space on the strength of its H3C business in China.
Extreme’s Ryan said the recession is forcing enterprises to closely scrutinize their IT spending and to look for alternatives as ways to increase efficiencies and save money.
As a result, Extreme is finding new business opportunities, he said.
“We’re seeing that we’re being considered in [enterprise] proposals that maybe 18 months ago we wouldn’t be in,” Ryan said.
With its BlackDiamond platform, “we’ve been able [to] make the performance [of its offerings] very high and keep the costs very low,” he said.
Extreme officials claim that their 8900-Series modules beat Cisco and other competitors in bandwidth per line card, 10GbE port density and cost of acquisition.
Extreme’s BlackDiamond 8900-Series modules will be available this quarter, starting at $24,995.
Monday, May 11, 2009
3Tera, Tap In Systems Partner for Cloud Computing Monitoring
3Tera is integrating Tap In Systems’ Cloud Management Service into its own AppLogic cloud computing platform, a move both companies say will give enterprises the same level of monitoring for their cloud environments that they are used to in their data centers. Tap In Systems’ CMS will be presented as a virtual appliance within 3Tera’s AppLogic platform.
Cloud computing and utility computing vendor 3Tera is adding greater management capabilities to its AppLogic platform through a partnership with Tap In Systems.
3Tera is integrating Tap In Systems’ CMS (Cloud Management Service) into the standard AppLogic catalog of virtual appliances, which will enable AppLogic users to enhance their monitoring capabilities for critical alerts, system utilization and notification on AppLogic-based applications.
“This solution blends together the best of both worlds—a turnkey cloud computing platform and enterprise-class cloud monitoring—to create a powerful solution,” Peter Loh, Tap In Systems' founder and CEO, said in a statement.
With the integration, 3Tera’s AppLogic platform will feed metadata about applications into Tap In Systems’ CMS to give users an up-to-date view of those applications. In addition, users will be able to get notifications and alerts for critical events, system utilization and performance issues for applications, while statistics and root-cause analyses simplify the process of setting up those alerts and notifications. Customers can also remotely monitor multiple applications and compute clouds, as well as on-premises systems, using the same interface.
Running on the AppLogic platform also increases the availability of Tap In Systems’ CMS.
“Enterprise users adopting cloud computing expect the same level of monitoring capability they can achieve in their own data centers,” Bert Armijo, senior vice president of sales, marketing and product management for 3Tera, said in a statement, adding that the partnership with Tap In Systems provides that.
Citrix Goes Do-It-Yourself with New Products, Services
Topping the new arrivals at Synergy 2009 are Dazzle, an online store for enterprise applications designed along the iTunes model; Citrix Receiver, a new iPhone app that enables users to access enterprise applications from anywhere; and an upgraded Citrix Essentials virtualization management package for its own XenServer hypervisor and for Microsoft's Hyper-V.
Desktop and data center virtualization software provider Citrix moved full force into do-it-yourself mode May 5, launching a list of new self-service-type products and services at its annual Synergy 2009 users conference in Las Vegas.
The software and services—which also include some upgrades of existing products—ran the full scope of markets, from the enterprise to SMB to the home user.
Topping the new arrivals are Dazzle, an online store for enterprise applications designed along the iTunes model; Citrix Receiver, a new iPhone app that enables users to access enterprise applications from anywhere; and a major upgrade to Citrix's Essentials virtualization management package for XenServer hypervisor and Microsoft's Hyper-V.
"Dazzle is really the first online store for enterprise applications," Wes Wasson, Citrix senior vice president and chief marketing officer, told eWEEK. "It's just like iTunes; everybody knows how to use iTunes.
"If you look at the state of enterprise computing today, it's collapsing under its own weight of complexity. One of the biggest answers to this is just embracing consumerism and moving toward self-service, on-demand. That's what we're aiming at with all these new products and services," Wasson said.
Dazzle, to be made available later this year, is a freely downloadable interface tool that enables users to find and utilize enterprise applications. Dazzle installs in front of existing delivery infrastructures and works with current Citrix enterprise products such as XenApp and XenDesktop to bring an "intuitive user experience that requires no training, similar to how Apple iTunes works," Wasson said.
Using Dazzle, anybody on any computer can browse and search for whatever application they need based on name, description or type. The applications can be selected, stored and organized into custom lists.
For compliance and legal purposes, Dazzle can be programmed to send a message to an IT manager to authorize the use of licensed applications.
Versatile New Virtual Client
Citrix Receiver, a new virtual software client, enables enterprise IT to deliver desktops and applications as an on-demand service to any device in any location, Wasson said. Receiver runs in the background of a virtual desktop and improves the ability of Citrix to update applications by pushing out the changes automatically, he said.
"When there is an update to a XenApps, for instance, we can push it directly to all users who have Citrix Receiver," he said. "They don't even have to think about it."
Receiver has a version that runs on the Apple iPhone, which enables standard enterprise applications such as Microsoft PowerPoint and Word to work on the device, Wasson said.
Using Receiver, enterprise staff members who need to access their work desktops from any location can go to a URL given to them by IT. The IT manager, however, maintains control over the employee's workspace for security purposes.
Finally, outside developers can use Receiver to test their applications to run on various devices, Wasson said. This eliminates the need to build, test and support specific software clients for each type of device.
Citrix also released an upgrade to its Citrix Essentials virtualization management package for its own XenServer hypervisor and for Microsoft's Hyper-V.
The new 5.5 version of Citrix Essentials features expanded data storage integration, automated storage management, dynamic workload balancing and Active Directory integration. It also offers an enhanced search feature, which allows for search by VM name, resource pool, location, server, storage repository, snapshot time and network name—all from a single location.
Citrix Virtualizes Its NetScaler App Server
In the past, the NetScaler physical appliance has been deployed to house large-scale Web applications, such as those used for financial services and scientific computing, and required purpose-built equipment and specialized networking expertise to deploy and manage. With the new NetScaler VPX software, this functionality is now available to users in a form that can be downloaded from the Web and run on any standard x86-based server.
Virtualization software provider Citrix Systems on May 5 launched a new virtual version of its NetScaler MPX hardware server appliance, called NetScaler VPX.
The company made the announcement at its Synergy 2009 conference in Las Vegas.
In the past, the NetScaler physical server has been deployed to house large-scale Web applications—such as those used for financial services and scientific computing—and required purpose-built equipment and specialized networking expertise to deploy and manage.
With the new NetScaler VPX software, this functionality is now available to users in a much easier-to-use, low-cost form that can be downloaded from the Web and run on any standard x86-based server.
Virtualizing the NetScaler platform opens up a wide range of new dynamic, on-demand deployment options, for both enterprise customers and cloud service providers, Wes Wasson, Citrix's senior vice president and chief marketing officer, told eWEEK.
"This does a couple of things: It makes it more accessible, you can run smaller applications, you can do multitenancy, and it's less expensive," Wasson said. "Essentially, you can now do all the things that traditional networking guys who use big proprietary hardware-backplane kinds of environments are doing. It just doesn't seem like that's the way the world is going.
"It [VPX] also helps create a very dynamic, Web-delivery fabric, where you've got hardware NetScaler systems running up front, and VPXes running in a SAN storage environment that are connected with the application workload, as demand goes up and down. This is going to break open a lot of new ground here."
Software-based application delivery controllers (known as "soft ADCs") such as VPX are becoming a trend of sorts. SoftADCs make it easier for IT organizations to apply established application deployment processes for tasks such as provisioning, charge-back and automation to their application delivery infrastructure.
"SoftADCs will be an important component of the ADC market, serving customers from small to midsize enterprises, where price is critical, to large enterprise and cloud data centers, where flexibility and agility are paramount," said Gartner Research Vice President Joe Skorupa. "By 2013, softADCs will account for nearly one-third of all ADC units shipped."
NetScaler VPX will be available for public tech preview download on May 18.
NetScaler VPX will be available for general commercial purchase in the third quarter of 2009. Pricing for the new VPX versions of these appliances is not being announced at this time, Citrix said.
Hyperformix Eases Virtualization Capacity Planning
Hyperformix is expanding the capabilities of its Capacity Manager and Data Manager software products to better support virtualized environments. The software offerings are designed to help IT administrators map out the performance and capacity needs of their virtualization initiatives, and also can enable them to better figure out such business problems as server consolidation and application upgrades. The software supports a wide range of virtualization technologies from such vendors as VMware, Microsoft, Citrix, Sun, HP and IBM.
Hyperformix wants to make it easier for businesses to plan out their virtual environments.
Hyperformix is making enhancements to its Capacity Manager and Data Manager offerings that are designed to not only enable enterprises to more effectively map out the performance and capacity needs of their virtualization initiatives, but also to budget for the IT support that will be needed, find ways to reduce costs and extend the value of their current infrastructure.
“Our customers look to us to help them accurately plan and communicate what it will take to support business services in IT, and where cost-saving opportunities exist,” Bruce Milne, vice president of products and marketing for Hyperformix, said in a statement.
Capacity Manager 4.0 and Data Manager 3.1, announced May 5, can automatically identify underutilized virtual machines and systems that can be safely consolidated, according to Hyperformix officials.
The software also offers automated dashboards and reporting capabilities that enable IT administrators to more easily convey complex data to business users and identify cost-saving opportunities through the collection of data on such areas as hardware costs and power consumption.
In addition, the software products support virtualization technology from such vendors as VMware, Microsoft, Citrix Systems, Sun Microsystems, Hewlett-Packard and IBM, as well as modeling hardware, operating systems and other components.
Hyperformix also offers solution kits designed to help IT administrators and business users work through such issues as server consolidation and application upgrades.
The software is available for immediate download from Hyperformix.
HyTrust Looks to Build Community Around Virtualization
HyTrust, which launched as a company in early April with the Enterprise Edition of its namesake virtualization management appliance, is rolling out a free Community Edition aimed at SMBs. At the same time, HyTrust also is making a push to create a community around its technology to enable information sharing among users and to speed up its own product development. HyTrust’s technology currently manages VMware environments, though support of Xen and Microsoft’s Hyper-V is on the way.
A month after launching the company with a policy-based management appliance for virtualized environments, officials at HyTrust are now looking to build a community among its customers.
HyTrust May 5 announced that it is releasing a free community edition of its namesake appliance, aimed at giving SMBs a cost-effective way to get into virtualization and cloud computing. The HyTrust Appliance Community Edition, which is designed to give users a central control point for managing and monitoring virtualized environments, also is a tool that enterprises can use to get started, according to HyTrust officials.
The free community version offers the same functionality and features as HyTrust’s Enterprise Edition, but with limitations. For example, users can only have three protected hypervisor hosts.
At the same time, HyTrust is kicking off an online community designed to support its vision of a more easily managed virtualized environment, to create a repository that enables users to share information and give feedback to the company, and to help HyTrust direct its R&D efforts.
The Community Edition lets larger enterprises easily evaluate the capabilities of HyTrust’s technology, and give feedback on their findings. In turn, HyTrust will be able to speed up product development and innovations, officials said.
“The potential for the HyTrust Community is unbounded,” HyTrust CEO Eric Chiu said in a statement. “We see this not only as a terrific opportunity for HyTrust to meet currently unmet needs of the market, but also as a great way for HyTrust to harness the powers of distributed peer review.”
The Community Edition is available to members of the HyTrust Community.
HyTrust launched the company April 7 with the Enterprise Edition, which can be bought as a 1U appliance or as software that can run on the customer’s hardware.
HyTrust currently can manage VMware environments, though it will expand its reach to the Xen hypervisor from Citrix Systems later in the year, company officials said. The company also is working on products to support infrastructures using Microsoft's Hyper-V technology.
Cybernetics miSAN D iSCSI SAN Meets Basic Storage Needs at a Good Price
REVIEW: With miSAN D, Cybernetics establishes the right balance of price and features for the price-conscious iSCSI SAN shopper. Where other SAN manufacturers pack innumerable features into their products, Cybernetics focuses on providing only those features you’re likely to use: volume snapshots, internal RAID, full device redundancy and device-to-device replication.
If your data center is like most data centers, then you’re probably constantly shopping for storage systems and storage upgrades. Although the majority of businesses are slicing their IT budgets this year, recent surveys by Forrester have shown that storage (and security, to a lesser extent) spending is still growing. How can you balance growing storage needs with decreasing budgets, especially when you require a high-performance and fault-tolerant multiterabyte SAN solution?
The first way to narrow your storage area network search if you are price-conscious is to focus on iSCSI products. Sure, you’ll have to give up performance, but the overall savings (on drive arrays, network switches and personnel) provided by IP-based iSCSI look pretty attractive this year. In addition, using SATA (Serial ATA) rather than SAS (serial-attached SCSI) or SCSI drives also keeps the price down. Depending on your usage characteristics, you might not benefit from using the more scalable drive technologies, anyway.
Cybernetics goes to great lengths to establish the right balance of price and features for the price-conscious iSCSI SAN shopper. We’ve all heard about the 80/20 rule: 80 percent of users only need 20 percent of a typical product’s features. Cybernetics takes this to heart with the miSAN D. Where other SAN manufacturers, such as Xiotech, pack innumerable features into their products, Cybernetics focuses on providing only those features you’re likely to use: volume snapshots, internal RAID, full device redundancy and device-to-device replication. Then they add a few valuable features to the mix, such as integrated agentless backup and complimentary tech support. The price for the units as tested is about $16,000.
The test units arrived at the lab in excellent condition and boxed very well in purely recyclable packaging.
Installation could not have been easier. There is a helpful sticker with a map of available ports on the top of each unit. From each unit, I connected two 1G-bps Ethernet ports to a switch for data transmission and one 1G-bps Ethernet port to a separate switch for management. I then connected the two devices directly to each other for failover heartbeat using two 1G-bps Ethernet links. I fired up a Web browser from my management workstation, pointed it at the default management IP address, logged in and began configuring.
The streamlined browser-based management GUI is very easy to navigate and use. However, two things disappointed. First, I was not forced to change the default login credentials. This is not the end of the world, because this is not an externally facing system and is therefore unlikely to be attacked; as a security guy, however, it is something I notice. Second, there was a complete lack of help within the management GUI. The units did arrive with CDs containing complete documentation in PDF format, and Cybernetics did everything to keep complexity down, so this is forgivable. But it’s still worth mentioning.
There is only one management account for the entire unit. This is OK in situations in which there is only one storage administrator, but organizations with multiple storage admins will be disappointed by the lack of multiple admin accounts and the accompanying lack of an audit trail for each admin. Likewise, reporting is very basic—pretty much limited to whether the unit is on or not and how much data has been written and read during the last one, three or seven days. All other usage statistics must be obtained through the connected servers’ operating system.
The first thing I did during tests was configure the two units for failover. I designated one system the master and one the slave, then indicated that failover should happen instantly upon fault detection. (Other choices include after 5 or 30 seconds.) I subsequently verified that failover worked properly: When I pulled the plug on the master, the slave became the new master in milliseconds.
I easily created virtual disks and exposed them to my test server. Each virtual disk can have its own snapshot policy or follow the global snapshot policy. I scheduled snapshots to occur at regular intervals. (Options include every X minutes in 15-minute increments, or at a specific day and time.)
The miSAN D excels at built-in archive and backup functionality. Volumes can be configured to replicate snapshots on a regular schedule to other units across WAN links for business continuity and disaster recovery purposes. Individual snapshots can be copied onto media connected directly to the unit via USB. A distinctive feature is that snapshots can be migrated to tape almost transparently. I connected a Cybernetics CY-L881 tape drive to an external SCSI port on the master unit and configured regular backup jobs using the management GUI in minutes.
I measured performance using Iometer 2006.07.07 on a Lenovo RD120 running Windows Server 2003 EE with the Microsoft iSCSI initiator. The server had two 1G-bps NICs, so I was able to use the iSCSI initiator’s MPIO feature to round-robin load balance traffic to increase throughput.
I saw a huge variation in results depending on the block size I used during performance testing. When the miSAN D was able to handle all the Iometer traffic in cache (it has a 4GB read/write cache, which is unusually high for this class of device), performance approached the range of 250 to 300MB per second.
With the miSAN D configured for RAID 0, I launched three Iometer threads and ramped up three threads at a time to a maximum of 24 threads, performing a 50/50 percent sequential/random, 33/67 percent write/read mix with a 64KB workload size. Performance peaked at 1,072 IOps and 67MB per second, at which point average response time was 42 ms. When I reconfigured the unit for RAID 5 (and a spare drive) and duplicated the test, performance peaked at 1,092 IOps and 68MB per second. At this point, average response time was just over 18 ms. This is adequate performance when requests hit the drive array and excellent performance when data can be served from cache.
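As a quick sanity check on those numbers, throughput is simply the product of IOps and workload size. The short calculation below, written as an illustrative Python snippet rather than anything from the test harness used in this review, reproduces the peak figures reported above.

# Back-of-the-envelope check of the Iometer figures reported above:
# throughput = IOps x block size. The IOps values are the review's own.
BLOCK_SIZE = 64 * 1024            # 64KB workload size, in bytes

for label, iops in (("RAID 0", 1072), ("RAID 5", 1092)):
    mb_per_sec = iops * BLOCK_SIZE / 2**20   # bytes per second to megabytes per second
    print(f"{label}: {iops} IOps x 64KB ~ {mb_per_sec:.0f}MB per second")

# Prints 67MB per second for RAID 0 and 68MB per second for RAID 5,
# matching the peak throughput measured during testing.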
If your data center is like most data centers, then you’re probably constantly shopping for storage systems and storage upgrades. Although the majority of businesses are slicing their IT budgets this year, recent surveys by Forrester have shown that storage (and security, to a lesser extent) spending is still growing. How can you balance growing storage needs with decreasing budgets, especially when you require a high-performance and fault-tolerant multiterabyte SAN solution?
The first way to narrow your storage area network search if you are price-conscious is to focus on iSCSI products. Sure, you’ll have to give up performance, but the overall savings (on drive arrays, network switches and personnel) provided by IP-based iSCSI look pretty attractive this year. In addition, using SATA (Serial ATA) rather than SAS (serial-attached SCSI) or SCSI drives also keeps the price down. Depending on your usage characteristics, you might not benefit from using the more scalable drive technologies, anyway.
Cybernetics goes to great lengths to establish the right balance of price and features for the price-conscious iSCSI SAN shopper. We’ve all heard about the 80/20 rule: 80 percent of users only need 20 percent of a typical product’s features. Cybernetics takes this to heart with the miSAN D. Where other SAN manufacturers, such as Xiotech, pack innumerable features into their products, Cybernetics focuses on providing only those features you’re likely to use: volume snapshots, internal RAID, full device redundancy and device-to-device replication. Then they add a few valuable features to the mix, such as integrated agentless backup and complimentary tech support. The price for the units as tested is about $16,000.
The test units arrived at the lab in excellent condition and boxed very well in purely recyclable packaging.
Installation could not have been easier. There is a helpful sticker with a map of available ports on the top of each unit. From each unit, I connected two 1G-bps Ethernet ports to a switch for data transmission and one 1G-bps Ethernet port to a separate switch for management. I then connected the two devices directly to each other for failover heartbeat using two 1G-bps Ethernet links. I fired up a Web browser from my management workstation, pointed it at the default management IP address, logged in and began configuring.
The streamlined browser-based management GUI very easy to navigate and use. However, there were two things that disappointed. First, I was not forced to change the default login credentials. This is not the end of the world, because this is not an externally facing system and is therefore unlikely to be attacked. However, as a security guy, this is something I notice. Second, there was a complete lack of help within the management GUI. The units did arrive with CDs containing complete documentation in PDF format, and Cybernetics did everything to keep complexity down, so this is forgivable. But it’s still worth mentioning.
There is only one management account for the entire unit. This is OK in situations in which there is only one storage administrator, but organizations with multiple storage admins will be disappointed by the lack of multiple admin accounts and the accompanying lack of an audit trail for each admin. Likewise, reporting is very basic—pretty much limited to whether the unit is on or not and how much data has been written and read during the last one, three or seven days. All other usage statistics must be obtained through the connected servers’ operating system.
The first thing I did during tests was configure the two units for failover. I designated one system the master and one the slave, then indicated that failover should happen instantly upon fault detection. (Other choices include after 5 or 30 seconds.) I subsequently verified that failover worked properly: When I pulled the plug on the master, the slave became the new master in milliseconds.
I easily created virtual disks and exposed them to my test server. Each virtual disk can have its own snapshot policy or follow the global snapshot policy. I scheduled snapshots to occur at regular intervals. (Options include every X minutes in 15-minute increments, or at a specific day and time.)
The miSAN D excels at built-in archive and backup functionality. Volumes can be configured to replicate snapshots on a regular schedule to other units across WAN links for business continuity and disaster recovery purposes. Individual snapshots can be copied onto media connected directly to the unit via USB. A distinctive feature is that snapshots can be migrated to tape almost transparently. I connected a Cybernetics CY-L881 tape drive to an external SCSI port on the master unit and configured regular backup jobs using the management GUI in minutes.
I measured performance using Iometer 2006.07.07 on a Lenovo RD120 running Windows Server 2003 EE with the Microsoft iSCSI initiator. The server had two 1G-bps NICs, so I was able to use the iSCSI initiator’s MPIO feature to round-robin load balance traffic to increase throughput.
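Round-robin MPIO simply alternates outstanding I/O across the two iSCSI sessions so both NICs carry traffic. A toy sketch of that dispatch policy, with made-up path names and nothing resembling the Microsoft initiator's internals, looks like this:

```python
from itertools import cycle

# Two iSCSI sessions, one per 1G-bps NIC (the names are illustrative only).
paths = cycle(["nic0-session", "nic1-session"])


def submit_io(request):
    """Dispatch each request to the next path in turn, spreading load across both links."""
    path = next(paths)
    print(f"sending {request} via {path}")
    return path


for block in range(4):
    submit_io(f"read-block-{block}")
```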
I saw a huge variation in results depending on the block size I used during performance testing. When the miSAN D was able to handle all the Iometer traffic in cache (it has a 4GB read/write cache, which is unusually large for this class of device), performance reached 250MB to 300MB per second.
With the miSAN D configured for RAID 0, I launched three Iometer threads and ramped up three more at a time to a maximum of 24 threads, running a 50/50 sequential/random, 33/67 write/read mix with a 64KB workload size. Performance peaked at 1,072 IOPS and 67MB per second, at which point average response time was 42 ms. When I reconfigured the unit for RAID 5 (with a spare drive) and repeated the test, performance peaked at 1,092 IOPS and 68MBps, with an average response time of just over 18 ms. This is adequate performance when requests hit the drive array and excellent performance when data can be served from cache.
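Those peak figures are internally consistent: throughput is roughly IOPS multiplied by the 64KB transfer size. A quick back-of-the-envelope check, not additional test data:

```python
def mb_per_sec(iops, transfer_kb=64):
    """Throughput in MB/s for a given IOPS rate and fixed transfer size in KB."""
    return iops * transfer_kb / 1024


print(mb_per_sec(1072))   # ~67 MB/s, in line with the RAID 0 result
print(mb_per_sec(1092))   # ~68 MB/s, in line with the RAID 5 result
```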
Intel, AMD Tout Windows 7 Compatibility
Questions over which Intel chips support the XP Mode feature in Microsoft's upcoming Windows 7 operating system have led to a rare debate over processor-operating system compatibility. Engineers with AMD and Intel work closely on an ongoing basis with those from Microsoft, so that by the time an OS is ready to ship, there are few, if any, compatibility issues between the chip and software, the companies said.
The current question of which Intel processors support the XP Mode feature in Microsoft's upcoming Windows 7 operating system shines a light on a relatively rare issue involving processor-OS compatibility.
Windows 7's XP Mode will let users run Windows XP-based applications in the Enterprise, Professional and Ultimate versions of the new operating system. The offering is another incentive for users to migrate from their older Windows operating systems to Windows 7, which is due out in early 2010, although many industry observers expect it to launch in 2009 before the holiday season.
John Spooner, an analyst with Technology Business Research, said XP Mode is the perfect example of how virtualization technology can be used in desktops and laptops.
"That is what's great about desktop virtualization," Spooner said. "In this case, it lets you run your older [XP] applications on Windows 7."
The issue that has come up in recent days is that not all Intel chips—or those from Advanced Micro Devices, for that matter—offer the hardware-based virtualization technology that is needed to take advantage of XP Mode.
This will particularly hit consumers, who are more likely than businesses to buy laptops and desktops powered by lower-end processors, which tend not to have the chip makers' virtualization technology. Not putting in the virtualization technology enables chip makers to keep down the cost of those low-end processors, Spooner said.
Almost all AMD chips except those in the low-end Sempron line offer the AMD-V virtualization technology, said Margaret Lewis, director of commercial solutions at AMD.
As for Intel, most of its enterprise-level processors offer Intel VT virtualization capabilities, said spokesperson George Alfs. Intel introduced the technology in 2005 and has shipped more than 100 million chips with the feature since then.
In a statement, Intel said, "Windows XP Mode is targeted for business customers. It is available on the mid- to higher-end versions of Windows 7 and is supported in hardware by many Intel processors. Intel vPro technology PCs are required to have an Intel VT-capable CPU and Intel VT-capable BIOS. They are the best platforms for testing and deploying Microsoft Windows Virtual PC and Windows XP Mode."
Alfs said users can look up which features are in which processors on Intel's Web site, which also offers a way for users to test their own systems.
The AMD and Intel virtualization technologies are controlled through BIOS settings, and AMD's Lewis said many system makers ship their machines with the feature turned off.
While the issue of whether a chip has the virtualization technology to support XP Mode is primarily a consumer problem, and can be determined quickly enough, determining whether the virtualization capability is turned on or off could become a headache for some businesses, Spooner said.
"For a lot of systems, [making that determination] is going to require a desktop visit [by an IT staff member], which can be expensive," he said. "But I'm not sure if there is a remote way to figure that out."
Lewis said the issue highlights that area where "hardware and software really touch," and where chip makers work closely with Microsoft to ensure compatibility.
Both she and Alfs said compatibility with Windows is rarely an issue. AMD and Intel engineers work closely on an ongoing basis with their Microsoft counterparts not only as the operating system is being developed, but also as the chip makers lay out plans for future processors and architectures, they said.
Alfs said Microsoft builds its Windows OS on the x86 architecture, and the long beta testing cycle the software maker undertakes ensures close compatibility with Intel hardware designs. In addition, Intel gives Microsoft a long view of its product development plans. For example, Intel engineers already are sharing information with Microsoft about "Sandy Bridge," the chip architecture that will replace "Nehalem" sometime in 2010 and will offer such features as on-chip graphics technology and the AVX instruction set.
"These are chips and platforms that are not even on the market yet," Alfs said.
Lewis said in most cases, the hardware is given to software makers, who then ensure that their offerings are compatible. For example, Windows 7 will take advantage of AMD's RVI (Rapid Virtualization Indexing), which enables better management when hypervisors, a guest OS and applications are involved, she said.
"The hardware is presented to the software [makers], and then they put their magic into it," Lewis said.
Citrix Unveils New, Simpler Enterprise Solutions
Citrix this week unveiled a plethora of new products, applications and services aimed at helping solution providers, their customers and even consumers achieve greater control over their virtualized environments.
During a Webinar and press briefing from the vendor’s annual Synergy 2009 users conference in Las Vegas, vice president and chief marketing officer Wes Wasson detailed Citrix’s new and upgraded lineup of software and services, which he says are targeted toward a do-it-yourself model much like that of broadcast media and consumer applications such as Apple’s iTunes.
“We want to turn data centers into ‘delivery centers,’ and are modeling this vision on broadcast media,” Wasson told Webinar attendees.
“Broadcast media and consumer-focused companies like Apple offer a massively consistent service free of IT encumbrances—for the most part, they don’t care about what kind of TV you have, what content you want to watch or listen to, it’s 100 percent end-user driven,” Wasson says. That’s the experience Citrix hopes to make ubiquitous across the enterprise, he says, meaning that users can expect software, services and applications to be always-on with consistent quality of service and speed.
At the top of Wasson’s list of new products is Dazzle, an online application store for enterprise developers, designed along the iTunes model; Citrix Receiver, a new mobile device application that enables solution providers and end users to administer enterprise applications from anywhere; and a major upgrade to Citrix's Essentials virtualization management package for XenServer hypervisor and Microsoft's Hyper-V.
Monday, May 4, 2009
Windows Server 2008 R2 RC Includes Important Improvements to Hyper-V Implementation
Windows Server 2008 R2 Release Candidate code was made available on April 30 to MSDN and TechNet subscribers. There are many changes to the operating system. Here, I've highlighted some of the most important improvements in the implementation of Hyper-V. For example, running guest systems can be configured on the fly to add or remove virtual hard drives. I tested using the newly released Windows 7 Build 7100 along with the RC build of Windows Server 2008 R2. Changes in User Account Control, PowerShell and AppLocker are all significant enough that IT managers should brush up on their study skills and buckle down with the newest version of Microsoft's server operating system. Also look for changes in Group Policy, Active Directory, remote desktop services and deployment services. Even features that didn't change much in functionality usually have a new address; I spent a fair amount of time during my testing just poking through the user interface, looking for familiar landmarks that had been buried in new locations.
RingCube Takes On VMware, Citrix in Desktop Virtualization
RingCube is unveiling vDesk 2.0, the latest version of its desktop virtualization product. A key new offering within vDesk 2.0 is the Workspace Virtualization Engine, which is designed to make it easier for enterprises to manage, deploy and secure their desktop virtualization environments. It also is a key differentiator for RingCube in a competitive space that includes VMware and Citrix, RingCube officials say.
RingCube Technologies is rolling out the next generation of its vDesk desktop virtualization technology, including a new feature designed to improve the manageability and security around the offering.
RingCube’s vDesk 2.0, announced May 1, includes the company’s WVE (Workspace Virtualization Engine), which company officials say is a key differentiator in a highly competitive field that includes such companies as VMware and Citrix Systems.
It also comes the same week that Quest Software, at the Microsoft Management Summit in Las Vegas, announced it was integrating its Quest vWorkspace virtual desktop management offering with Microsoft System Center Virtual Machine Manager and Microsoft App-V (Application Virtualization) technology.
Doug Dooley, vice president of product management at RingCube, said the company is looking to separate itself from other vendors in the desktop virtualization space by coming out with solutions that don’t require a lot of upfront costs or require a lot of duplicate Windows licenses.
VDI (virtual desktop infrastructure) solutions require high upfront costs—sometimes in the millions of dollars—and they bring with them more storage and power and cooling expenses, Dooley said. By comparison, a vDesk solution for 2,500 users runs around $500,000, he said.
In addition, mobility is an issue with VDI, Dooley said.
An eWEEK Labs analyst says there's no need to rush into VDI.
RingCube’s vDesk offering is designed to enable enterprise users to put the technology on their work PCs or on unmanaged systems, such as their home computers. When they turn on vDesk, it gives them a personalized virtual workspace, complete with their own settings, files, applications and desktop, Dooley said. The company’s MobileSync technology then lets users synchronize their vDesk workspace between PCs, USB drives or other portable media, a network file share or VDI environments.
RingCube’s WVE in vDesk 2.0 offers what Dooley called a lightweight virtual desktop, with an isolated network stack and support for such applications as endpoint security, databases and PC management software, which require drivers and security services.
Among the components of WVE are vDeskNet, which enables virtual networking by separating and isolating network traffic from the host PC, and virtual user management, which gives the virtual workspace a unique set of user accounts separate from the host PC.
The Virtual Security Store offers a separate storage area within the virtual workspace for such items as certificates, and Virtual Windows Services offer improved application isolation from the host machine.
Other security and isolation controls in vDesk 2.0 come through virtual workspace encryption via integration with third-party software, as well as a virtual networking stack that isolates all network traffic inside the virtual workspace from the host system.
The goal is to give users an easier and more secure way to run a virtual desktop environment, Dooley said.
“This thing is not the hardest thing to get your arms around as far as deployment is concerned,” he said.
The vDesk solution also offers improved management capabilities, enabling enterprises to create a single workspace, then give employees their own version of that master copy. There is also a more streamlined log-in process.
Dooley said businesses are beginning to take a hard look at desktop virtualization solutions, driven in large part by the need to reduce operating and capital costs and to improve business continuity.
“It’s so early in the [desktop virtualization space],” he said. “We are where we were with server virtualization five years ago.”
Dooley said he expects interest in desktop virtualization to grow, and sees Microsoft’s upcoming introduction of Windows 7 as a driver to get enterprises thinking more about their desktop environments.
“I don’t think people are going to stay on the status quo forever,” he said.
RingCube’s vDesk 2.0 is available immediately, starting at $200 per user. RingCube also will be showcasing the new offering at the Citrix Synergy show May 5-6 in Las Vegas.
Windows Server 2008 R2 Boasts Big Virtualization Improvements
eWEEK Labs' first take on the 64-bit-only Windows Server 2008 R2 RC shows that the update takes major steps forward, especially in the area of virtualization. However, moving existing Windows Server 2008 systems to the newest version will not be easy.
eWEEK Labs recently took a first look at the release candidate of Windows Server 2008 R2--which was made available to MSDN and TechNet subscribers on April 30--and found that the update offers major improvements, especially in the area of virtualization.
But the first thing you might notice is that the 32-bit version of the operating system is gone; Windows Server 2008 R2 is only available as a 64-bit OS. This in itself isn't a big deal, as nearly all CPUs from the last three to four years are 64-bit based. Also, 32-bit applications can run on 64-bit Windows.
In addition, for enterprise IT, where virtualization and big (commodity) iron rule the day, Windows Server 2008 R2's 64-bit architecture makes sense because virtualization is constrained more by memory than by CPU. At organizations where file and print servers are still predominant, Windows Server 2008 R2 may be overkill.
The update's biggest draw is improvement to Hyper-V, Microsoft's virtualization platform.
For example, Live Migration is a big improvement over Quick Migration, which has a reputation of not being as nimble as its name implies.
Take a look at images of Windows Server 2008 R2.
While Quick Migration uses Windows Server clustering to maintain application availability when the physical host server goes down, Live Migration can transparently move running guest systems from one node to another inside a failover cluster without dropping the network connection. (Failover clustering requires shared storage using either iSCSI or Fibre Channel SANs.)
Virtual machines also can now support hot plug-in and hot removal of both virtual and physical storage without rebooting the physical host system, and Hyper-V can now offload some processing to the physical host, including TCP/IP operations.
AMD and Intel both make hardware that is specially designed to assist virtualization technologies, including those found in Hyper-V. The latest example of this is the Intel Xeon 5500, or "Nehalem," processors. I'll be looking at how current-generation AMD and Intel server systems help boost the performance and capacity of virtualization tools as I work through a series of hardware reviews in the coming months. I'm not allowed to talk about some of the other improvements made in Hyper-V just yet, but expect to see extensive testing of these features soon.
Another compelling feature is AppLocker, which is also featured in Windows 7 and replaces the operating systems' Software Restriction Policies feature. (See eWEEK Labs' first look at Windows 7 RC here.) At first glance, AppLocker appears to increase administrator control over how users can access and use executable files, scripts and Windows Installer files. With AppLocker, administrators define rules based on file attributes such as product name, file name and file version.
Tough migration
There's no doubt that Windows Server 2008 R2 offers major improvements, but getting existing Windows installations to this most current release may be a drag.
Migrating from a 32-bit version of Windows Server 2008 or 2003 basically requires a number of migration tools followed by installation of Windows Server 2008 R2.
Microsoft makes available a Solution Accelerator to help the migration along, but, in my experience, where Solution Accelerators go, complexity and planning are sure to follow. This almost certainly means that IT managers should plan on seeing Windows Server 2008 R2 arrive on new equipment instead of attempting field upgrades of deployed production systems.
Stratus Enhances Disaster Recovery in Avance Software
In its Avance 1.5 offering, Stratus is adding upgraded disaster recovery and business continuity features. The high-availability virtualization software lets SMBs connect two x86 servers and run them in a synchronized fashion. With Avance 1.5, businesses can separate the two connected servers by up to 3 miles, protecting the data against localized disasters. Avance 1.5 also features easier management capabilities and iSCSI SAN support for Dell EqualLogic PS5000E storage offerings.
Stratus Technologies is adding business continuity, disaster recovery and administrative enhancements to its Avance high-availability software.
Stratus on May 4 is rolling out the latest version of Avance, which the company introduced a year ago to give SMBs a highly available virtualized environment platform for their x86 systems.
The software offers several advantages over cluster environments, including greater uptime, better protection against data loss and less complexity, said Lee Kaminski, Stratus’ product manager for Avance.
It also eliminates the need to have anyone on-site.
“We have the ability to completely manage Avance remotely [on both virtual and physical machines],” Kaminski said.
The software includes an embedded XenServer hypervisor and runs Windows and Linux operating systems. The software runs over two systems that are connected by an Ethernet link. Those nodes are synchronized and mirrored, and Avance manages the mirroring of business processes between them. If one system goes down, everything automatically fails over to the other system with no interruption.
“We handle that in an automated way,” he said.
Once the first node is repaired, it’s put back online and everything between the two nodes is synced up again.
In Avance 1.5, Stratus is offering its Split-site feature, which allows the paired servers to be separated by as much as 3 miles, protecting against data loss if there is some sort of disaster at one site, such as a fire, lightning strike, structural damage, hardware problems or theft.
“This is the first step toward disaster recovery,” Kaminski said. “We’re starting to move the boxes apart.”
In addition, the new release offers easier remote management features, such as a single-node installation capability, and iSCSI SAN (storage-area network) support for Dell's EqualLogic PS5000E storage offering. Dell is a reseller of Avance, he said.
Avance 1.5 also lets users boot a Windows virtual machine from a CD or virtual CD, and offers host-level RAID 0, 1 and 5 support and improved documentation.
Avance 1.5 is available now, starting at $5,000.
Hitachi Uses Aptare to Bring Its Storage Portfolio Up to Date
New software options will provide HDS' customers with a more extensive management view beyond the storage system, so they can better monitor and utilize current IT assets tied to business applications, virtualized servers and the data center, according to the company.
Hitachi Data Systems has partnered with storage optimization software provider Aptare in an initiative that modernizes its software portfolio.
HDS said April 28 that it has expanded its management software portfolio with three new offerings: the Hitachi Virtual Server Reporter, supplied by Aptare; Hitachi IT Operations Analyzer; and the Hitachi Storage Command Portal.
The new software options will provide HDS customers with a more extensive management view beyond the storage system, so they can better monitor and utilize current IT assets tied to business applications, virtualized servers and the data center, company officials said.
Aptare, based in Campbell, Calif., provides Web-based storage reporting and management software. It even has an application that allows a user to check a system's storage condition via an iPhone.
The Virtual Server Reporter provides an end-to-end view of virtualized VMware servers and their respective storage usage. The new software, designed for enterprise customers, provides storage reporting management in mixed-environment data centers down to control of individual virtual machines, company officials said.
By integrating the reporting functions of virtual servers, customers can more effectively manage their storage and backups, allowing them to obtain better utilization of storage assets and decrease costs, according to the company.
IT Operations Analyzer, aimed at midmarket customers, handles data center management by providing integrated monitoring of data center servers and IP networks, along with Fibre Channel SANs (storage area networks) and LANs.
The new reporting package features automated root cause analysis, network visualization, agentless architecture to support simple deployment, and a unified, intuitive Web-based interface.
IT Operations Analyzer is designed to streamline IT operations and improve customer service levels. Specialized training is not necessary, company officials said.
"Hitachi IT Operations Analyzer helps manage fast remediation for network issues and outages," said Mary Johnston Turner, an analyst at IT researcher IDC.
The Hitachi Storage Command Portal unifies storage reporting and provides a business-application view of the Hitachi storage environment.
IT Operations Analyzer is available as a 30-day free trial download from Hitachi's Web site.
Cassatt Preparing to Shut Its Doors, Report Says
Cassatt, the infrastructure management software company started by BEA Systems founder Bill Coleman, is running out of money, the victim of the global recession and moves by top-tier vendors such as IBM, HP, Dell and Sun in the cloud computing and converged data center arenas. Coleman says he has been shopping Cassatt around with little success and that the company's products may end up being sold in bankruptcy.
About six years ago, Silicon Valley mainstay Bill Coleman—at one time a Sun Microsystems executive, and later a founding member of BEA Systems—started Cassatt.
The company was founded to build software that helps enterprises manage huge and distributed infrastructure environments, and could have played a major role given the rise of cloud computing.
However, according to a published report, Cassatt is on its way out, the victim of the global recession and competition from larger players.
In an interview with Forbes.com April 27, Coleman said Cassatt is nearing the end of its existence, and that he has been looking for a buyer for several months without much success. He didn't name any company he had had talks with, although the Forbes report mentioned Google and Amazon.com having backed off quickly after initial contact. Coleman also said if a buyer isn't found, the company's assets could be sold in a bankruptcy proceeding.
Cassatt apparently has burned through more than $100 million over the past six years, and while some enterprises have shown an interest in the Cassatt Active Response software, few have moved beyond the testing phase.
"What frustrates me is my own naivete," Coleman told Forbes. "I thought I could give companies something radical that had a proven return on investment, and they would be willing to change all their companies' computer policies and procedures to get that. Right now it's hard to get people to get beyond proof-of-concept tests or a data center energy analysis."
Cassatt's impending demise comes at a time when cloud computing and converged data centers are becoming important trends in the industry. Top-tier vendors—including IBM, Dell, Hewlett-Packard, Cisco Systems, Sun Microsystems, Novell and VMware—have unveiled strategies designed to integrate server, storage, networking and software into a single data center entity, fueled in large part by virtualization.
In the same vein, those companies and others, such as Amazon.com and Google, are pushing compute clouds, both internal and public, as a way for businesses to increase their agility and flexibility while reducing operating and capital costs.
Wrapped around all this are management software initiatives from a host of large and smaller vendors designed to handle the increased complexity that these environments will create, similar to what Cassatt was trying to do.
Trend Micro Acquires Third Brigade for Data Center Security
Trend Micro has signed an agreement to buy Third Brigade to extend its data center protection strategy with virtualization and host intrusion prevention technologies. The deal is expected to close in the second quarter of 2009.
Trend Micro is extending its data center protection strategy with a planned purchase of server and application security vendor Third Brigade.
According to Trend Micro, the company is buying the business to accelerate its dynamic datacenter security strategy and to provide customers with access to critical security and compliance software and vulnerability response services. The two companies have had an OEM agreement in place for 18 months, with Trend Micro integrating Third Brigade’s intrusion prevention technology into Trend Micro OfficeScan.
“Not only is it the intrusion defense, intrusion detection/prevention capabilities, but also the Web application firewall, application control and reporting and inspection capabilities that Third Brigade brings to Trend Micro that we’re very excited about in dealing with the dynamic data center and the security challenge of those data centers,” said Steve Quane, president of the North America Business Unit and General Manager of SMB at Trend Micro. “We have a lot of enterprise data center customers coming to us asking us to take a leadership position in how both applications are protected and controlled, [and] also on how IDS and IPS technologies and firewalling all work in a virtual environment.”
That’s why the company is excited to extend its Trend Micro ServerProtect product lines with Third Brigade’s technology and leverage Third Brigade’s expertise in securing the data center, he explained during a conference call with analysts and media.
Paul Roberts, an analyst with The 451 Group, said the acquisition bolsters Trend Micro’s capabilities in some areas, host intrusion prevention for one, making it more competitive with McAfee and Symantec. Both those companies made acquisitions in the IPS space a few years back, he noted. More important and intriguing is Trend Micro's push into the data center, virtualization and cloud security space, Roberts said.
“Obviously, this is a vision that’s similar to the one (Symantec) articulated when it acquired Veritas, but stays focused on the core “threat protection” problem rather than recasting Trend as a security + storage vendor,” he opined. “What’s next? They’ll need to do more to develop their data protection story. That could presage some kind of investment in the encryption space. They could also double-down with some kind of database monitoring technology, as well as Web application firewall capabilities to address PCI and the major avenue to all that juicy data in your databases.”
Trend Micro officials said they will continue to develop along the lines of Third Brigade’s existing product road map and will continue to offer its stand-alone products for the near term as the companies integrate elements of their portfolios.
The acquisition is subject to certain approvals, and is expected to close in the second quarter of 2009.
Opalis Brings Cloud, Virtualization Automation to Microsoft System Center
Opalis is readying a set of IT process automation offerings designed to make it easier for users of Microsoft’s System Center management suite to adopt such data center technologies as virtualization, cloud computing and power management. The goal is to automate many of the tasks, policies and best practices for IT administrators using Microsoft’s management suite. The offerings also will give Microsoft greater capabilities as it looks to take on VMware in the virtualization space, Opalis officials say.
Opalis Software is bringing greater IT process automation capabilities to Microsoft’s suite of system management software.
At the Microsoft Management Summit in Las Vegas April 28, Opalis officials announced that their offerings for Microsoft System Center touch on such key data center areas as virtualization, cloud computing and power management capabilities.
The offerings will give System Center users greater ability to orchestrate the technologies within Microsoft’s software suite and automate the best practices for the management of their infrastructures.
Yale Tankus, senior vice president of marketing and business development for Opalis, said customers are demanding greater orchestration capabilities in both their physical and virtual environments, and that Opalis’ offerings within System Center will enable them to more quickly ramp up their virtualization and cloud computing efforts.
HP integrates server management into Microsoft suite.
For Microsoft, it gives the company some more capabilities in its ongoing attempt to gain leverage over VMware in the highly competitive virtualization space, Tankus said in an e-mail. For example, virtualization capabilities Opalis is bringing to Microsoft’s software suite include virtual lifecycle management, including self-serve provisioning, backup, restore and power management. Opalis’ solution—Virtual Service Management—will sit behind Microsoft Virtual Machine Manager, he said.
“So Microsoft is no longer behind VMware in this space,” said Tankus, adding that Opalis is demonstrating the prototype at the Microsoft show.
In the area of cloud computing, Opalis enables System Center users to move resources between private and public cloud environments, and to scale capacity up or down depending on events. They also can automatically fail over to cloud resources, ensuring no interruption to services.
The Opalis offerings also integrate with the System Center Operations Manager console and provide users with a rich set of ITIL (Information Technology Infrastructure Library) policies to find and fix problems. Throughout the process of dealing with an incident, the System Center Operations Manager alert is updated, which gives users greater visibility into the situation and how to deal with it.
The Opalis offerings for System Center will be generally available in the third quarter.
Opalis Software is bringing greater IT process automation capabilities to Microsoft’s suite of system management software.
At the Microsoft Management Summit in Las Vegas April 28, Opalis officials announced that their offerings for Microsoft System Center touch on such key data center areas as virtualization, cloud computing and power management capabilities.
The offerings will give System Center users greater ability to orchestrate the technologies within Microsoft’s software suite and automate the best practices for the management of their infrastructures.
Yale Tankus, senior vice president of marketing and business development for Opalis, said customers are demanding greater orchestration capabilities in both their physical and virtual environments, and that Opalis’ offerings within System Center will enable them to more quickly ramp up their virtualization and cloud computing efforts.
HP integrates server management into Microsoft suite.
For Microsoft, it gives the company some more capabilities in its ongoing attempt to gain leverage over VMware in the highly competitive virtualization space, Tankus said in an e-mail. For example, virtualization capabilities Opalis is bringing to Microsoft’s software suite include virtual lifecycle management, including self-serve provisioning, backup, restore and power management. Opalis’ solution—Virtual Service Management—will sit behind Microsoft Virtual Machine Manager, he said.
“So Microsoft is no longer behind VMware in this space,” said Tankus, adding that Opalis is demonstrating the prototype at the Microsoft show.
In the area of cloud computing, Opalis enables System Center users to move resources between private and public cloud environments, and to scale capacity up or down depending on events. They also can automatically failover to cloud resources, which ensure no interruption to services.
The Opalis offerings also integrate with the System Center Operations Manager console and provide users with a rich set of ITIL (Information Technology Infrastructure Library) policies to find and fix problems. Throughout the process of dealing with an incident, the corresponding System Center Operations Manager alert is updated, giving users greater visibility into the situation and how it is being handled.
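A minimal sketch of what that incident-handling loop might look like, again in Python for illustration only: the alert object and its methods are hypothetical and do not correspond to the Operations Manager SDK or to Opalis' graphical workflows.

```python
# Illustrative sketch only: automated remediation that keeps the monitoring
# alert updated at each step. The Alert object and helpers are hypothetical.

def remediate(alert):
    alert.append_note("Automation started: diagnosing affected service")
    service = alert.affected_service()

    if not service.is_running():
        alert.append_note("Service is down; attempting restart")
        service.restart()

    if service.is_running():
        alert.append_note("Service restored; resolving alert")
        alert.resolve()
    else:
        alert.append_note("Automated fix failed; escalating to an operator")
        alert.escalate()
```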
The Opalis offerings for System Center will be generally available in the third quarter.
IBM, Brocade Take Aim at Cisco
IBM for years has been reselling networking products from Brocade, but a new deal expands that partnership into the Internet networking space, an area traditionally dominated by Cisco Systems. The move comes weeks after Cisco announced its UCS data center initiative, which includes Cisco getting into the hardware business, and IBM's new deal with Brocade is seen as a way for IBM to bolster its own integrated data center solutions. Other hardware makers, including HP and Dell, may look for similar deals with smaller networking companies if the IBM-Brocade partnership is successful.
In a move seen to be targeting Cisco Systems, Brocade Communications Systems and IBM announced April 28 that they are expanding their partnership to include Ethernet switching and routing products.
IBM and Brocade said IBM will rebrand and resell Brocade's family of enterprise IP networking products, creating a strong presence in an area that has been led for years by Cisco.
The deal comes just weeks after Cisco announced it was pushing deeper into the data center with its Unified Computing System initiative, a strategy designed to create a more integrated data center solution that includes hardware, networking, storage and software aspects. Some of these products come from Cisco, while others come from partners such as VMware, EMC and Intel.
A key part of the UCS strategy was Cisco announcing that it would make its own blade servers powered by Intel's new Xeon 5500 series chips.
Such rivals as Hewlett-Packard and Sun Microsystems also have made aggressive pushes into the space, and Oracle could become a factor as well if it goes through with its planned $7.4 billion acquisition of Sun.
"This move follows many other data center consolidation stories we've already had this year (Oracle/Sun, Cisco UCS, etc.) and is being driven by the evolution to Anywhere IT," Zeus Kerravala, an analyst with Yankee Group, said in an e-mail. "It takes a relatively small vendor—Brocade—and gives them a huge distribution channel with IBM. The fact that IBM will be putting its own label on the product makes this much more than a typical reseller relationship."
It also puts pressure on Cisco, Kerravala said.
"OEM relationships are common in storage networking but not in IP networking," he said. "If this move is successful, it could open the door for other server vendors (Dell, Oracle/Sun, etc.) to OEM other smaller network vendors, further disrupting the market that Cisco has had a lock on for years."
IBM and other OEMs, including HP, have been using Brocade's networking equipment for years—and also have been partnering with Cisco. Among the Brocade products IBM already sells are the multiprotocol DCX Backbone SAN (storage area network) offering and Fibre Channel directors, as well as stand-alone and embedded switches, host bus adapters, and related software.
However, the new deal expands on that. Brocade in July 2008 announced it was buying Foundry Networks for $3 billion in a deal that gave it products for building Internet-based networks and made it a stronger competitor to Cisco. The Foundry deal closed in December 2008. Now IBM will resell IP networking products Brocade acquired from Foundry, including the NetIron and FastIron Ethernet routers and switches. Those IBM-branded products are expected to be launched in May.
Other Brocade products will be added to IBM's list over time, Brocade officials said in announcing the deal. IBM and Brocade also will work together on sales, marketing, training and support programs around the products.
Brocade officials say such OEM deals will play a key role in the company's future.
"This agreement with IBM underscores Brocade's long-term commitment to its OEM customers, a strategy we believe delivers the full promise of next-generation enterprise networking solutions in a pragmatic, nonproprietary way to protect customers' IT investments," Brocade CEO Mike Klayko said in a statement.
HP Integrates Server Management with Microsoft Suite
HP is integrating its Insight Control systems management software into Microsoft's System Center suite, a move HP officials say will bring simplicity to an increasingly complex data center environment. The ICE-SC move will enable HP ProLiant and BladeSystem users who have opted for Microsoft's management suite to take better advantage of the myriad hardware and software capabilities HP is offering through its Adaptive Infrastructure data center initiative. It also brings greater technical support, a key issue for HP customers.
Hewlett-Packard is integrating its Insight Control systems management software suite into Microsoft's System Center offering, the latest move by the systems vendor in its integrated data center initiative.
HP officials said the move to integrate the server management features of HP ProLiant and BladeSystem servers into Microsoft's consoles will give IT administrators greater visibility into and management of their data center environments. HP announced the integration April 28.
"The focus here is that we're reaching out to customers who have chosen Microsoft System Center as their overarching management system," said Jeff Carlat, director of marketing for HP's Infrastructure Software and BladeSystem business.
HP's Adaptive Infrastructure initiative is designed to improve performance and flexibility in the data center while reducing complexity and costs. The company has kept up a steady drumbeat of announcements around the strategy, most recently introducing its BladeSystem Matrix, an all-in-one package that combines server, storage, networking and software, linked through HP's Virtual Connect technology.
Competitors such as IBM, Dell, Cisco Systems and Sun Microsystems have unveiled similar initiatives around the idea of converged data centers.
Carlat said the integration move with Microsoft is another step in simplifying the data center environment, where such technologies as virtualization and advanced networking capabilities are making things more unwieldy for IT administrators.
"We are seeing a high level of complexity popping up in the data center," he said.
Integrating the HP management capabilities into Microsoft's System Center—in what HP is calling its Insight Control suite for Microsoft System Center, or ICE-SC—will let ProLiant and BladeSystem users who have opted for Microsoft's software suite take advantage of the capabilities HP has to offer, Carlat said.
Until now, HP, like most of its competitors, offered only basic integration with Microsoft System Center, Carlat said. However, the deeper integration of Insight Control means IT administrators can make better use of such HP technologies as the myriad sensors in the new ProLiant G6 servers—released in March in conjunction with Intel's rollout of its new Xeon 5500 series chips. Through the System Center console, users can monitor the information coming in from the sensors.
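For a rough sense of the telemetry involved, here is a hypothetical illustration of reading hardware sensor data through standard CIM/WMI classes on Windows; it is not HP's or Microsoft's actual integration. It assumes the server vendor's management providers populate CIM_NumericSensor instances, and on many machines the query will simply return nothing.

```python
# Hypothetical illustration: polling CIM sensor readings on a Windows host.
# Uses the third-party "wmi" package (pip install wmi); Windows only.
import wmi

conn = wmi.WMI(namespace=r"root\cimv2")  # a vendor-specific namespace may apply instead
for sensor in conn.query("SELECT * FROM CIM_NumericSensor"):
    # Per the CIM schema, CurrentReading is scaled by 10 ** UnitModifier
    modifier = sensor.UnitModifier or 0
    value = (sensor.CurrentReading or 0) * (10 ** modifier)
    print(f"{sensor.Name}: {value}")
```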