Thursday, July 23, 2009

IBM Expands Relationships with Cisco, Juniper and Brocade

IBM is building on its partnerships with networking vendors Cisco Systems, Juniper Networks and Brocade Communications Systems in a push to advance its vision of a more integrated data center environment. The partnerships with Cisco, Juniper and Brocade range from OEM relationships to reseller deals. The announcement also is an indication of how IBM plans to differentiate itself from Cisco and Hewlett-Packard in a converged data center, with IBM relying more on offering customers flexibility and strong management software.


IBM is expanding its partnerships with networking vendors Cisco Systems, Juniper Networks and Brocade Communications Systems in a move that should increase networking options for customers.

The enhanced partnerships, which include OEM and reseller agreements, are part of a larger strategy called the Data Center Networking initiative that was kicked off about two years ago, as IBM saw the need to reintegrate servers, storage devices and networking technology within the data center.

The deals, announced July 22, also are an indication of how IBM is going to differentiate itself from rivals such as Cisco and Hewlett-Packard in the push to offer more converged data center solutions.

With Brocade, IBM is offering its first FCOE (Fibre Channel over Ethernet) products in the form of the IBM Converged Switch B32 and a 10 Gigabit Ethernet Converged Network Adapter for its System x x86 servers. Those devices will be manufactured by Brocade, an expansion of the OEM relationship between the two companies for Fibre Channel and Ethernet offerings. The products are available immediately.




FCOE also is a factor in IBM's growing relationship with Cisco. Through the new deal, IBM's Systems and Technology Group sellers and partners will be able to resell Cisco's Nexus 5000 Series switches, which support 10G Ethernet, Fibre Channel and FCOE. These products will be available through IBM and its resellers starting in September.


IBM also is entering into an OEM agreement with Juniper, with which IBM has had a reseller agreement. Under the new deal, IBM will rebrand and resell certain Juniper EX and MX switches and routers.

Jim Comfort, vice president of enterprise initiatives for IBM, said the new and enhanced deals will give IBM customers greater choice and flexibility as they look to update their data centers to handle the expected growth in traffic due to Web 2.0 technologies, the rise of cloud computing and other technological trends.

IBM will offer these networking devices with its own server and storage products, and will differentiate itself with its management capabilities through its Tivoli and Director software suites.

FCOE is a key standard that is emerging as the trend toward more converged data centers continues, Comfort said in an interview. IBM envisions a scenario of tightly integrated server, storage and networking devices that IT administrators handle through "very powerful management [software]," he said.

The variety of networking and other products within these integrated data center "pods" is also a key differentiator for IBM in comparison with what rivals are doing, Comfort said. Both Cisco and HP have rolled out all-in-one data center offerings that include servers, storage, networking and management software in a single package.

Cisco kicked off its UCS (Unified Computing System) strategy in March, a move that signaled a more expanded role in the data center. HP soon followed with its HP Matrix all-in-one offering.

Having options is important to customers, Mike Banic, vice president of product marketing for Juniper's Ethernet Platforms Business Group, said in an interview.

"Juniper always uses standard [technology]," Banic said. "That ability to offer choice and flexibility in conjunction with IBM is important."

Juniper's products are designed to increase networking capabilities while driving down costs in the data center, he said, adding that the company's offerings can drive down capital expenditures by as much as 68 percent, power and cooling costs by 43 percent each, and space used by 34 percent.

IBM's Comfort said helping businesses decrease operating costs also was key to the move toward a more integrated data center environment.

AMD Ships 500 Millionth Processor

AMD, coming out of a somewhat disappointing second quarter, announced the shipment of the 500 millionth x86 processor during the company’s 40 years in operation. To celebrate, AMD is running a contest in which customers can win one of four HP Pavilion notebooks. The announcement comes just days after AMD reported a $330 million loss, and just over a week after rival Intel posted strong second-quarter numbers.


Officials with Advanced Micro Devices are celebrating the shipment of the 500 millionth x86 processor.

The announcement is part of AMD’s ongoing celebration of the company’s 40th year in business.

AMD is marking the milestone by giving customers the chance to win one of four Pavilion dv2z ultra-thin notebooks from Hewlett-Packard. To win, customers need to follow AMD on Twitter (@AMD_Unprocessed), where a new question will be posted every other Monday beginning July 27. They can then send the answer through a direct message to AMD's Twitter account. Eligible respondents will be entered into a drawing for the HP notebooks.





The announcement of the 500 millionth chip shipped comes days after AMD officials announced second-quarter earnings that included a $330 million loss on revenue of $1.18 billion. The numbers beat analyst estimates, but were somewhat disappointing compared with the earnings rival Intel posted a week earlier.

Intel on July 14 announced a $1 billion profit on $8 billion in revenue, though that profit swung to a $398 million loss when the European Commission’s $1.45 billion antitrust fine was factored in.

European regulators levied the fine in May, saying that Intel unfairly used its market dominance to try to quash competition from AMD through rebates and discounts given to OEMs. Intel officially appealed the fine July 22.

Despite the second-quarter numbers, AMD has been on a roll in recent months. The chip maker in June rolled out its six-core “Istanbul” Opteron processor, some five months ahead of schedule, and officials have said the company will hit the timetables for other processors laid out in its product roadmap.

AMD was tripped up by product delays and technical glitches in its quad-core “Barcelona” Opterons, but operational changes have since allowed the chip maker to better stay on schedule, particularly with its “Shanghai” and Istanbul offerings.

During the earnings call July 21, CEO Dirk Meyer said AMD will focus on its next-generation Opterons, and that by the fourth quarter, most of the server chips will be produced through the company’s 45-nanometer manufacturing process.

AMD also is scheduled to launch new platforms for laptops in the third quarter, including a new platform for what the company calls "thin and light" notebooks.

Graphics technology, based on AMD’s ATI graphics business, also will be a focus of the second half of 2009.

Windows 7 Release May Put the Brakes on Apple Enterprise Growth

News Analysis: Apple's performance is besting the top companies in the tech industry. Apple's market share, buoyed by sales of iPhones and iPods as well as Macintosh PCs, has grown substantially. Enterprises' rejection of Vista may have made it easier for Macs to infiltrate corporate offices. But that could change when Windows 7 is released later this year.


Apple announced its quarterly financial data Tuesday and once again, the company is performing extremely well.

According to Apple, its quarterly profit has risen to $1.23 billion, representing a 12 percent gain year-over-year. It beat Wall Street estimates on revenue and earnings per share. Once again, the company is one of the most profitable firms in the tech industry.

Apple's success is partly due to its vision. The company wasn't content to simply offer computers, so it analyzed the space and delivered compelling products that appeal to consumers across a wide array of markets. There's no debating that Apple has achieved its success in no small part because of the consumer appeal its products provide.



But is that all? Is Apple enjoying this success solely because of its own vision? It's debatable. A quick glance at the company's financial data tells a slightly different tale: since the release of Windows Vista, Apple has been far more profitable than it was when XP was leading the charge. Granted, that's partly due to the success of the iPod and the release of the iPhone, but is there more to it than meets the eye?

Windows Vista was a nightmare for Microsoft. Designed to be the follow-up to Windows XP and the operating system to carry the Microsoft banner going forward, it failed in the enterprise. Most companies opted to stick with Windows XP out of fear that Vista's hardware requirements were too great. Worse, it suffered from compatibility issues when it was released, causing headaches for some companies when mission-critical applications stopped working on the new operating system.

It got so bad that Dell, HP, and other major vendors gave users the option to exercise "downgrade" rights, allowing customers to buy a Vista PC, but have the vendor install Windows XP instead.

The enterprise had two options after Vista was released: stick with outdated hardware until Windows 7 hit store shelves or venture into uncharted territory by buying Macs and deploying Mac OS X network-wide. For some firms, the latter option was impossible -- they were using applications that only worked with Windows. But other firms weren't tied down to a single operating system and opted instead to try out the Apple products. Since then, Apple's market share has grown consistently.

At the same time, Apple's iPod and iPhone business has grown, as well. Even consumer market share has grown in the same period. Part of that might be due to Windows Vista and Microsoft's many false starts.

But if there is a correlation between Mac sales and Windows Vista, wouldn't there also be a correlation between Mac adoption and Windows 7's success?

Windows 7 probably won't stop Apple's rise in the consumer space. The iPod and the iPhone are contributing heavily to its success and not even Windows 7 can stop that. But in the enterprise, it's entirely different. Those companies that moved to Mac OS X or are considering deploying Apple's operating system might need to think twice. Windows is still the leader in the enterprise for good reason. Unlike Mac OS X, Windows is the operating system platform for almost every software package designed for businesses. It's a more business-friendly operating system. Apple's Mac OS X doesn't enjoy those same benefits.

In the end, it's Windows 7 and its value that will dictate how well Mac OS X will perform in the enterprise going forward. If Windows 7 can live up to the hype, Apple's growth in the enterprise will be stymied. Companies that had thought about getting new hardware to replace their outdated XP computers will need to choose between Windows 7 and Mac OS X. As long as Windows 7 ships to the enterprise with as much value as Microsoft has promised, Mac OS X won't be the chosen operating system. Microsoft will be able to return to absolute dominance in the enterprise.

As long as Microsoft releases operating systems that don't quite match the requirements of the enterprise, companies will think twice about deploying Mac OS X. That's why Windows 7 is so important. If it can live up to its promise, companies will adopt it, they will opt for an HP, Lenovo, or Dell PC instead of a Mac, and Apple's growth in the enterprise will end.

Windows is an extremely powerful operating system. It dictates the enterprise market. It controls how companies do business. And, it seems, it plays a part in Apple's success. But with Windows 7 promising greater appeal than Vista, Apple might have already enjoyed its best days in the enterprise.

How to Maximize Performance and Utilization of Your Virtual Infrastructure

Most Fortune 1000 companies today are between 15 and 30 percent virtualized. There are still many obstacles to overcome in order to move more virtualization projects forward. The biggest virtualization challenge facing organizations is how to manage the virtual infrastructure. Here, Knowledge Center contributor Alex Bakman explains how IT staffs can dramatically improve performance and utilization efficiencies in their virtualization projects.

Organizations today are rapidly virtualizing their infrastructures. In doing so, they are experiencing a whole new set of systems management challenges. These challenges cannot be solved with traditional toolsets in an acceptable timeframe to match the velocity at which organizations are virtualizing. In a virtual server infrastructure where all resources are shared, optimal performance can only be achieved with proactive capacity management and proper allocation of shared resources.

The biggest challenge is finding the time, or the automated technology, to do this. Not allocating enough resources can cause bottlenecks in CPU, memory, storage and disk I/O, which can lead to performance problems and costly downtime events. However, over-allocating resources can drive up your cost per virtual machine, making ROI harder to achieve and halting future projects.

To address this, organizations should consider a life cycle approach to performance assurance in order to proactively prevent performance issues—starting in preproduction and continually monitoring the production environments. By modeling, validating, monitoring, analyzing and charging, the Performance Assurance Lifecycle (PAL) addresses resource allocation and management. It significantly reduces performance problems, ensures optimal performance of the virtual infrastructure and helps organizations to continually meet service-level agreements (SLAs).




The following are the five components of the PAL. These components allow organizations to maximize the performance and utilization of their virtual infrastructures, while streamlining costs and delivering a faster ROI.

Component No. 1: Modeling

Modeling addresses preproduction planning to post-production additions, as well as changes to the virtual infrastructure. With capabilities to quickly model thousands of "what if" scenarios—from adding more virtual machines to changing configuration settings—IT staff can immediately see whether or not resource constraints will be exceeded and if performance issues will occur. In this way, modeling provides proactive prevention.

Four common modeling scenarios are listed below (a simple capacity check for the first is sketched after the list):

1. Seeing the effect on resource capacity and utilization of adding a new host or virtual machine, or of removing existing ones.

2. Predicting what will happen when a host is suspended for maintenance or a virtual machine is powered down.

3. Pre-testing VMotion scenarios to make sure sufficient resources exist.

4. Assessing how performance will be affected if resource changes are made to hosts, clusters or resource pools.
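
To make the first scenario concrete, here is a minimal sketch of the kind of arithmetic such a modeling tool performs. It is illustrative only: the host figures, VM names and 80 percent threshold are invented for the example, and a real product would model many more resources and constraints.

# Hypothetical "what if" check: would adding a new virtual machine push a
# host past a utilization threshold? All figures are invented for this sketch.

HOST = {"cpu_ghz": 32.0, "memory_gb": 128.0}   # total host capacity
RUNNING_VMS = [
    {"name": "web01", "cpu_ghz": 4.0, "memory_gb": 16.0},
    {"name": "db01", "cpu_ghz": 8.0, "memory_gb": 48.0},
]
THRESHOLD = 0.80  # flag anything that would exceed 80 percent utilization

def what_if_add(new_vm, host=HOST, running=RUNNING_VMS, threshold=THRESHOLD):
    """Return projected utilization per resource if new_vm were added."""
    projection = {}
    for resource in ("cpu_ghz", "memory_gb"):
        used = sum(vm[resource] for vm in running) + new_vm[resource]
        utilization = used / host[resource]
        projection[resource] = (utilization, utilization > threshold)
    return projection

# Model adding a new guest before actually deploying it.
candidate = {"name": "app02", "cpu_ghz": 8.0, "memory_gb": 32.0}
for resource, (utilization, over) in what_if_add(candidate).items():
    print(f"{resource}: {utilization:.0%} {'EXCEEDS threshold' if over else 'ok'}")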

Component No. 2: Validating

While modeling "what if" scenarios is an important first step to continually ensuring optimal performance, it is equally important to validate that changes will not have a negative impact on infrastructure performance. Resource Library:




Validation spans the modeling and monitoring stages of the PAL, because it is equally critical to validate performance-impacting changes in preproduction and to continually monitor and validate performance over time. If you cannot validate whether a change will affect infrastructure performance negatively or positively, making that change carries significant risk.

Component No. 3: Monitoring

The ongoing monitoring of shared resource utilization and capacity is absolutely essential to knowing how the virtual environment will perform. When monitoring resource utilization, IT staff will know whether resources are being over- or underutilized. Not allocating enough resources (based on usage patterns and trends derived from 24/7 monitoring) will cause performance bottlenecks, leading to costly downtime and SLA violations. Over-allocating resources can drive up the cost per virtual machine, making ROI much harder to achieve.

By continually monitoring shared resource utilization and capacity in virtual server environments, IT can significantly reduce the time and cost of identifying current capacity bottlenecks that are causing performance problems, tracking the top resource consumers in your environment, alerting you when capacity utilization trends exceed thresholds, and optimizing performance to meet established SLAs.
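
As a rough illustration of that monitoring loop, the sketch below classifies virtual machines by comparing average observed utilization against upper and lower thresholds. The VM names, samples and thresholds are all invented; a production monitor would work from continuous 24/7 telemetry rather than a handful of samples.

# Hypothetical monitoring sketch: flag VMs whose average observed utilization
# suggests they are under-provisioned (bottleneck risk) or over-allocated.

SAMPLES = {
    # VM name -> recent CPU utilization samples (fraction of allocation used)
    "web01": [0.91, 0.95, 0.88, 0.97],
    "db01": [0.55, 0.60, 0.58, 0.52],
    "idle01": [0.04, 0.03, 0.05, 0.02],
}
HIGH, LOW = 0.85, 0.10  # thresholds for "add resources" / "reclaim resources"

def classify(samples, high=HIGH, low=LOW):
    """Label each VM based on its average utilization."""
    labels = {}
    for vm, values in samples.items():
        average = sum(values) / len(values)
        if average >= high:
            labels[vm] = "possible bottleneck - consider adding resources"
        elif average <= low:
            labels[vm] = "over-allocated - consider reclaiming resources"
        else:
            labels[vm] = "healthy"
    return labels

for vm, label in classify(SAMPLES).items():
    print(f"{vm}: {label}")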

Component No. 4: Analyzing

Proactive approaches based on trend and predictive analysis of the data being monitored can significantly reduce fear by providing ample warning (for example, alerting system administrators to potential problems as new conditions materialize). By knowing ahead of time what resource constraints may occur, IT can take the appropriate proactive measures to prevent the problems from happening—providing the necessary confidence to virtualize their critical applications.




Two layers of analysis can help deliver the information IT staffs need to instill confidence that their infrastructures will perform: trend analysis and predictive analysis.

Trend analysis

While real-time monitoring tools can show spikes in resource consumption, those spikes may not have a drastic impact on performance or may only be one-time events. Trend analysis based on 24/7 monitoring of resource utilization provides visibility into how the virtual server environment is performing over time. Is resource utilization trending higher, lower or staying the same? Is it necessary to add more capacity or is there room to safely add more virtual machines?

Predictive analysis

By leveraging trend analysis and running the data through sophisticated mathematical engines, future problems can be predicted. This allows IT to take preventive and proactive actions now. If you knew a certain number of days in advance that a problem might occur, you could prevent 90 percent of these performance problems from ever happening. Threshold alerts could be set to show that, in 30 days, a cluster will begin to run out of storage. By knowing about that issue today—as opposed to when it happens—actions can be taken now to proactively increase storage allocations and prevent the future problem.
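
A minimal sketch of that kind of predictive check follows, assuming daily storage-usage samples are already being collected. It fits a simple least-squares line and estimates the days remaining before a hypothetical cluster reaches capacity; the figures are invented, and real tools use far more sophisticated models.

# Hypothetical predictive-analysis sketch: fit a least-squares line to daily
# storage-usage samples and estimate how many days remain before a cluster
# reaches capacity.

daily_used_tb = [40.0, 40.6, 41.1, 41.9, 42.4, 43.0, 43.7]  # last seven days
capacity_tb = 60.0
alert_window_days = 30

def days_until_full(samples, capacity):
    """Estimate days until capacity is reached, or None if usage is not growing."""
    n = len(samples)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    slope = numerator / denominator  # average growth per day
    if slope <= 0:
        return None
    return (capacity - samples[-1]) / slope

remaining = days_until_full(daily_used_tb, capacity_tb)
if remaining is not None and remaining <= alert_window_days:
    print(f"ALERT: cluster projected to run out of storage in about {remaining:.0f} days")
else:
    print("Storage growth is within acceptable bounds")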

Sun, Fujitsu Launch Enhanced UltraSPARC Systems into Roiling Unix Market

Sun and Fujitsu are offering improved performance and virtualization features for their UltraSPARC-based servers through the adoption of new processors and LDoms virtualization software. The rollout comes at a time when uncertainty surrounds the Unix space, with Oracle buying Sun and delays in Intel’s next-generation Itanium chip. At the same time, IBM is paving the way for its upcoming Power7 platform.


Sun Microsystems and Fujitsu are rolling out enhanced UltraSPARC-based servers into a Unix market that could see continued shifting over the coming months.

Officials with Sun and Fujitsu July 21 boasted improved performance and virtualization capabilities in the systems, thanks to the addition of the 1.6GHz UltraSPARC T2 and T2 Plus processors and the latest release of Sun’s LDoms (Logical Domains) virtualization software, all of which are supported by Sun’s Solaris 10 operating system.

The enhancements enable enterprises to grow the performance and efficiencies of their data centers without having to increase their expenses, according to John Fowler, executive vice president of Sun’s Systems Group.

“We’ve got massive density already built in,” Fowler said in a statement. “It’s a great choice for both consolidation and the heavy lifting required by enterprise applications.”




The announcement comes at an interesting time for the Unix community. Oracle’s expected $7.4 billion acquisition of Sun—Sun shareholders approved the transaction July 16—brings into question the future of Sun’s Unix-based hardware portfolio, and Intel is still experiencing delays in releasing the next-generation “Tukwila” Itanium chip. Hewlett-Packard has standardized its high-end Integrity systems on Itanium, and it’s those Integrity systems that run HP’s Unix variant, HP-UX.


The announcement from Sun and Fujitsu also came the same day IBM began paving the way to its upcoming Power7 processor platform with the unveiling of an upgrade path from Power6, as well as a new virtualization management tool, called Systems Director VMControl. Power7-based IBM servers are expected to begin shipping in the first half of 2010.

“Unix systems customers currently face unprecedented uncertainties,” Charles King, an analyst with Pund-IT Research, said in a report issued July 22. “Some of those are competitive, with most of the pressure coming from below in the form of increasingly able x86/64-based solutions. New-generation processors designed to support particularly robust virtualization, such as Intel’s Xeon 5500 (Nehalem) chips, are likely to ratchet-up the pressure even higher.”

However, much of the uncertainty is coming from within the Unix space, King said.

“On the RISC side of the market, Oracle’s brewing acquisition of Sun Microsystems has many in the industry questioning the company’s plans for or dedication to Sun’s UltraSPARC technologies,” King wrote. “Even if Oracle supports Sun’s traditional platforms and solutions and customers (as CEO Larry Ellison insists it will), many people doubt Oracle’s ability to effectively run, let alone turn around, Sun’s troubled hardware business.”

Given all that, it looks as though IBM is in the best position among Unix vendors, he said, noting that IBM could make big gains in the Unix space by taking advantage of issues around rivals such as HP and Sun. And while x86-based systems continue to grow as an overall percentage of the global server market, Unix-based systems still accounted for 33 percent—about $3.3 billion—of overall server revenue in the first quarter of 2009, according to research firm IDC. That was up from 30.2 percent in the first quarter of 2008.

Despite the questions surrounding the future of Sun hardware, Fujitsu officials said they are seeing continued adoption of the UltraSPARC-based servers across a wide range of companies, from smaller startups to larger enterprises.

“With the enhancements we’re announcing … we will be able to offer customers even greater performance and virtualization capabilities,” Noriyuki Toyoki, corporate vice president at Fujitsu, said in a statement.

Through the combination of LDoms 1.2 and Solaris, businesses get built-in configuration tools for a more streamlined setup of LDoms, as well as CPU power management through the automatic powering off of processing cores not in use.

Other capabilities include greater support of jumbo frames, which let businesses send more data across the network at one time, dynamic migration of domains, built-in recovery through automatic LDoms backup, and a physical-to-virtual migration tool for businesses looking to move from existing legacy SPARC/Solaris systems to the newer CMT (chip multithreading) servers.

Emerging Markets a Key for IT Hardware Vendors: Gartner

In a recent survey, research firm Gartner found that larger enterprises in emerging markets such as Brazil, Russia, China and India were more likely to increase investments in IT hardware—including storage, servers, PCs and printing devices—than their counterparts in mature markets. They also are increasing investments in such areas as virtualization, green IT and—to a lesser extent—cloud computing.


Emerging markets hold a lot of promise for IT hardware vendors, according to research firm Gartner.

In a report issued July 22, Gartner analysts said that in 2009, IT hardware spending growth rates in emerging markets will be larger than those in more mature markets.

In addition, spending in emerging markets on virtualization and cloud computing technologies also will increase, they said.

The trends were found in the results of a survey of 951 IT professionals in large enterprises worldwide. The results should impact where IT hardware vendors and their channel partners put their money and efforts, according to Gartner analyst Luis Anavitarte.




“These survey results are very important for technology and service providers, not only because they validate where the IT growth trend is occurring in emerging markets, but also because they can guide planning and resource allocation processes,” Anavitarte said in a statement. “This should also have an impact on hardware vendors’ channel strategies addressing large enterprises, particularly in Brazil, Russia, India and China.”

Overall, the survey found that 66 percent of those responding said there either would be an increase in IT budgets this year or no change from last year.

In each of four categories—storage, servers, PCs and printing devices—a greater percentage of respondents in emerging markets said they were planning to increase spending on IT hardware or keep it the same, compared with their counterparts in mature markets.

For example, 33 percent of those in emerging markets said they planned to increase spending on storage hardware, compared with 27 percent in mature markets. In servers, 30 percent in emerging markets said they planned to spend more, compared with 27 percent in mature markets.

For PCs, the split was 32 percent to 19 percent, while it was 28 percent to 16 percent for printing devices.

In addition, the percentage of respondents saying their spending would remain the same as in 2008 was higher in each category for emerging-market enterprises than for those in mature markets.

In addition, 35 percent of respondents in emerging markets said they planned to increase investments in virtualization, 32 percent said they would do the same in green IT, and 7 percent said they’d invest more in cloud computing.

There are several reasons for this trend in emerging markets, including that many of these enterprises have IT plans in place that include renewing hardware, and that they have more financial resources than their SMB brethren and rely less on borrowing money to cover their IT operations.

In addition, according to Gartner, these larger enterprises more often play on an international stage and need to have top IT resources to compete on an international level.

Regarding virtualization, Gartner analysts are seeing more enterprises in emerging markets beginning to adopt the technology, and they expect interest in green IT to continue to grow in accordance with governmental regulations.

Cloud computing continues to be a new computing model in the emerging markets. According to the survey, half of the respondents in the emerging market organizations had not heard of cloud computing or had heard of it but didn’t know what it meant. Not all markets were like that, however. In Brazil, 28 percent of channels are delivering software as a service.

Microsoft Releases Windows 7, Windows Server 2008 R2 to Manufacturing

Microsoft announces that Windows 7 and Windows Server 2008 R2 have been released to manufacturing. In addition, Microsoft says it plans to release a family pack that would allow Windows 7 Home Premium to be installed on up to three PCs. Microsoft is presumably hoping that a high rate of adoption for its new products will improve its flagging finances.


Microsoft announced the release of both Windows 7 and Windows Server 2008 R2 to manufacturing on July 22.

The two platforms represent a major part of Microsoft's grand strategy, as it seeks to capitalize on technological trends such as virtualization that are rapidly changing the face of IT. They also present a substantial chance for revenue generation during a period when the company finds itself fighting a substantial economic headwind. The release-to-manufacturing announcement came one day before Microsoft's planned quarterly earnings call on July 23.

In a July 21 corporate blog entry, Microsoft also confirmed that it would release a "family pack" for Windows 7 Home Premium in certain markets, which will allow installation on up to three PCs.




"We have heard a lot of feedback from beta testers and enthusiasts over the last three years that we need a better solution for homes with multiple PCs," Brandon LeBlanc, a Windows communications manager at Microsoft, wrote in the blog entry.

For the enterprise, Windows Server 2008 R2 is designed to take advantage of Microsoft's Hyper-V technology in order to capitalize on the growing trend toward virtualization. The server's 64-bit architecture accommodates virtualization's greater memory needs, and the platform also includes features such as Live Migration, which can transparently move running guest systems between nodes inside a failover cluster without risk of dropping the network connection.


With Windows Server 2008 R2, virtual machines support hot plug-in and hot removal of both virtual and physical storage without the need to reboot the physical host system. Hyper-V also allows certain processing, including TCP/IP operations, to be offloaded to the physical host.

"We feel that this release specifically provides the catalyst for the customers who haven't embarked on the virtualization journey," Mike Schutz, director of product management for Microsoft's Windows Server Division, said in an interview with eWEEK. "Hyper-V does provide a low bar for entry as well as the ability to scale up to larger environments."

Microsoft is presumably hoping for quick adoption by businesses in order to provide a much-needed boost in revenue for the remainder of 2009. Earnings for the current quarter have been estimated at 36 cents a share on revenues of $14.37 billion, a 9.3 percent drop from the same quarter in 2008, when the company reported income of 47 cents a share on $15.84 billion of revenue.

In April, Microsoft posted its first-ever quarterly revenue decline, which saw its Windows-centric Client division's revenue drop by 16 percent and income by 19 percent year over year. If Windows 7 and Windows Server 2008 R2 are substantial hits, they could help reverse the downward trend, especially if consumers and businesses are compelled to engage in a tech refresh.

VMware Profits Down 36.5% but Execs Optimistic for Rest of 2009

Virtualization software maker VMware reports profits of $33 million, or 8 cents per share, compared with $52 million, or 13 cents per share, a year ago. Overall revenue was flat at $456 million, but VMware's CEO and CFO are optimistic that good numbers lie ahead.


Enterprise virtualization software kingpin VMware on July 22 reported a 36.5 percent falloff in its second-quarter profit from the same period a year ago, but CEO Paul Maritz and his fellow executives remained optimistic about the company's prospects for the next six months.

VMware, which is owned and operated as an independent subsidiary by storage giant EMC, reported profits of $33 million or 8 cents per share, compared with $52 million or 13 cents per share a year ago. Overall revenue was flat at $456 million.

It is widely estimated that VMware's hypervisor, which enables physical servers to be consolidated and carved up into virtual machines that are not bound by the underlying hardware, is used in about 80 percent of the world's enterprise IT systems.




"We managed to return a solid quarter, despite a very large product transition," Maritz said. "As far as our customers and ecosystem partners are concerned, it's been very positive.

"About 1,000 ecosystem partners, from very small ISVs to very large server vendors, have been working hard on getting their certifications for their products [on VMware's ESX and VSphere hypervisors] and releasing new products that use the VSphere foundation. This will all help build future bridges to the cloud. And it speaks to the product maturity we have."

VMware Chief Financial Officer Mark Peek told analysts and journalists on a conference call that VMware expects revenue to be slightly better at between $465 million and $480 million in the third quarter of 2009. Wall Street analysts' estimates are in the neighborhood of $474 million.

In after-hours trading July 22, VMware's stock price climbed about 7 percent to $33.60.

Cautious but optimistic

"Even though we remain cautious about the global economic conditions, we are beginning to get a somewhat better visibility into our business," Peek said.

Peek also said VMware expects revenue for its fiscal year 2009 to increase by 1 to 3 percent over its 2008 sales of $1.88 billion.

According to Peek, VMware's second-quarter services revenue increased by 32 percent from a year ago to $228 million, while license sales fell 20 percent, also to $228 million. A couple of major U.S. military-sector service deals were keys to the service revenue increase during the quarter.

rBuilder 5 Streamlines Linux-Based Appliance Deployment

The 5.0 version of rBuilder boasts several major new features. eWEEK Labs' tests of the platform, through Version 5.2.1, show that rBuilder makes it easier to churn out virtual machine images for immediate deployment, and that the Web-based management interface that rBuilder pairs with the appliances it creates is handy. However, Labs did run into some configuration issues, as well as some issues with the new Flash-based Web front end.


With its rBuilder 5.2.1, rPath aims to streamline the deployment and maintenance of application workloads by providing IT organizations with the tools to roll their applications into Linux-based software appliances that are ready to deploy on popular server virtualization platforms, cloud computing services or bare-metal systems.

Rather than manage the operating system, application and virtual container layers in separate processes, rBuilder enables organizations to fold these operations into a single system that pairs applications with "just enough" operating system components to meet their needs; that packages the application-plus-OS bundles into the formats required by various hosting platforms; and that keeps these appliances up-to-date with security and bug fix patches.

rBuilder Version 5.0, which was released in April, introduced several major new features, including additional Linux distribution options; a new, Flash-based interface; and a new management console through which administrators can directly manipulate appliances on various virtualization environments.

In my tests of rBuilder, which began with a pre-5.0 release of the product and ran through the current, 5.2.1 version, I was impressed by the ease with which I could churn out virtual machine images for immediate deployment on the Amazon EC2 and VMware ESX environments that I tapped for testing. I also appreciated the handy Web-based management interface that rBuilder pairs with the appliances it creates.

However, I found the process of getting my chosen applications configured properly much more complicated than the product's point-and-click graphical interface might suggest.

For my tests, I worked primarily with the Mediawiki application that powers Wikipedia--an application that I know can be implemented very well with rPath's tools because the company offers a freely available Mediawiki appliance for download from its site. The rPath-built Mediawiki appliance boasts an initial setup process that's folded into the appliance's Web management interface, and a slick backup option that covers both uploaded files and the Mediawiki database.




Building a Mediawiki appliance on my own was a much less streamlined affair. For example, while rBuilder managed to detect automatically and provide most of the OS dependencies that my test applications required, the product didn't catch everything on its own, and I couldn't tell if rBuilder had missed any required components without building and launching my appliances first. To get everything configured properly, I ended up having to cycle through the define, build and launch process many times, and spend time learning about rPath's conary recipe language to tweak my package definitions.

With that said, rBuilder is well worth evaluating, and rPath makes evaluations fairly easy to conduct. rBuilder is available in hosted and on-premises versions, and both flavors are freely accessible.

The on-premises version of rBuilder is free for use with up to 20 running virtual instances. The hosted version of rBuilder, called rBuilder Online, is completely free, but all appliances built and stored on rBuilder Online are publicly accessible.

Multiple Linux Platforms

rPath maintains its own Linux distribution, rPath Linux, from which rBuilder can pluck the components required to build software appliances. rPath Linux is a fairly conservative distribution that's capable of serving most Linux applications without issue.

However, for applications designed or certified to work on a specific distribution, using rPath's own Linux can pose support hurdles. It's in these cases that rBuilder's support for Linux distributions beyond rPath Linux comes in handy. rBuilder offers the choice of SUSE Linux Enterprise Server 10 or 11, Ubuntu Hardy or the Red Hat Enterprise Linux 5 clone, CentOS 5. For the SLES options, you must configure rBuilder with an activation key confirming that you're entitled to run the distribution.

When I embarked on my appliance creation journey, rBuilder prompted me to choose one of these distributions. Later, I could easily switch platforms through the product's Flash-based interface. I switched appliances from rPath Linux 2 to CentOS and vice versa.

Virtualization Target Support

Also new in the 5.x versions of rBuilder is a management console through which I could configure virtualization host targets to link up with rBuilder. I could choose from on-premises VMware ESX Server or Citrix XenServer hosts, or the cloud-based Amazon EC2 or the Globus Workspaces Cloud. I tested with a VMware vSphere installation and with an Amazon EC2 account. In both cases, I could see a list of the running instances on the services, as well as launch or terminate new instances from rBuilder.

I could also create virtual images in a fairly comprehensive range of other formats, including those for Microsoft Hyper-V, Virtual Iron, Parallels, QEMU, installable DVD or CD ISOs and plain TAR archives.

Flash-based Interface

Among the most striking changes between the 4.x and 5.x versions of rBuilder is a move from an HTML- and JavaScript-based Web interface to a new Web front end built on Adobe's Flash framework. The Flash interface gives rBuilder a look and feel more akin to a regular desktop application, while retaining the cross-platform support of the HTML interface.

Overall, my experience with the new interface was positive. In my first experiences with the new UI, just after Version 5 became available, I was tempted to say that rPath had pushed the envelope a bit too far in terms of what’s feasible with a Flash-based application, but the company has managed to iron out most of the early wrinkles I encountered.

For example, while testing earlier 5.x builds of rBuilder, I experienced some performance issues with the Flash-based interface, which tended to result in my browser--and all its open tabs--locking up for short periods of time. Specifically, I experienced these problems while connecting to VMware ESX server targets. With Version 5.2.1 of rBuilder, those particular lockup issues seemed to have been ironed out.

However, there were some Flash issues even in Version 5.2.1. In one case, I triggered a build of one of my appliance images, but the operation wasn't reflected in the interface. I clicked a couple more times to launch the build, but it wasn't until I refreshed the page that I could see that each of my clicks had indeed added a new build process to the product's queue. The interface offered no option to cancel the redundant operations, so I had to either wait for them to finish or visit a separate rBuilder administration console to cancel them.

EMC Profits Fall 43 Percent

The storage giant's earnings dropped to $205.2 million from $360.1 million in Q2 2008. Overall revenue was down 11 percent to $3.26 billion. However, CEO Joe Tucci was optimistic, saying he believes that a return to higher numbers may not be far away.


Storage giant EMC, which had enjoyed double-digit profits for 21 quarters up until this year, reported July 23 that it lost ground in Q2 2009 as its profit fell 43 percent from a year ago.

The company's virtualization subsidiary, VMware, reported a 36 percent drop in profits July 22. However, executives from both companies were optimistic, saying they believe that a return to higher numbers may not be far away.



EMC's earnings dropped to $205.2 million, or 10 cents per share, from $360.1 million, or 17 cents a share, in Q2 2008. Overall revenue was down 11 percent to $3.26 billion.

Revenue from VMware added $455 million to EMC's total. Echoing VMware's comments a day earlier, EMC Chief Executive Joe Tucci said he believes that market stabilization may be nearer than many people think.

"When IT markets resume to more normal spending rates, we expect EMC will return to generating double-digit revenue growth," Tucci told a conference call of analysts and journalists.

But in offering some guidance on the company's prospects, EMC signaled that the tech market is at least returning to more predictable conditions.

"While global conditions remain challenging and our full-year view of declining IT spending remains unchanged, EMC's second-quarter financial performance reflects customers' budget stabilization and improved business predictability," EMC Chief Financial Officer David Goulden said during the conference call.

"We now have better visibility and more confidence in the second half of 2009," Goulden said.

In its 2009 guidance, EMC forecast revenue of $13.8 billion, including its pending $2.2 billion acquisition of Data Domain. Analysts polled by Thomson Reuters had projected earnings of 78 cents a share on revenue of $13.49 billion, Reuters reported.

Thursday, July 9, 2009

Sun VirtualBox Virtualization Ready for Data Center

Sun’s VirtualBox virtualization platform, which until now could assign only a single x86 CPU to a virtual machine and was suited only for desktop applications, can now create and support up to 32 virtual CPUs in a single virtual machine, making it capable of handling server workloads like databases and putting it in closer competition with virtualization technology from VMware, Citrix and Microsoft. In addition, Sun has improved the graphics capabilities in VirtualBox for desktop applications.


Sun Microsystems’ VirtualBox virtualization platform is now ready for the data center.

VirtualBox 3.0, released by Sun June 30, can now run multiprocessor virtual machines for high-end workloads, according to company officials. Where the product in the past could assign only a single virtual processor to each guest, the new version can host up to 32 virtual CPUs in a single virtual machine, enough to accommodate such server-based workloads as databases and Web applications.

VirtualBox, which takes advantage of virtualization technology in x86 processors from Intel and Advanced Micro Devices, can now work in the data center as well as on the desktop.




"The rapid evolution and proliferation of VirtualBox software continues," Jim McHugh, vice president of marketing for data center software at Sun, said in a statement. "With each new version, VirtualBox software delivers more innovation, performance and power. And as virtualization continues to gain momentum in the market, the world's developers and IT decision makers are turning to VirtualBox en masse."

The new capabilities bring Sun’s virtualization platform into the realm of those from VMware, Citrix Systems and Microsoft.

Sun, which gained the VirtualBox technology through its 2008 acquisition of Innotek, has rapidly ramped up the platform’s capabilities, rolling out beta versions of the 3.0 release less than a month ago.

Along with the new server capabilities, Sun engineers have enhanced the platform’s desktop features, including improved graphics through added Microsoft Direct3D support for Windows guests. In addition, VirtualBox 3.0 supports Version 2.0 of the OpenGL (Open Graphics Library) standard, enabling guests running Windows, Linux, Solaris and OpenSolaris to run high-performance graphical applications that normally would require hardware acceleration.

VirtualBox 3.0 also supports a wider range of USB devices, including storage devices, Apple iPods and cell phones.

Cisco, VMware Look to Move VMs Between Data Centers

Cisco and VMware are working on a proof-of-concept around the idea of using VMware’s VMotion technology to move live virtual machines between multiple data centers, a capability that would aid in such areas as load balancing, data center maintenance and disaster avoidance. The two companies demonstrated the proof-of-concept during the Cisco Live show. However, VMware officials warn that more work needs to be done to make the concept a reality.


Cisco Systems and VMware are developing ways that enterprises can use VMware’s VMotion technology to move live virtual machines from one data center to another.

The two companies showed off a proof-of-concept at the Cisco Live 2009 show in San Francisco, and demonstrated the capabilities during Cisco CTO Padmasree Warrior’s keynote address July 2.

The project is still in the proof-of-concept stage, but VMware official Guy Brunsdon said in a recent blog post that moving live virtual servers to other locations over a WAN holds promise for businesses in a number of areas.

In particular, the capability would help enterprises in load balancing compute resources over multiple sites, Brunsdon said in his blog posted June 29. Businesses also could save power and cooling costs by being able to dynamically consolidate VMs to fewer data centers, he said.




In addition, businesses could avoid downtime during maintenance procedures in data centers by migrating applications offsite, and they also could more easily avoid natural disasters by proactively migrating important applications running on VMs to another facility.

VMotion has worked well in migrating live VMs from one host to another. In addition, VMware offers disaster recovery capabilities with its vCenter Site Recovery Manager, which enables businesses to improve their disaster recovery capabilities through automating recovery steps, testing recovery plans without interrupting the VMs, and providing steps for building and managing disaster recovery plans.

However, there are particular challenges to the idea of moving live virtual servers from one site to another, Brunsdon said.

“This, of course, is a non-trivial thing to do,” he wrote. “There is the challenge of moving a VM over distance (which involves some degree of additional latency) without dropping sessions. To maintain sessions with existing technologies means stretching the L2 domain between the sites—not pretty from a network architecture standpoint. And then there is the storage piece. If you move the VM, it has to remotely access its disk in the other site until a Storage VMotion occurs.”

For example, both the data center maintenance and disaster avoidance scenarios would require a Storage VMotion to move the disk image to the other data center.

Cisco and VMware engineers last year began working on the idea of moving VMs over long distances between multiple data centers, Brunsdon said. The joint Cisco-VMware lab in San Jose, Calif., has run several tests over disparate distances, he said. The demonstration at Cisco Live covered a distance of about 50 miles, he said.

According to a diagram of the San Jose-to-San Francisco test, the San Jose site includes VMware ESX servers and Catalyst 6500 switches from Cisco. At the San Francisco site were ESX servers and Cisco’s Nexus 5000 and 7000 switches.

Linking the two sites was an 80-kilometer single-mode optical fiber.

Gordon Haff, an analyst with Illuminata, said disaster recovery is a benefit that VMware has been touting with virtualization for several years.

A key benefit is the ability to create a disaster recovery plan that doesn’t entail spending the money to buy compute resources and having them sit idle in case of an emergency, Haff said. Virtualization enables businesses to work with the systems they have and use VMs for disaster recovery needs.

“You can use most resources normally most of the time,” he said. “But in the case of a problem, you can shift resources, but you don’t have a lot of idle resources.”

Five Continuing Trends in Data Storage

As we do at six- or 12-month intervals here at eWEEK, we offer a short list of key continuing trends in data storage, based upon daily conversations with storage vendors, analysts, data center managers, CIOs and CTOs -- even a few former industry executives now blissfully retired and simply watching this evolution with continued amazement.


Data storage historically has been thought of as a solid, super-important but not-very-exciting sector of IT. Well, "not-very-exciting" is a value judgment made strictly in the mind of the beholder, and storage certainly is not a newsless valley in the overall IT landscape.

New products with a connection to data storage, data disaster recovery, deduplication, thin-provisioning, capacity management and a slew of others are constantly coming into the market -- from established companies and newbies alike. Storage media, including spinning disk hard drives, solid-state NAND and NOR Flash, digital tape and optical disks continue to become more capacious and reliable as engineers and manufacturers improve upon improvements.




Ever-increasing capaciousness in the hardware: Capacities in new-generation hard disks, NAND and NOR Flash, digital tape and optical disks continue to skyrocket, thanks to brilliant engineering. As millions more transistors are crammed onto silicon chips at Intel, AMD, Samsung and other processor-makers, increasing storage space is being created for all the forms that hold bits and bytes. There's a physical limit, but we're not anywhere near it yet, experts say.

For example, laptops with 1TB storage drives are only months away from general availability.

Virtualization of formerly siloed storage systems: This trend started with testing and quality assurance work back in the mid-2000s but is now trending up very quickly. Many of these siloed systems -- especially in larger enterprises -- are still in transition, but industry analysts now estimate that some sort of virtualization is being used in production in nearly 90 percent of all enterprise IT systems. Only two years ago that percentage was in the 20s.

Standardization of deduplication in Tier 2 and Tier 3 storage: Where new-generation deduplication was a new, more-or-less experimental feature three years ago, offered by only a handful of storage providers (two of them were Avamar, now property of EMC, and RockSoft, bought by ADIC, which was in turn bought by Quantum), it is pretty much a standard requirement now.

Data deduplication, one of the most important breakthroughs in IT in the last two decades, eliminates redundant data from a disk storage device in order to lower storage space requirements, which in turn lowers data center power and cooling costs and lessens the amount of carbon dioxide produced to generate power to run the hardware.

What's not to like about dedupe? If you said or thought "nothing," you're right.

Online backup storage: Small and medium-size businesses and departments of large enterprises alike are now signing on in increasing numbers to services such as Mozy.com, Carbonite, Box.net, Amazon S3, CommVault, Asigra, iDrive, Iron Mountain Digital, Seagate EVault, and others. It took a couple of years for trust to become established -- and trust is still by far the biggest issue -- but reports of serious data loss have been relatively few and far between.

It won't be long before every laptop and netbook sold will feature a pre-install that will include online backup and virus protection. EMC is already providing this with its Atmos service for its Iomega desktop storage drives.

Secure, private cloud storage: Don't confuse this with online backup. In the last eight months, EMC, Sun Microsystems, IBM, Symantec, CA and ParaScale joined the quickly expanding market for software that enables companies to build their own private cloud computing environments. Those vendors introduced separate do-it-yourself cloud-building platforms, sparking a trend that includes such businesses as 3tera and Citrix, in addition to lesser-known smaller companies such as Nirvanix, Bycast, and Cleversafe.

FAA Gets Its New Virtualized Flight Plan System Off the Ground

EXCLUSIVE: The FAA, which has suffered a series of embarrassing flight plan system crashes during the last several years, has upgraded its legacy flight plan filing system to a new open-systems server and storage infrastructure supplied by Stratus Technologies. This architecture is now replacing critical systems that directly affect all air travelers in the United States.

The people whose job it is to schedule aircraft for takeoff, help guide passengers to their destinations and get them safely back down on the ground finally have some powerful new open-standards computer systems up and running to help them do their work more reliably.

The Federal Aviation Administration has endured a lot of grief in the last 24 months due to some well-documented crashes of its national flight plan-filing system. But the nation's No. 1 aerospace agency is finally bringing its Cold War-era mainframe IT systems into the 21st century.

Last year, the FAA upgraded its legacy internal business systems to a new open-systems server and storage infrastructure supplied by Sun Microsystems and an IP network provided by Cisco Systems. These systems currently handle all the agency's nonflight-related administrative functions, including the FAA's human resources information, e-mail, messaging, internal document routing and storage. The open systems worked well there, and the idea was to transfer the same kind of system to the all-important national flight-plan function.

NADIN's (National Airspace Data Interchange Network's) old mainframe-based system, an integral part of the overall NAS (National Airspace System) that processes an average of 1.5 million messages per day, was obsolete and was beginning to break down due to technical issues. Travel disruptions due to these breakdowns are not out of the ordinary, according to knowledgeable air industry sources.




As a result, industry analysts and a number of former FAA staff members worried about major air traffic stoppages, as was demonstrated three times last summer by the crash of the system head in Atlanta. They also were concerned about increasing vulnerability to terrorist cyber-attacks.

An example of this happened on Aug. 26, 2008, when a corrupt file entered the flight plan system and brought it down for about 90 minutes during a high-traffic period late in the day on the East Coast. This was not an isolated incident, as the FAA's chief administrator originally had told the media. Similar crashes occurred on Aug. 21 and in June 2008, FAA records show.

International intelligence analytical firm Stratfor reported a similar system outage back in 2000. Another was reported in June 2007 in addition to the Aug. 21 and Aug. 26 crashes. Those are the ones we know about; we don't know how many others were never made public information.

"The lack of redundancy and dynamism demonstrated ... by the latest NADIN crash makes a cyber-attack against critical U.S. infrastructure all the more feasible," Stratfor said at the time in an editorial commentary.

But all of these issues may now be in the past. It took a grand total of about five years, but the FAA has done its research, found several million dollars to pay for new hardware, software and services, and is well into the process of updating all of its systems.

"We've just about finished our transition from the legacy system over to the new system," FAA IT administrator Jim McNeill told eWEEK. "The main new system is for NADIN, built on Stratus Technology servers with virtualization, and handles all the legacy [mainframe] functions as well as new FAA-owned IP systems."

Key Requirement: Separate Data Flows

McNeill said there was a key requirement that had to be met in order for the new system to comply with FISMA (Federal Information Security Management Act of 2002) regulations: The FAA had to separate government-created data from non-government data.

"We were required to provide a separate server to support public data flows, due to the inherent security issues in TCP/IP," McNeill said. "In this interpretation, 'public data flows' means non-NAS systems. In the nature of our business, a lot of our clients are non-NAS systems; we're dealing with airlines, we have connections to 26 international agencies—these are all non-NAS systems. Basically, they're all private companies who provide value-added services to general and commercial aviation.

"What we're doing is providing a portal into the FAA system for these general and commercial aviation companies to file all flight plans, and keeping it separate from everything else."

The new, virtualized system—the first for the FAA—is built on new heavy-duty Stratus FTserver 6400s, which run on Intel Xeon quad-core processors. The system was designed by Lockheed Martin engineers, replacing two 21-year-old Phillips DS714 mainframes—located in Atlanta and Salt Lake City—that first went live in 1989 and have been cranking away ever since.

Overall, the old Phillips mainframes did yeoman's work on a 24/7 basis for two decades—ingesting, storing and processing an average of 1.5 million data points per day. The system and its designer deserve kudos for working all those years, but just like people, every system needs to be replaced at some point.

SMP-Enabled Sun xVM VirtualBox 3.0 Turns Up the Heat on VMware

Version 3.0 of Sun's xVM VirtualBox desktop virtualization tool adds support for multiple guest processors--a major feature addition that, when considered alongside the product's low cost (free) and broad host platform support, is certain to give VMware Workstation a run for its money.


Sun Microsystems' xVM VirtualBox, a no-cost virtualization tool that enables virtual machines to run on a variety of standard operating systems, continues to improve its position as a potential challenger to workstation products from VMware and Parallels.

Sun released Version 3.0 of xVM VirtualBox on June 30 and added symmetric multiprocessing (SMP) as the major new feature.

I tested VirtualBox 3.0 on a Sun Fire x4170 server running Windows Server 2008 64-bit and equipped with 12GB of RAM and two quad-core Intel Xeon x5570 "Nehalem" processors. On this machine, I was able to create guests with up to 16 virtual CPUs by taking advantage of hardware-enabled hyperthreading. I also tested it on a Lenovo T400s laptop running Windows Vista and equipped with an Intel Centrino Core 2 Duo CPU and 2GB of RAM, and on a Mac mini running OS X, where I used it to run Windows XP.

Check out images of VirtualBox 3.0 here.

In all cases, xVM VirtualBox installed and ran without problems. When I tried to assign virtual processors to a guest on the Lenovo notebook, I was warned to enable I/O APICs (Advanced Programmable Interrupt Controllers) to avoid IRQ sharing.

xVM VirtualBox did not prevent me from assigning more virtual processors than were available on the physical host. In the case of the Sun Fire x4170 I was able to assign 32 virtual cores to a guest even though that was twice the number of available cores. And even that was "cheating" by using hyperthreading to double my eight physical cores. The user documentation clearly indicated that virtual cores should not exceed actual available physical cores.
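
For readers who prefer to script such settings rather than click through the GUI, here is a minimal sketch (not part of eWEEK's tests) that uses the VBoxManage command-line tool that ships with VirtualBox. It assumes VBoxManage is on the PATH and that a powered-off guest named "test-guest" already exists; the guest name is purely hypothetical. The script sets the virtual CPU count and switches on the I/O APIC mentioned in the warning above.

import subprocess

VM_NAME = "test-guest"  # hypothetical guest name; substitute your own

def set_guest_cpus(vm, vcpus):
    """Assign vCPUs to a powered-off guest and enable the I/O APIC,
    which VirtualBox requires for SMP guests to avoid IRQ sharing."""
    subprocess.run(
        ["VBoxManage", "modifyvm", vm, "--cpus", str(vcpus), "--ioapic", "on"],
        check=True,
    )

if __name__ == "__main__":
    # Nothing stops you from over-committing here; per the documentation,
    # keep the count at or below the host's physical core count.
    set_guest_cpus(VM_NAME, 4)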

Aside from SMP, Version 3.0 consists mostly of tweaks to existing features, including experimental support for hardware 3-D acceleration through the DirectX 8/9 and OpenGL programming interfaces. While "no cost" is the most compelling reason to look at xVM VirtualBox, the addition of SMP support, along with the relatively quick tempo of product development—Version 2.2.4 was released at the end of May—recommends the product as a serious platform for IT pros.

Beyond the feature additions and improvements, there is a long list of bug fixes that include patches for various guest performance problems and for issues regarding the way VirtualBox handles the importing and exporting of OVF (Open Virtualization Format) virtual appliances. For the full list of VirtualBox 3.0 changes, see http://www.virtualbox.org/wiki/Changelog.

In addition to Windows and OS X, VirtualBox supports Linux, Solaris and OpenSolaris as host operating systems. VirtualBox is available for free download at http://www.virtualbox.org/wiki/Downloads.

Vizioncore Offers Data Protection Pack

Cost-conscious businesses might look to Vizioncore's data protection pack, which is available in two bundles and is specifically targeted at smaller deployments.



Virtualization data protection and management solutions company Vizioncore, a subsidiary of Quest Software, announced the availability of the SMB Data Protection Pack, a new sales offering aimed at small to medium-size businesses that have adopted, or are considering, VMware vSphere Essentials.

The SMB Data Protection Pack provides SMB customers with solutions such as data protection, high availability and offsite replication that will extend, enhance and complement the entry-level, all-in-one VMware vSphere Essentials offering. The company said the new pack is specifically targeted at smaller deployments and includes licenses for six CPUs, equivalent to the VMware vSphere Essentials license.

The SMB Data Protection Pack is available in two bundles. The primary bundle includes Vizioncore vRanger Pro for data protection and Vizioncore vControl for high availability, while the comprehensive edition, Vizioncore SMB Data Protection Pack with Replication, offers vRanger Pro and vControl, but also adds the additional protection and security of Vizioncore vReplicator for offsite disaster recovery replications. Both bundles are exclusively designed to be implemented with a single deployment of VMware vSphere Essentials.

“Procuring good technology for an SMB can often mean paying for solutions that may never be fully utilized,” said Vizioncore’s vice president of products Tyler Jewell. “However, when teamed with VMware vSphere Essentials, Vizioncore’s new SMB Data Protection Pack provides a truly pragmatic and unique set of solutions that directly results in easier management and greater protection of IT systems and, ultimately, richer ROI from virtualization initiatives.”

Bas Ter Heurne, sales manager for PQR, a specialist in professional ICT infrastructures focusing on storage, virtualization and application delivery solutions, said the new solution pack from Vizioncore is a great way for small businesses to build their virtual infrastructures by obtaining needed management functionality in a cost-effective way.

“VMware vSphere is undoubtedly a great step forward in the world of technology development and there have been many improvements over the last version. However, the SMB Data Protection Pack from Vizioncore offers SMBs sophisticated enterprise-level features and functions in products that are both easy to master and affordable,” he said. “Any business now can really leverage their investments in virtualization for major gains in productivity and flexibility, as well as strategic benefits in such areas as data protection.”

In June, Vizioncore announced updates to two of its key products: vOptimizer Pro 2.2, the company’s storage and VM optimization tool, and vFoglight 5.2.6, an updated version of its performance monitoring solution. With vFoglight, administrators can view their infrastructure through detailed architectural representations and use out-of-the-box alerts and advice to detect, diagnose and resolve problems affecting performance and availability. vOptimizer gives SMBs the ability to scan their organization’s VMware vCenter Server and ESX hosts to determine how much over-allocated virtual storage exists unnoticed.

EMC Leapfrogs NetApp, Ups Bid for Data Domain to $2.1 Billion

With the fear of federal antitrust challenges in the rearview mirror, EMC is upping its offer for data deduplication specialist Data Domain to $2.1 billion. The increased offer is the latest move in a back-and-forth between EMC and NetApp, which last offered $1.9 billion for Data Domain. However, the tug-of-war could be coming to an end, some analysts say. EMC is offering more money in its all-cash deal than NetApp, and the FTC’s decision to remove all antitrust impediments means EMC can close the deal faster than NetApp. NetApp's CEO said the company is weighing its options.


EMC, given the green light by federal regulators to pursue its acquisition of Data Domain, is upping its bid for the storage deduplication company.

EMC is raising its offer for Data Domain by more than 11 percent, to about $2.1 billion, in a bid to press the issue and to thwart an attempt by rival NetApp to buy the company.

In a letter dated July 6, Joe Tucci, chairman, president and CEO of EMC, told Data Domain Chairman Aneel Bhusri that not only is EMC willing to up its offer, but it also can close the deal within two weeks—earlier than under NetApp’s proposal—and will remove any deal protection provisions that could slow the process. Such provisions would include a break-up fee obligation, for example.

“This last point is very significant to you and your stockholders,” Tucci said. “Data Domain does not have any justification for continuing deal protection provisions for NetApp or any other party given our willingness to proceed without them. It was questionable agreeing to deal protections in your initial agreement with NetApp, when you knew of our interest in acquiring the company. There is no basis for continuing with them now.”

EMC’s increased bid is the latest move in a drama that started in May, when NetApp offered to buy Data Domain for $1.5 billion. The two suitors have gone back and forth over their attempts to buy Data Domain, though it appears that the competition may be coming to an end, according to Gartner analyst Roger Cox.

“It would be hard to see [the Data Domain board] turning this down,” Cox said. “From a pure cash point of view, EMC has more cash than the other guy does.”

EMC is offering more money than NetApp, and is offering an all-cash deal. That, combined with the decision by the U.S. Federal Trade Commission—which had reviewed the proposed deal for antitrust issues—to remove all regulatory challenges and give EMC a quick go-ahead, is enabling Tucci to promise Data Domain shareholders a quick close.

NetApp initially offered $1.5 billion for Data Domain, then upped its offer to $1.9 billion after EMC’s first proposal of $1.8 billion. NetApp’s offer is a combination of cash and stocks.

“We continue to believe that a business combination with EMC will deliver substantial and superior benefits to your company’s stockholders, customers, employees and partners,” Tucci said in his letter to Bhusri. “Since June 1st, when we submitted to you our prior proposal, we have received wholehearted support from many of your stockholders and customers validating our confidence in these benefits.”

In a written response, Dan Warmenhoven, chairman and CEO of NetApp, said the company is reviewing what its next steps should be.

"In response to EMC's revised, unsolicited offer, the NetApp Board of Directors will carefully weigh its options, keeping in mind both its fiduciary duty to its stockholders and its disciplined acquisition strategy," Warmenhoven said.

At the center of the bidding war is Data Domain’s market-leading data deduplication technology. Data dedupe, a relatively new technology that is getting a lot of attention from enterprises, helps eliminate redundant data from a disk storage device, which in turn lowers storage space requirements and data center power and cooling costs. It also helps businesses reduce the carbon footprints of their data centers.
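
To make the mechanics concrete, the following is a minimal, hypothetical sketch of hash-based deduplication in Python; it is not Data Domain's implementation (commercial products use far more sophisticated variable-length chunking and indexing), only an illustration of why storing each unique chunk once shrinks capacity requirements.

import hashlib

CHUNK_SIZE = 4096  # bytes; an arbitrary fixed chunk size for the example

def dedupe(data, store):
    """Split data into chunks, keep each unique chunk once (keyed by its
    SHA-256 digest) and return the digest list needed to rebuild the data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks are not stored again
        recipe.append(digest)
    return recipe

def rebuild(recipe, store):
    """Reassemble the original data from the digest list and chunk store."""
    return b"".join(store[d] for d in recipe)

if __name__ == "__main__":
    store = {}
    payload = (b"A" * 8192) * 3 + b"B" * 8192  # heavily repeated data
    recipe = dedupe(payload, store)
    assert rebuild(recipe, store) == payload
    print(len(recipe), "chunk references,", len(store), "unique chunks stored")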

EMC already offers data deduplication capabilities through an OEM relationship with Quantum, though acquiring Data Domain could put that relationship at risk, Cox said. Acquiring Data Domain would enable NetApp to expand on its deduplication capabilities, he said. NetApp says that data deduplication is a key component of its OnTap operating environment.

In addition, acquiring Data Domain would expand NetApp’s customer base and grow its sales force by 20 to 25 percent, he said. Data Domain currently has a sales force of more than 400 people, according to Cox.

For EMC, a key reason for buying Data Domain would be to keep NetApp from acquiring the company and growing stronger, he said.

Cox said he doesn’t expect any other suitors for Data Domain to emerge. IBM bought deduplication vendor Diligent Technologies last year, and Hewlett-Packard is “a long shot” to make a bid, he said.

CA Expands Support for VMware vSphere, Cisco Virtual Switches

By enhancing several software offerings with support for VMware's vSphere 4 virtualization platform and Cisco's Nexus 1000V virtual switches, CA is looking to ease enterprise management of virtualized and cloud computing environments. CA officials say technologies like those from VMware and Cisco are quickly blurring the line between physical and virtual environments, and enterprises need tools that can allow them to manage both environments from a single place.


CA is looking to make it easier for enterprises to manage their virtualized data centers and cloud computing environments by enhancing the capabilities of several software solutions to support VMware's vSphere 4 virtualization platform and virtualized network switches from Cisco Systems.

CA announced July 6 that it is expanding the reach of its Spectrum Infrastructure Manager, eHealth Performance Manager and Spectrum Automation Manager to create a single, fully integrated management offering for physical and virtual server and network environments. In addition, the solution will manage databases, voice and UC (unified communications) systems, and other networked applications.

Key to the enhanced offering is support for vSphere 4 and Cisco's Nexus 1000V virtual software switch, which can be integrated as an option into vSphere 4.

The move is part of CA's Lean IT initiative, designed to help businesses lower IT costs while improving performance and efficiency.

"With the integration of the Cisco Nexus 1000V and VMware vSphere 4, the lines between physical and virtual network and systems management have blurred," Roger Pilc, corporate senior vice president and general manager of CA's Infrastructure Management and Automation business unit, said in a statement. CA support for the two products "will provide a unique, highly integrated, closed-loop approach to business service assurance and automation."

The new support for the VMware and Cisco products in Spectrum Infrastructure Manager and eHealth Performance Manager will result in improved event correlation and root cause analysis, as well as the ability to identify performance issues in virtualized and cloud computing environments before those problems can impact services, according to CA.

They also will provide interactive reporting capabilities for troubleshooting and historical trend reports to help enterprises in their capacity planning.

The software also will offer consolidated hierarchical views of VMware vCenter Server hosts, data centers, compute clusters, and virtual switches and machines, and will provide an improved automated discovery of physical and virtual network and compute systems.

The software also will detect and track VMotion migrations of VMs from one host to another.

VMware Looks to Lure Virtual Iron Users from Oracle

VMware is hoping to entice Virtual Iron customers away from Oracle, which bought Virtual Iron in May. Oracle reportedly is telling Virtual Iron partners that it is halting development of Virtual Iron products and will integrate the technology into its own Oracle VM platform. VMware is offering discounts on products, including its new vSphere 4 virtualization platform, to Virtual Iron customers.


VMware is offering discounts on products, including its new vSphere 4 virtualization platform, to customers of Virtual Iron, the virtualization company bought by Oracle in May.

VMware on June 7 unveiled what it calls “safe passage” to Virtual Iron customers in the wake of reports that Oracle will shut down development of existing Virtual Iron products, opting instead to absorb those products into its own Oracle VM virtualization platform.

The virtualization technologies from both Oracle and Virtual Iron are based on Xen, the open-source hypervisor.

Oracle reportedly sent a letter in June to Virtual Iron partners saying that it not only is ending development of the company’s virtualization products, but is also stopping the delivery of orders to new customers.

In announcing the new incentive program for Virtual Iron customers, VMware officials emphasized their company’s breadth of virtualization products and stable road map as enticements to move away from Oracle.

Charles King, an analyst with Pund-IT Research, also said that VMware has its previous successes to fall back on, including all the global enterprises that now run VMware virtualization technology. That could be a big selling point as Virtual Iron customers decide which direction to head, particularly in a case like this, where the acquiring company has decided to end development and support of the products bought in the deal.

"If migrating to a new platform is required, why not consider an entirely new vendor as well?" King said in a report issued July 8. "That ... is precisely what VMware has in mind for Virtual Iron's cleintele."

The program—which includes price discounts—covers those Virtual Iron customers with current license and support contracts. The VMware products included in the program are VMware vSphere 4 Advanced Edition, VMware vSphere 4 Enterprise Plus Edition, VMware vCenter Server Foundation and VMware vCenter Server Standard.

The Virtual Iron customers also are eligible for discounts on support and subscription on those products. To take advantage of the program—which runs through Sept. 30—Virtual Iron customers need to show proof of a current VI license and support contract.

The way Oracle is handling the Virtual Iron acquisition could have a ripple effect on the company in several areas moving forward, according to King. Virtual Iron had anywhere from 2,000 to 3,000 customers, and it's no sure bet that they will stay with Oracle and its virtualization technology, he said.

In addition, the seemingly heavy-handed way Oracle is dealing with this acquisition is reminiscent of its PeopleSoft acquisition, King said. Overall, some companies, such as IBM and EMC, have done a better job than Oracle at handling post-acquisition tasks, he said.

It also could give some pause to Sun Microsystems customers. Oracle is expected to complete its acquisition of Sun this summer -- Sun investors are slated to vote on the $7.4 billion deal July 16 -- and while Oracle CEO Larry Ellison has said that Sun's products will be well taken care of by Oracle, the company's handling of Virtual Iron will be closely watched by rivals as well as customers.

"It would not be surprising if Sun customers monitor the Virtual Iron situation closely, or if Sun's competitors seek to turn any Oracle missteps into commercial opportunities for themselves," King wrote.

Analysis: EMC or NetApp Will Pay Way Too Much for Data Domain

UPDATED: Industry analysts who read/research/blog/report on these things tell eWEEK they are generally in agreement: The corporate battle between NetApp, the original mover in the Data Domain sweepstakes, and outsider EMC has gotten way, way out of hand. Whichever company eventually makes the purchase will be paying far too much for what amounts to one point product.

Is EMC allowing its considerable corporate ego to gain control in a non-solicited campaign to acquire Data Domain? And why does it want to add a feature—deduplication—that it already has in spades in its voluminous catalog?


However, most people agree that Data Domain has an excellent brand of deduplication—dedupe, as it is commonly called. Now both competitors want that golden software inside their walls to sell to the midsize- and small-business market, which has a considerable upside.

Data deduplication eliminates redundant data from a disk storage device in order to lower storage space requirements, which in turn lowers data center power and cooling costs and lessens the amount of carbon dioxide produced to generate power to run the hardware.

EMC already has three brands of deduplication at its disposal: Avamar, which it acquired in 2006 for $165 million, and licensing agreements with Quantum and FalconStor -- the latter for its virtual tape library, which contains dedupe software. All are highly respected brands. The prevailing thought is that EMC covets Data Domain's brand and also doesn't want NetApp to own it. Some analysts believe that EMC eventually may have to make some hard decisions on exactly how many versions of dedupe it really needs. Nothing yet has been said on the record about that, however.

Data Domain shareholders are in a rather cozy situation. Let's see, which offer might they accept—$1.9 billion cash and stocks from NetApp, or the latest counter-offer: $2.1 billion cash (an 11 percent premium) on July 6 from big, bad EMC?

Two hundred million dollars more to fill shareholders' bank accounts carries a bit of weight in anybody's business.

NetApp and Data Domain employees and board members, mostly Californians, have gone on record to say that they prefer each other as colleagues, largely because the stiffer culture of Boston-based EMC would not be quite as pleasant a workplace experience. It is also generally agreed that the product lines of NetApp and Data Domain dovetail better than Data Domain's with EMC.

Meanwhile, Data Domain shareholders are getting giddy about their investment. The stock was selling at $12.62 on April 7, 2009; it closed today at almost three times that at $34.

After the news of EMC's second offer broke July 6, NetApp CEO Dan Warmenhoven didn't have a comment other than to say that he and company board members are reviewing their options at this time.

EMC President, CEO and Chairman Joe Tucci told Data Domain Chairman Aneel Bhusri via a letter dated July 6 that not only is EMC willing to increase its offer by about $200 million, but that it also can close the deal within two weeks—far earlier than NetApp can—and remove any deal protection provisions that could slow the process. The Federal Trade Commission already has blessed the EMC proposal as being acceptable as far as antitrust issues are concerned.

Is Data Domain Being Overvalued?

Analysts contacted by eWEEK were in accord on the most important aspects of the deal: Either EMC or NetApp will pay far too much for Data Domain; NetApp and Data Domain are a better corporate fit; and NetApp would benefit far more from incorporating Data Domain than EMC would.

"I think NetApp's goose is almost cooked," storage analyst Dave Vellante of Wikibon told eWEEK. "It's pretty clear EMC is paranoid about NetApp getting Data Domain, so it will outbid NetApp perpetually, it seems—unless this is the ultimate poker hand to drive the price higher and then walk, which I don't think is EMC's intent."

Vellante said he's been thinking through possible "white-knight scenarios for NetApp, but I don't see it. NetApp's only hope, in my opinion, is to match EMC's offer and hope EMC bails [out of the deal] because the price is too high. I don't expect that to happen, but you never know what [Joe] Tucci's really thinking.

"All of this is insanity, in my view. Spending $2 billion-plus for a point product company with $300 million in revenue in a market that is perhaps $5 billion to $6 billion doesn't make sense, in my opinion."

Vellante said he'd rather see EMC invest in the information management space (e-mail archiving, e-discovery, records retention and others). "That's a high-value growth market with $10 billion-plus [market] potential and no clear winners. EMC currently has a subpar product offering with an outdated go-to-market strategy," he said.

Rob Stevenson of InfoPro has been talking to customers about this deal.

"For customers, the preferences are clear," Stevenson told eWEEK. "When we asked NetApp shops what acquisitions/partnerships would help them—over 300 large firms interviewed—they said NetApp and Data Domain are synergistic, and a combination of their products would simplify their operations.

"When EMC shops are asked the same question, end users mention NetApp and Compellent as top acquired preferences; Data Domain is not cited," Stevenson said.

Joe Martins, storage analyst at Data Mobility Group, told eWEEK it is a no-brainer as to which offer Data Domain shareholders will accept.

"From a purely financial perspective, DD stakeholders would be fools not accept EMC's ridiculously high counter-offer," Martins said. "Without a doubt in my mind, NetApp's original offer was already based on an inflated, distorted view of Data Domain's contribution to the big picture.

"My advice to shareholders: Accept EMC's offer and laugh all the way to the bank. As for NetApp, it should walk away and be grateful EMC saved it from paying too much for too little."

Martins said that his message to NetApp is straightforward.

"Licensing dedupe is the way to go. It's important in the same way that good fuel management enables a more fuel-efficient engine, but it's just one small piece of a much larger infrastructure engine," Martins said. "Remain focused and continue to dominate in your current markets—there's plenty of growth to be found there. I suspect you may find yourself the target of an acquisition in the next two to five years."

Arguably, for the first time in its history, Data Domain will have the mandate to integrate its IP into a broad spectrum of storage products, Martins said.

"At EMC, Data Domain will have to prove its value in a hostile environment. Throw the dedupe IP into EMC's gladiator pit, and let the customers decide who survives. That's a good thing for all of us," Martins said.

Brian Babineau of Enterprise Strategy Group said he thought the "interesting dynamic is EMC's interest in Data Domain during the transaction. It appears they have gotten more serious about the deal as time progresses, whereas in the beginning it looked like they were simply going to make it more expensive for NetApp.

"We shall see if NetApp counters one more time."

Editor's note: This story has been updated to clarify EMC's relationship with FalconStor for its VTL.