Veeam Software, award-winning provider of systems management tools for VMware virtual datacenter environments, today announced version 4.0 of Veeam Backup & Replication, the #1 backup solution for VMware environments. With version 4.0, Veeam will be the first to support new VMware vSphere 4 vStorage technology. With this new release, Veeam extends its leadership in the VMware backup market with the most innovative features and the most customer value. Backup & Replication 4.0 will be demonstrated in Veeam’s booth, #1202, at VMworld 2009 in San Francisco next week.
Only Veeam Backup & Replication offers full native support for new VMware vSphere functionality, and version 4.0 takes full advantage of the VMware vStorage API. This includes:
Support for thin-provisioned disks, which enables faster full backups and restores of virtual machines
The ability to leverage ESX4 changed block tracking for much faster incremental backups
Support for virtual applications (vApp), resulting in more flexibility when setting up backup jobs
The vStorage API is a VMware Consolidated Backup (VCB) replacement that enables local area network-free backups directly from storage area network (SAN) storage, without affecting an organization’s production ESX or ESXi hosts.
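As a rough illustration of the changed block tracking feature listed above, the following Python sketch models a generic CBT-style incremental backup: an initial full pass copies every block, and later passes copy only the blocks the hypervisor reports as changed. This is a simplified, hypothetical model for explanation only; it is not the Veeam product or the VMware vStorage API.

# Minimal sketch of a changed-block-tracking incremental backup.
# Hypothetical and simplified; not the Veeam or VMware vStorage API.

def full_backup(disk_blocks):
    """Copy every block of the virtual disk to establish a baseline."""
    return dict(enumerate(disk_blocks))

def incremental_backup(disk_blocks, changed_block_ids):
    """Copy only the blocks reported as changed since the last backup."""
    return {i: disk_blocks[i] for i in changed_block_ids}

# Example: a 1,000-block disk where only three blocks changed overnight.
disk = [b"\x00" * 4096 for _ in range(1000)]
baseline = full_backup(disk)                   # copies 1,000 blocks
delta = incremental_backup(disk, {3, 17, 42})  # copies 3 blocks
print(len(baseline), len(delta))               # 1000 3

The fewer blocks that change between backup runs, the smaller each incremental pass becomes, which is where the claimed speedup comes from.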
“With VCB being phased out, the vStorage API is the recommended API for VMware vSphere backup,” explained Ratmir Timashev, Veeam President and CEO. “Native support for the vStorage API makes Veeam Backup & Replication the most advanced and future-proof solution available on the market, extending its technology leadership with innovative features and functionality requested by our customers.”
“Not only is Veeam Backup & Replication easy to deploy and manage, but it is also cost-effective,” commented Veeam customer Paul Redpath, Technical Director, Catalyst2 Services Ltd. “We’re getting more for our money as we grow our infrastructure because with Veeam, we’re experiencing 30 to 40 percent data compression during backups. It adds up to real storage cost savings.”
Veeam Backup & Replication 4.0 also includes a new Enterprise Management Server that enables enterprise customers to manage multiple installations of Veeam Backup & Replication through a single web console. This allows customers to centralize backup and decentralize restore processes according to their administrative, business, geographical, and security requirements and boundaries. Native support for this distributed architecture offers the ability to easily scale VMware backup infrastructure as the virtual environment grows, while also providing centralized management and reporting capabilities.
Additional new features in Veeam Backup & Replication 4.0 include:
Near real-time replication that leverages new vSphere ESX4 functionality to replicate in five-minute increments, achieving better recovery point objectives (RPOs)
Hot VM copy capability to mirror production environments to test lab storage, for datacenter migrations or for ad-hoc backups
Backup storage space monitoring with alerts for advanced backup storage capacity planning
Replica seeding for the initial replication using removable storage to minimize traffic over WAN
And much more
The full list of new features is described in a three-page document “What’s New in Veeam Backup & Replication 4.0” available at www.veeam.com/go/backup40.
Pricing and Availability
Veeam Backup & Replication 4.0 is expected to be generally available in early October. North American pricing for the new version starts at $599 per socket, but the previous version’s price of $499 per socket will be honored on orders placed by Dec. 31, 2009. More information, including a product video, is available at www.veeam.com/go/backup40.
Thursday, August 27, 2009
Sybase Works With Symantec and VMware to Strengthen Data Infrastructure for Grid and Cloud Computing Environments With Latest Release of Adaptive Server
Sybase, Inc. (NYSE:SY), an industry leader in delivering enterprise and mobile software, today announced the newest release of Adaptive Server® Enterprise (ASE) Cluster Edition, its enterprise data management solution that reduces the complexity of deploying a database application across a shared disk server cluster environment.
Sybase has worked with its early customers and partners to help enterprises realize the benefits of grid and virtualized deployment in its latest version of ASE Cluster Edition. By providing dynamic resource management for a virtualized database environment, ASE Cluster Edition allows enterprises to meet customer service level agreements (SLAs) of their critical databases for availability while also reducing infrastructure costs through optimal resource utilization.
“Historically, deploying clustered systems on physical hardware has been complex and costly to test, develop, deploy and manage,” said Parag Patel, vice president, alliances at VMware. “Now, with Sybase Adaptive Server Enterprise Cluster Edition running on the industry-leading VMware platform, customers have further proof that they can virtualize their mission-critical workloads, simplify their management, and achieve levels of availability and continuity not possible on physical systems.”
“Together, Symantec and Sybase are delivering a truly integrated database and storage clustering solution to joint customers,” said Josh Kahn, vice president of product management, Storage and Availability Management Group at Symantec. “The combination of Sybase’s ASE Cluster Edition database and Veritas Storage Foundation for Sybase® ASE CE empowers customers with significant improvements in performance, availability and data management.”
ASE Cluster Edition’s newest enhancements provide key enabling technologies to ease manageability and improve availability in grid and cloud computing environments. Sybase has developed the latest version of ASE Cluster Edition to include:
Expanded partner ecosystem
Integration with Veritas Storage Foundation™ and Veritas™ Cluster Server – increasing the manageability, performance and availability of shared storage environments
Participation in the VMware® vCloud initiative – providing support for cloud computing environments
Extended Availability
Disaster recovery site support – allowing cluster support at remote sites if a primary site goes down
Ease of Manageability
Local installation - each node can be installed independently, improving flexibility and increasing availability of other nodes should one go down
Foundation for rolling upgrades – enabling independent maintenance of each node
“Because of its ability to provide agile deployment of physical resources, including servers and storage, and to deploy databases in a highly virtualized way, defining virtual servers and virtual clusters within the managed set of physical resources, Sybase ASE Cluster Edition provides features that enable the kind of flexibility and scalability necessary to deploy a database in the cloud. Also, because grid computing requires great flexibility and places highly variable workload demands on databases, features such as those in Sybase ASE Cluster Edition would seem essential to deploying database applications in a grid-based fashion,” said Carl Olofson, research vice president for Information Management and Data Integration, IDC.
“The latest release of ASE Cluster Edition incorporates the feedback we've received from our customers, who run some of the world's most critical data in areas such as financial services, telecommunications and government,” said Brian Vink, vice president Database Products at Sybase. “We continue to work closely with our customers and partners to deliver innovative database technologies like ASE Cluster Edition that offer superior availability, resource optimization and low total cost of ownership.”
Availability
Sybase ASE Cluster Edition is currently available. Please visit http://www.sybase.com/clusters for more information.
VMware Announces More Than 21,000 New Customers in the First Half of 2009 and Strong Customer Traction With VMware vSphere
VMware, Inc., the global leader in virtualization solutions from the desktop through the datacenter and to the cloud, announced strong customer traction for VMware's industry-leading virtualization platform. In the first half of 2009, more than 21,000 new customers have purchased VMware solutions -- equivalent to an average of 121 new customers per day. In addition, VMware vSphere™ 4 has reached more than 350,000 downloads in the first 12 weeks of general availability -- at an average rate of 140 downloads per hour. According to a recent poll on http://www.vmware.com/, approximately 75 percent of customers that responded are upgrading or plan to upgrade to VMware vSphere™ 4 within the next six months.
VMware vSphere™ 4 delivers the following critical benefits to customers: unmatched cost savings; the efficiency and performance required to run business critical applications; uncompromised control over application service levels; and preserved choice of hardware, OS, application architecture and on-premise vs. off-premise application hosting.
Unmatched Cost Savings Even Compared to So-Called "Free" Offerings
VMware vSphere™ 4 helps customers reduce capital expenses by up to 60 percent and operational expenses by an average of 33 percent. By allowing companies to make more efficient use of today's powerful servers, VMware vSphere™ 4 also enables unmatched cost savings even when compared to so-called "free" offerings.
"As a result of upgrading to VMware vSphere 4, the museum has saved $200,000 AUD on hardware procurement costs since migrating from VMware Infrastructure 3. We've also reduced our power requirements by 33 percent and have achieved a server consolidation ratio of 12:1," said Dan Collins, manager of information technology at Powerhouse Museum. "VMware vSphere 4 has also dramatically improved our infrastructure responsiveness and flexibility, and most importantly enhanced our recoverability of systems and information."
Boosted Performance and Improved Service Levels for Business Critical Applications
With VMware vSphere 4, customers are extending the benefits of virtualization to business critical applications such as e-mail, database, ERP, CRM systems and others. Customers are reporting significant increases in application performance, reliability and scalability after deploying VMware vSphere™ 4.
"After seeing the benefits of virtualizing our infrastructure applications, we wanted to move our SQL database into the virtualized environment," said Roy K. Turner, server systems engineer, Frederick Memorial Hospital. "The improved performance and enhanced reliability in VMware vSphere 4 have been invaluable in exceeding our SLAs and preventing revenue loss from our mission-critical applications. VMware Fault Tolerance further improves uptime for our most critical applications by providing zero-downtime recovery from hardware failures, while VMware Data Recovery helps us easily back up and protect our critical data."
"With VMware, we've found that we can roll out new services much faster, as well as increase the reliability of existing services, while cutting the costs of doing both," said Bob Plankers, technical architect, University of Wisconsin - Madison. "With VMware vSphere 4, our infrastructure management becomes much simpler through the use of new VMware vNetwork Distributed Switch and Host Profiles. VMware vSphere 4 also increased the amount of I/O, memory, and CPU available, meaning we can virtualize nearly every workload we have."
Wyse Technology Improves Virtual Desktop Environments With New Flash Acceleration Technology
Wyse Technology, the global leader in thin computing and client virtualization, today officially announced its new acceleration solution for Adobe Flash, as part of its incredibly popular TCX virtualization software suite. The new functionality improves the end user experience on virtual desktops by solving the Flash content quality challenge for VDI and Terminal Services environments.
"Every end user wants the performance of their thin client to be as good as or better than their PC," according to Mark Bowker, Analyst, Enterprise Strategy Group. "Wyse has been steadily deploying software as part of its TCX suite toward that end and the addition of Flash acceleration capabilities will help accelerate virtual desktop adoption."
With Flash applications abundant in all industry verticals, especially financial services and education, customers are thrilled that the content acceleration challenge has been solved without compromising the end user experience.
Offered as part of the Wyse TCX virtualization software suite, the new Flash acceleration extends the capabilities of the Microsoft RDP and Citrix ICA/HDX protocols for Flash Player 9 and 10 and Internet Explorer 6 and 7. Compatibility with VMware View and Citrix XenDesktop connection brokers, using Windows XP Pro, Vista or Windows 7 completes the solution.
"Wyse continues to innovate and stretch the capabilities of their thin clients to provide a rich virtual desktop experience," said Sumit Dhawan, vice president, product marketing, XenDesktop product group at Citrix Systems. "Wyse technologies perfectly complement and extend Citrix HDX technologies to deliver an excellent, high-definition user experience and expand the ability of IT to offer virtual desktops to a wide variety of users."
With Flash acceleration, animation, online training, YouTube and other video-rich Web sites are now presented seamlessly to end users.
"Thin client customers using sites like CNN.com and NYSE MarkeTRAC with Flash-based tickers, are significantly improved by Wyse's new Flash content acceleration capabilities," says Param Desai, Product Manager at Wyse Technology. "Flash acceleration continues our efforts to make the thin client user experience even better than a PC."
Pricing and Availability
Flash acceleration will be commercially available in October 2009, available on Wyse's V class and R class thin clients with Windows XP Embedded, Windows Embedded Standard 2009, or Wyse ThinOS operating systems, and supported PCs. For more information on Flash acceleration and Wyse TCX, please visit http://www.wyse.com/products/software/tcx/
"Every end user wants the performance of their thin client to be as good as or better than their PC," according to Mark Bowker, Analyst, Enterprise Strategy Group. "Wyse has been steadily deploying software as part of its TCX suite toward that end and the addition of Flash acceleration capabilities will help accelerate virtual desktop adoption."
With Flash applications abundant in all industry verticals, especially financial services and education, customers are thrilled that the content acceleration challenge has been solved without compromising the end user experience.
Offered as part of the Wyse TCX virtualization software suite, the new Flash acceleration extends the capabilities of the Microsoft RDP and Citrix ICA/HDX protocols for Flash Player 9 and 10 and Internet Explorer 6 and 7. Compatibility with VMware View and Citrix XenDesktop connection brokers, using Windows XP Pro, Vista or Windows 7 completes the solution.
"Wyse continues to innovate and stretch the capabilities of their thin clients to provide a rich virtual desktop experience," said Sumit Dhawan, vice president, product marketing, XenDesktop product group at Citrix Systems. "Wyse technologies perfectly complement and extend Citrix HDX technologies to deliver an excellent, high-definition user experience and expand the ability of IT to offer virtual desktops to a wide variety of users."
With Flash acceleration, end users' animation, online training, YouTube, and video-rich Web sites are now seamlessly presented.
"Thin client customers using sites like CNN.com and NYSE MarkeTRAC with Flash-based tickers, are significantly improved by Wyse's new Flash content acceleration capabilities," says Param Desai, Product Manager at Wyse Technology. "Flash acceleration continues our efforts to make the thin client user experience even better than a PC."
Pricing and Availability
Flash acceleration will be commercially available in October 2009, available on Wyse's V class and R class thin clients with Windows XP Embedded, Windows Embedded Standard 2009, or Wyse ThinOS operating systems, and supported PCs. For more information on Flash acceleration and Wyse TCX, please visit http://www.wyse.com/products/software/tcx/
Leostream Releases Connection Broker 6.2
Leostream has released Connection Broker 6.2, an update to its VDI connection broker. The company focused on adding Citrix support in the last major update; this release focuses on support for Microsoft technologies.
Features of the Leostream Connection Broker 6.2 include:
Installation & Management
Easy installation as a virtual application: The Connection Broker 6.2 virtual application installs natively on a Windows Server 2008 R2 Hyper-V or Microsoft Hyper-V Server 2008 R2 hypervisor;
Native support for Hyper-V-based virtual machines: Simplified discovery and machine power control.
End-user Experience Pack - Connection Broker provides a complete set of features to create an optimal end-user experience with Microsoft desktop virtualization software, including:
Windows 7 support: Full support for Microsoft’s new Windows 7 operating system;
RDP 7 support: Full support for the new remote desktop protocol (RDP) 7, with its high-performance enhancements such as bi-directional audio and rich graphics;
Multimonitor support: Leostream supports multiple monitors with RDP and a wide range of additional protocols;
USB management: USB pass-through policies allow administrators to manage classes of devices or individual devices, depending on need. USB policies can be combined with other Leostream policies, such as location-based ones, to support the exact implementation of business rules;
Location-based printing: Administrators can specify a list of network printers to connect to a particular group of clients based on their location. End-users can select local printers when connected to remote desktops;
Single Sign On for RDP: Provides seamless access to all versions of Windows virtual desktops from any client device, including Windows 2000, 2003, XP, Vista, and RDP 7;
User profile support: Consistently offers the same desktop to the user who travels or changes physical location;
Extensive flexibility in assigning users to resources such as desktops, applications and sessions: Leostream’s powerful policy capabilities are natively available in Microsoft environments.
VMLogix Expands Support For Heterogeneous Virtualization Infrastructure in Newest Release
VMLogix, Inc., a provider of virtual machine management solutions designed for software companies and IT organizations, today announced the newest version of its flagship product, VMLogix LabManager 3.8. The new release adds capabilities for network policy configurations and deployment within IP zones across multiple virtual hosts. This release also adds support for VMware vSphere 4 virtualization infrastructure and integration with VMware vCenter Server, formerly VMware VirtualCenter, as well as extended support for Microsoft Hyper-V Server 2008 R2.
"Products such as VMLogix LabManager 3.8 enable customers to leverage VMware's management console capabilities such as vMotion, which is a value add to users of vSphere and vCenter," said Theresa Lanowitz, founder and CEO of voke, inc. "As the virtualization market continues to mature and expand, an increasing number of organizations are using hybrid hypervisor environments. LabManager's ability to deploy virtual labs across multiple virtualization platforms offers choice while still ensuring centralized management functionality."
VMLogix LabManager allows development, test and support teams to build, snapshot, share and deploy production-like environments on-demand across virtualization platforms. With LabManager, customers can consolidate and automate lab IT infrastructure in order to deliver and maintain software applications more quickly, cost-effectively and reliably. Companies can dramatically reduce the manual effort, time and IT resources required to develop and maintain higher quality software applications by using comprehensive automation capabilities, advanced team management and seamless integrations with leading solutions from HP and IBM.
"VMLogix continues to develop and innovate with functionality for our virtual lab management products. We also continue to make it our priority to provide extensive support for leading virtualization platforms from VMware and Microsoft," said Sameer Dholakia, CEO of VMLogix. "Our newest release offers capabilities that make it even easier for organizations to integrate their virtual labs into their infrastructure as well as manage virtual machine instances within their deployment."
New features of VMLogix LabManager 3.8 include:
Support for VMware vCenter Server 4.0 and VMware vSphere: integrating with VMware vCenter Server 4.0, LabManager supports a lab with VMware vSphere or VMware Virtual Infrastructure 3 hypervisors on the virtual hosts. The support for vCenter Server in LabManager allows administrators to benefit from platform management capabilities such as vMotion, DRS, HA and resource pools.
Updated support for Microsoft Hyper-V (Windows Server 2008 R2): extending platform support functionality, taking advantage of Hyper-V R2's new shared cluster capability.
Network policies in configurations: allowing users to set custom firewall rules on a configuration’s soft router so that virtual machines within a LabManager configuration can connect to external IP addresses, both outbound and inbound.
IP zones across multiple hosts: enabling IP-zoned LabManager configurations to be deployed across multiple virtual hosts.
Availability
VMLogix LabManager 3.8 is now generally available. For more information, visit http://www.vmlogix.com/Products/VMLogix-LabManager/.
"Products such as VMLogix LabManager 3.8 enable customers to leverage VMware's management console capabilities such as vMotion, which is a value add to users of vSphere and vCenter," said Theresa Lanowitz, founder and CEO of voke, inc. "As the virtualization market continues to mature and expand, an increasing number of organizations are using hybrid hypervisor environments. LabManager's ability to deploy virtual labs across multiple virtualization platforms offers choice while still ensuring centralized management functionality."
VMLogix LabManager allows development, test and support teams to build, snapshot, share and deploy production-like environments on-demand across virtualization platforms. With LabManager, customers can consolidate and automate lab IT infrastructure in order to deliver and maintain software applications more quickly, cost-effectively and reliably. Companies can dramatically reduce the manual effort, time and IT resources required to develop and maintain higher quality software applications by using comprehensive automation capabilities, advanced team management and seamless integrations with leading solutions from HP and IBM.
"VMLogix continues to develop and innovate with functionality for our virtual lab management products. We also continue to make it our priority to provide extensive support for leading virtualization platforms from VMware and Microsoft," said Sameer Dholakia, CEO of VMLogix. "Our newest release offers capabilities that make it even easier for organizations to integrate their virtual labs into their infrastructure as well as manage virtual machine instances within their deployment."
New features of VMLogix LabManager 3.8 include:
Support for VMware vCenter Server 4.0 and VMware vSphere: integrating with VMware vCenter Server 4.0, LabManager supports a lab with VMware vSphere or VMware Virtual Infrastructure 3 hypervisors on the virtual hosts. The support for vCenter Server in LabManager allows administrators to benefit from platform management capabilities such as vMotion, DRS, HA and resource pools.
Updated support for Microsoft Hyper-V (Windows Server 2008 R2): extending platform support functionality, taking advantage of Hyper-V R2's new shared cluster capability.
Network policies in configurations: allowing users to set custom firewall rules on the soft router in a configuration that allows virtual machines from within a LabManager configuration to connect to external IP addresses, outbound and inbound.
IP zones across multiple hosts: enabling IP-zoned LabManager configurations to be deployed across multiple virtual hosts.
Availability
VMLogix LabManager 3.8 is now generally available. For more information, visit http://www.vmlogix.com/Products/VMLogix-LabManager/.
Embotics Eases Management of Virtual Environments
Embotics is rolling out Version 3.0 of its V-Commander offering for the automation and management of virtual environments, with the idea of driving down operational costs and increasing automation of the infrastructure. V-Commander also comes in three modules, enabling enterprises to pick and choose which features to buy when they need them.
Embotics is looking to reduce the operational costs associated with server virtualization in the data center.
The company Aug. 25 rolled out V-Commander 3.0, which is aimed at increasing automation and management in virtualized environments.
V-Commander 3.0 gives IT professionals a deep look into their virtualized environments, enabling them to get a historical view of events and offering a host of reporting capabilities. In addition, V-Commander 3.0 can establish and enforce policies, suspend virtual machines that don’t comply with policies, assign policy attributes at various levels throughout the virtual infrastructure and alert IT managers via e-mail.
The enhanced software also includes better role-based access control, support for mixed VMware environments and better compatibility with VMware’s VirtualCenter.
Embotics is offering V-Commander in three modules, enabling businesses to pick and choose what they need, giving them greater control over their virtual infrastructure deployments and an easier way to pay for them.
The modules include Federated Inventory Management, a real-time inventory and reporting system; Resource and Cost Management: Automated, which offers resource management and cost containment features, improving accountability, reducing administrative time and optimizing resource utilization; and the Operational and Risk Management module, which offers process automation and control, offering a more consistent environment and improved oversight.
How to Implement Green Data Centers with IT Virtualization
The use of virtualization technology is usually the first and most important step companies can take to create energy-efficient and green data centers. Virtualization is the most promising technology to address both the issues of IT resource utilization and facilities space, power and cooling utilization. IT virtualization, along with cloud computing, is the key to energy-efficient, flexible and green data centers. Here, Knowledge Center contributor John Lamb describes the concept of IT virtualization and indicates the significant impact that IT virtualization has on improving data center energy efficiency.
The most significant step most organizations can take in moving to green data centers is to implement virtualization for their IT data center devices. The IT devices include servers, data storage, and clients or desktops used to support the data center. There is also a virtual IT world of the future—via private cloud computing—for most of our data centers.
Although the use of cloud computing in your company's data center for mainstream computing may be off in the future, some steps towards private cloud computing for mainstream computing within your company are currently available. Server clusters are here now and are being used in many corporate data centers.
Although cost reduction usually drives the path to virtualization, often the most important reason to use virtualization is IT flexibility. The cost and energy savings due to consolidating hardware and software are very significant benefits and nicely complement the flexibility benefits. The use of virtualization technologies is usually the first and most important step we can take in creating energy efficient and green data centers.
Reasons for creating virtual servers
Consider this basic scenario: You're in charge of procuring additional server capacity at your company's data center. You have two identical servers, each running different Windows applications for your company. The first server—let's call it "Server A"—is lightly used, reaching a peak of only five percent of its CPU capacity and using only five percent of its internal hard disk. The second server—let's call it "Server B"—is using all of its CPU (averaging 95 percent CPU utilization) and has basically run out of hard disk capacity (that is, the hard disk is 95 percent full).
So, you have a real problem with Server B. However, if you consider Server A and Server B together, on average the combined servers are using only 50 percent of their CPU capacity and 50 percent of their hard disk capacity. If the two servers were actually virtual servers on a large physical server, the problem would be immediately solved since each server could be quickly allocated the resource each needs.
In newer virtual server technologies—for example, Unix Logical Partitions (LPARs) with micro-partitioning—each virtual server can dynamically (instantaneously) increase the number of CPUs available to it by utilizing the CPUs currently not in use by other virtual servers on the large physical machine. The idea is that each virtual server gets the resources it requires based on its immediate need.
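As a quick check on the arithmetic above, here is a small worked sketch of the Server A and Server B example in Python (a deliberately simplified model; real capacity planning would also weigh utilization peaks, memory and I/O):

# Worked version of the Server A / Server B consolidation example.
servers = {
    "Server A": {"cpu_pct": 5,  "disk_pct": 5},
    "Server B": {"cpu_pct": 95, "disk_pct": 95},
}

avg_cpu = sum(s["cpu_pct"] for s in servers.values()) / len(servers)
avg_disk = sum(s["disk_pct"] for s in servers.values()) / len(servers)

print(f"Combined average CPU utilization:  {avg_cpu:.0f}%")   # 50%
print(f"Combined average disk utilization: {avg_disk:.0f}%")  # 50%

# Hosted as virtual servers on one physical machine, the busy guest can
# borrow the headroom the idle guest isn't using.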
Cloud computing: exciting future for IT virtualization
Cloud computing is a relatively new (circa late 2007) label for the subset of grid computing that includes utility computing and other approaches to the use of shared computing resources. Cloud computing is an alternative to having local servers or personal devices handle users' applications. Essentially, the idea is that technological capabilities should "hover" over everything and be available whenever a user wants them.
Although the early publicity on cloud computing was for public offerings over the public Internet by companies such as Amazon and Google, private cloud computing is starting to come of age. A private cloud is a smaller, cloudlike IT system within a corporate firewall that offers shared services to a closed internal network. Consumers of such a cloud would include the employees across various divisions and departments, business partners, suppliers, resellers and other organizations.
Shared services on the infrastructure side such as computing power or data storage services (or on the application side such as a single customer information application shared across the organization) are suitable candidates for such an approach. Of course, IT virtualization would be the basis of the infrastructure design for the shared services, and this will help drive energy efficiency for our green data centers of the future.
Because a private cloud is exclusive in nature and limited in access to a set of participants, it has inherent strengths with respect to security aspects and control over data. Also, the approach can provide advantages with respect to adherence to corporate and regulatory compliance guidelines. These considerations for a private cloud are very significant for most large organizations.
Cluster architecture for virtual servers
There are now many IT vendors offering virtual servers and other virtual systems. Cluster architecture for these virtual systems provides another significant step forward in data center flexibility and provides an infrastructure for very efficient private cloud computing. By completely virtualizing servers, storage and networking, an entire running virtual machine can be moved instantaneously from one server to another.
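The claim that a running virtual machine can be moved between servers rests on live migration. A common technique is iterative pre-copy: memory pages are copied while the virtual machine keeps running, pages that get re-dirtied during the copy are re-sent, and a brief final pause transfers the remaining delta before the machine resumes on the destination host. The Python sketch below is a generic, self-contained illustration of that loop under toy assumptions; it is not any vendor's implementation, and products such as VMware vMotion differ in detail.

# Generic, self-contained sketch of pre-copy live migration (illustrative
# only; real hypervisor implementations differ in detail).
import random

class Host:
    """A physical host holding copies of VM memory pages."""
    def __init__(self, name):
        self.name = name
        self.pages = {}

    def send_pages(self, dest, page_ids, memory):
        for pid in page_ids:
            dest.pages[pid] = memory[pid]

class VM:
    """A toy virtual machine with a page-granular memory image."""
    def __init__(self, num_pages):
        self.memory = {i: f"page-{i}" for i in range(num_pages)}
        self.running = True

    def dirty_some_pages(self):
        # Simulate the guest touching a few pages while it keeps running.
        if not self.running:
            return set()
        return set(random.sample(sorted(self.memory), k=5))

def live_migrate(vm, src, dst, max_rounds=10, stop_threshold=8):
    dirty = set(vm.memory)                     # round 1: every page is "dirty"
    for _ in range(max_rounds):
        src.send_pages(dst, dirty, vm.memory)  # copy while the VM keeps running
        dirty = vm.dirty_some_pages()          # pages re-dirtied during the copy
        if len(dirty) <= stop_threshold:
            break
    vm.running = False                         # brief stop-and-copy pause
    src.send_pages(dst, dirty, vm.memory)      # send the final delta
    vm.running = True                          # VM resumes on the destination
    return dst

vm = VM(num_pages=1000)
destination = live_migrate(vm, Host("host-01"), Host("host-02"))
print(len(destination.pages))                  # 1000: full image transferred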
Tuesday, August 25, 2009
Netbooks Beat Apple Macs with Student Laptop Shopper, Survey Says
More than a third of the students surveyed by Retrevo reported wanting small, lightweight notebooks this year, and more than half had a budget under $750. The result, says Retrevo, is that back-to-school shoppers are passing on Apple laptops.
Retrevo, a product search engine, says it polled more than 300 of its 4 million monthly visitors and found that the “majority of student laptop shoppers will not consider buying a Mac,” the company reported in an Aug. 18 statement.
According to Retrevo, 34 percent of students said they want laptops that are small and lightweight, while 49 percent wanted full-size PC laptops. Price was also a considerable factor.
“While Apple has done well historically in the education market, 2009 marks the dawn of the netbook,” said Retrevo CEO Vipin Jain, in a statement.
“Students told us they wanted longer battery life, smaller size and a lighter laptop. [Fifty-eight] percent of them plan on spending less than $750.00. Only 18 percent have a budget over $1,000. Netbooks are affordable; some costing only $170. In contrast, Apple laptops start at $949.”
Jain added: “At a time when many people are experiencing economic hardship, having a new Apple laptop isn’t a necessity.”
It’s worth noting that the students weren’t asked the types of functionalities they would need their laptops to perform. The differences between an Apple laptop and a netbook, of course, extend beyond their price points.
It’s relevant to note, too, that only about 7 percent of computers in the United States are Macs, according to Technology Business Research. And that’s after OS X use tripled from 25 million to 75 million users between 2007 and 2009, as Phil Schiller explained at this year’s Worldwide Developers Conference.
With Mac users representing less than 10 percent of all computer users, and 18 percent of Retrevo’s shoppers saying they have a budget over $1,000, is it entirely accurate to say that Apple laptops are being overlooked by students this year?
“Retailers are working overtime to attract students,” said Jain in the statement. “Wal-Mart expanded its laptop selection by 40 percent and partnered with Hewlett-Packard to make a $298 Compaq Presario. Best Buy introduced the Next Class laptop line. The problem this year isn’t finding deals, it’s finding the best product for your budget.”
Apple is currently offering students a free iPod Touch with the purchase of a MacBook—a deal that ends Sept. 8.
Apple Mac OS X Snow Leopard Goes on Sale Aug. 28
Apple’s newest big cat, Snow Leopard, is now available for pre-order in Apple’s online store and will hit shelves Aug. 28. The long feature list includes mail that loads twice as fast and built-in support for Microsoft Exchange Server 2007.
Apple’s much anticipated Mac OS X v10.6, also known as Snow Leopard, is now available for pre-order at Apple’s online store and will go on sale Friday, Aug. 28, Apple announced on Aug. 24.
Mac OS X Leopard users will be able to upgrade to Snow Leopard for $29. In addition, anyone who purchased a new Mac between June 8 of this year and Dec. 26 can purchase a Snow Leopard upgrade package for $9.95, which includes shipping and handling.
The feature list on this new beast is long, with Apple saying that its engineers have refined 90 percent of the more than 1,000 projects that make up Mac OS X. Improvements are said to include a more responsive Finder; Mail that loads messages twice as fast as Leopard Version 10.4.8; a Dock with Exposé integration; 80 percent faster initial backup to Time Machine; a redesigned QuickTime X player that enables easier viewing, recording, trimming and sharing of video; and a more crash-resistant, 64-bit version of Safari 4 that is also 50 percent faster than the 32-bit version.
Like Safari, Finder, Mail, iCal and iChat are now also 64-bit, which is said to make them quicker and more secure, though still compatible with 32-bit applications.
Additionally, Snow Leopard is half the size of the previous version, freeing up 7GB of space on users’ hard drives. It requires a minimum of 1GB of RAM and will run on any Mac with an Intel processor.
“Snow Leopard builds on our most successful operating system ever, and we’re happy to get it to users earlier than expected," said Bertrand Serlet, Apple’s senior vice president of software engineering, in a statement. “For just $29, Leopard users get a smooth upgrade to the world’s most advanced operating system and the only system with built-in Exchange support.”
The built-in support for Microsoft Exchange Server 2007 should answer any remaining questions about whether Apple is interested in enterprises.
For OS X Tiger users with an Intel-based Mac, the upgrade will be available with iLife ’09 and iWork ’09 for $169, or $229 for a Family Pack.
Apple’s new Mac OS X Server Snow Leopard will also go on sale Aug. 28, for $499 with an unlimited number of client licenses.
HP Scores Air Force Contract Win
The deal includes placing a wide array of HP platforms at Air Force facilities worldwide, including the HP xw4600 Workstation, which combines next-generation performance technologies into a single processor socket workstation.
HP said Aug. 24 it landed a new Air Force contract to provide new HP Workstation and desktop PCs as part of its enterprise IT purchase program. The award is part of the Air Force's DLS (desktop, laptop and servers) Quarterly Enterprise Buy (QEB).
The QEB award will include the HP xw4600 Workstation, which combines next-generation performance technologies into a single processor socket workstation. Dual PCIe X16 Gen2 graphics interfaces provide up to four times the performance of previous graphics interfaces, along with the ability to power multiple displays without compromise.
Additionally, with an 80 PLUS efficient power supply standard and Electronic Products Environmental Assessment Tool (EPEAT)-registered configurations available, the HP xw4600 is designed to optimize energy use while maintaining high performance.
HP will include customized security configurations that meet Air Force specifications and tests.
The Air Force Information Technology Commodity Council, which includes top Air Force officials, evaluates vendors' submissions for the QEB and their ability to deliver quality enterprise computing in the toughest of environments.
INSIDE MOBILE: How MiFi Provides Mobile Internet Access on Multiple Devices
Wi-Fi does a fine job of providing Internet access around the home and office. For the traveler who needs mobile Internet access on multiple devices at the same time, however, MiFi makes a lot of sense. Here, Knowledge Center mobile and wireless analyst J. Gerry Purdy explains what MiFi is and how it provides simultaneous Internet access for multiple notebook PCs or mobile devices.
There's a really cool solution to getting Internet access while traveling called MiFi (pronounced "My-Fi"). It is a small device that is basically two wireless components in one package: a wide area wireless cellular modem and a Wi-Fi access point. Verizon Wireless provided me with a MiFi unit to test a few weeks ago, and I finally had a trip scheduled in which I could try it out.
My wife and I attended my son Jason's wedding in Maine (held at the beautiful Retreat at French's Point) and stayed in the Belfast Bay Inn, a classy bed and breakfast right in the heart of Belfast, Maine. I set up our three notebook PCs: my Frost & Sullivan system, my personal system and my wife Alicia's system.
In order to get the MiFi working, it has to be provisioned by Verizon Wireless (so it is recognized as a valid unit on the network) and then activated. With the help of Brenda Rainey in Verizon Wireless PR, who also set up my account, the unit was provisioned and then activated as a demo unit.
Normally, the MiFi requires a two-year commitment at $40 per month for 250MB or $60 per month for 5GB of data. Since we carry around three notebook PCs, we would otherwise have had to sign up for three wide area wireless modem accounts, one for each notebook PC, at three times $40 to $60 per month, or $120 to $180 per month.
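To make the comparison concrete, here is a quick back-of-the-envelope calculation of my own, using only the plan prices quoted above (taxes and hardware costs are ignored):

# Monthly cost of three separate wireless modem plans vs. one shared MiFi,
# using the $40 (250MB) and $60 (5GB) price points quoted above.
plan_low, plan_high = 40, 60
notebooks = 3

separate = (notebooks * plan_low, notebooks * plan_high)
shared_mifi = (plan_low, plan_high)

print(f"Three separate modem accounts: ${separate[0]} to ${separate[1]} per month")
print(f"One shared MiFi:               ${shared_mifi[0]} to ${shared_mifi[1]} per month")
print(f"Monthly savings:               ${separate[0] - shared_mifi[0]} to ${separate[1] - shared_mifi[1]}")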
To be sure, many hotels provide Wi-Fi, but in many cases they charge anywhere from $9.95 to $19.95 for 24 hours of access. Some hotels, most notably Marriott Courtyard and similar mid-tier hotels that cater to the business traveler, provide free Wi-Fi access. But most of the time, whether or not you paid for access, the hotel requires you to enter your room number and then allows only one computer on that account at a time.
My wife Alicia and I spent our first night of the trip at the Hyatt Regency in Boston. Brenda was still working to get my MiFi activated, so I had to sign up for one day of Wi-Fi access through T-Mobile. The cost was $9.95 for 24 hours of access, and the system allowed only one device on the account at a time.
We had to log out when one of us finished checking e-mail and browsing, log on with the other computer, and keep switching back and forth that evening and the next morning before leaving for Maine. Continually swapping accounts to get Internet access for our three notebook PCs was a pain.
Brenda got my MiFi working the next day, so I set it up in the living area in the Belfast Bay Inn. In order to get it working, you have to attach the MiFi to one of your notebook PCs. The software to activate the MiFi unit self-loads. Once it's activated, you can leave it connected—in which case it operates as a "tethered" wide area wireless access modem. But to make it work as a MiFi, you unplug it from the computer and press the lighted button on the unit. At that point, the access portion of the MiFi begins to transmit its service set identifier (SSID), which I could see from each of our notebook PCs.
The notebook PC shows the Wi-Fi AP with the name "Verizon" with the modem ID and notation as "Security-Enabled" (to make sure others can't get unauthorized access and consume your allotted capacity). When you select it, Windows asks you to enter either a Wired Equivalent Privacy (WEP) key or password. After I entered the password (supplied on the back of the modem), I was able to get concurrent access for all three of our notebook PCs during the remainder of our trip.
Upon Return to Apple, Jobs Focuses on Tablet Device
The Wall Street Journal reports that the return of CEO Steve Jobs, and his dedication to producing a tablet computer, is ruffling some feathers at Apple.
While reports of an impending announcement of a tabletlike device from Apple continue to consume the Internet, new information suggests that since CEO Steve Jobs’ return to active duty at the company, his focus has been on the production and launch of such a device.
The Wall Street Journal quoted sources “familiar with the situation” as saying Jobs has been concentrating on a portable, touch-screen device since his return, causing a certain measure of frustration among other Apple employees.
"People have had to readjust” to Jobs’ return, an unnamed employee told the paper, although an e-mail Jobs sent to the WSJ said much of the paper’s information was incorrect, albeit without going into further detail. Jobs took a six-month medical leave of absence from the company in January of this year, approximately five years after he revealed he had been diagnosed with pancreatic cancer. Resource Library:
For months, rumors about a tablet device from the company known for its astutely designed, if expensive, computers and consumer devices, such as the popular iPhone and the iPod digital music player, have piqued the interest of analysts, investors and consumers. Despite the growing popularity of smaller, less expensive netbooks, Apple COO Tim Cook and Jobs have repeatedly said they have no interest in producing a computer in the $500 range, the price point around which most netbooks cluster. The tablet is widely considered to be Apple’s answer to those devices and an attempt to change consumers’ conceptions of tablet computers, which have struggled to find an audience.
Earlier this month, Barron’s reported that an unnamed analyst got a look at the tablet the company has in the works, which features a 10-inch screen and integrated 3G, according to the financial publication. The tablet is expected to be priced between $699 and $799 and, as a media- and game-focused device, be capable of playing high-definition movies. Some pundits predict the tablet will make its debut during Apple’s press event on Sept. 9, though other analysts and research firms suggest a release date closer to January 2010.
Adding to the frenzy of the tablet rumors was a research note released earlier this month by Piper Jaffray, which said the Apple tablet PC will be cheaper than a MacBook but still more expensive than the netbooks that are currently dominating sales on the lower end of the PC market. Despite that higher price point, Piper Jaffray sees an Apple tablet PC as a challenger in the netbook market, as well as competing against mobile devices from companies such as Amazon.com.
The issue of price is still expected to play a role in how such a device is received, at least in the lucrative college student market, according to a recent survey by product review search service Retrevo. Even with the tablet expected to be priced at the high end of the netbook market, Apple’s notebooks, which start at $949, may start to see sales declines.
Retrevo said it polled more than 300 of its 4 million monthly visitors and found that the “majority of student laptop shoppers will not consider buying a Mac,” with price being a considerable factor.
Microsoft to 'ribbonize' Vista with Windows 7 look
Microsoft will offer Windows 7's ribbon-style application interface to Windows Vista users in an update this October, according to the company.
As first reported by Long Zheng, the blogger who writes the popular istartedsomething.com, Microsoft will provide Vista users an optional update that installs the code necessary to display Windows 7's Ribbon framework on its predecessor.
The framework, called "Scenic Ribbon," is a derivation of the ribbon-esque "Fluent" user interface that debuted in Office 2007 two years ago. Both feature a wide ribbon-like display at the top of a window that replaces the traditional drop-down menus, small icons and toolbars that have standardized Windows applications' look-and-feel for decades. Office 2007 faced serious resistance from some users over the ribbon when it launched, although that has subsided over time.
More recently, complaints mounted over plans by OpenOffice.org to overhaul the interface of that open-source productivity suite. Some have blasted the organization for parroting Office 2007's ribbon.
"The Office ribbon sucks. Please don't copy it," wrote one user in a comment to a Sun Microsystems blog. Sun contributes engineering and developer time to OpenOffice.org.
Earlier this year, Microsoft said the ribbon interface would be used by both Microsoft and third-party developers to distinguish new applications for Windows 7 from older versions that ran, say, on Windows XP or Windows Vista.
"This is one of the things we think will differentiate apps written for Windows 7, as opposed to those for earlier versions of Windows," said Mike Nash, the head of Microsoft's Windows product management, in an interview with Computerworld last January.
That plan seems to be in tatters now. Starting in October, application developers will be assured that new software they've crafted to include the Scenic Ribbon interface will also run on Vista.
"A Windows 7 interoperability pack, known as the Windows 7 Client Platform Update, is to be released alongside Windows 7 in October of this year," said Karl Bridge, a Microsoft programming writer, in a message posted last week to a forum on the MSDN (Microsoft Developers Network) site. "This update provides down-level support for the Windows Ribbon framework and will be made available from the Microsoft Download Center and as a 'Recommended update' on Windows Update."
Bridge added that the update will support all versions of Vista, including the entry-level Home Basic and Starter, which for Vista has been sold only in a limited number of markets overseas.
Application developers who build software with Windows 7's ribbon interface will have to point users to Windows Update or Microsoft's download site to grab the Client Platform Update, or silently call Windows Update as part of setup, Bridge said.
Microsoft's most visible "ribbonized" Windows 7 applications are the revamped Paint and retooled Wordpad, the basic image editor and word processor, respectively, bundled with the OS.
Windows XP users will be out of ribbon luck, however, as the October update will not apply to the eight-year-old operating system.
Virtual desktops to the rescue
Back in September 2008, while Louisville, Ky., was recovering from a wind storm that left much of the city without power, IT Director Brian Cox was dreaming not of gentle breezes but of desktop virtualization.
The storm left Cox, who is director of IT customer service at Norton Healthcare, scrambling to create temporary desktops for about 200 employees from an outlying billing office who had been knocked off the power grid. "You can go for a day or two without power and get caught up, but once the outage hits three or four days, if you're not getting your bills out the door, especially with time-sensitive Medicare and Medicaid, you don't get paid for services you provided," he says.
Three days into the outage, Cox began setting up workers at PCs in training rooms and other temporary spots and loading up their applications. "If we had had desktop virtualization in place for them, many could have worked from home, a different office or contingency location like a hotel and have had access to their applications right away. We would have been able to say, 'OK, log in here just like you do from the office,' and they'd have been back to work in no time."
Fortunately, the situation wasn't as dire as it could have been. Norton already had embraced a virtual desktop infrastructure for the company's five hospitals, plus a few specialized cases. One of those special instances involved moving billing types that required no "human touch" onto the virtualized infrastructure -- meaning, onto hosted desktops in the data center. "When the power went out, the billing office, the lady running those systems was able to work from home and she got 50% of the bills out the door," Cox says.
From hospital floors to satellite offices
Since the end of 2007, the IT team has deployed 950 virtual desktops, mostly in Norton's five hospitals, for physician and nurse access to a host of applications, including the main healthcare information and picture archiving systems. "We've been able to run just about every single application we've tried on the virtual desktops," Cox says.
Previously, Norton used Citrix Systems' MetaFrame client/server technology to provide access to the healthcare information system, but that had become too limiting. Users wanted to be able to tap into more than just that one application from a terminal, he says.
For the virtualized desktop infrastructure, Norton uses VMware View (formerly VMware Virtual Desktop Infrastructure, or VDI) running on 10 IBM 3850 M2 hosts. Norton has been sprinkling thin clients throughout the hospitals, from which physicians, nurses and other personnel can access applications once they've been authenticated via the hospital's Sentillion single sign-on system. Most clients are Wyse Technology terminals, but Norton also has repurposed some older desktops with a VMware overlay, Cox says. Windows XP is the current operating system in use.
Microsoft to raise some EU Windows 7 prices
Microsoft Corp. said today that it will raise prices for non-upgrade editions of Windows 7 sold in Europe starting Sept. 1.
But because most users will upgrade copies of Windows XP or Vista -- and those upgrade prices will remain unchanged -- or buy a new PC with Windows 7 already installed, only a minority will feel the extra pinch.
Also today, Microsoft announced that it will sell its multilicense "Family Pack" Windows 7 Home Premium upgrade to users in Austria, France, Germany, Ireland, the Netherlands, Sweden, Switzerland and the U.K. for a limited time starting Oct. 22.
Windows 7's price jump is one of the side effects of Microsoft's decision last week to drop the "E" edition for European customers. In countries that use the euro, increases will range from €20 to €80; they will range between £30 and £70 in the U.K.
Three weeks ago, Microsoft ditched its plan, first announced in mid-June, to sell European customers Windows 7E, a version of the upcoming operating system that would omit Internet Explorer 8. The company instead came up with a plan to give Windows 7 users the ability to choose the Web browser they want to use. Although Microsoft has not gotten the green light from EU antitrust regulators that the so-called browser ballot screen scheme will be accepted, it was confident enough in its chances to back away from a Europe-only edition.
Windows 7E and the ballot screen are two concessions Microsoft has made this year in an attempt to prevent antitrust officials from levying fines or demanding more significant changes to the company's practice of bundling IE with Windows.
Because Windows 7E could only be offered in a so-called "full" version that required a clean install -- an in-place upgrade would have left IE on users' PCs -- Microsoft had planned to sell only those full, or non-upgrade, editions in Europe, but at the upgrade versions' prices.
That will change as of Sept. 1, Microsoft said today. "[This] means that we are now able to have an upgrade version of Windows 7 available in Europe at launch," said Microsoft spokesman Brandon LeBlanc in an entry on a company blog today. "Windows 7 retail boxes will be available in both Full and Upgrade versions via pre-orders through Microsoft online stores where available and our retail partners starting September 1 and at General Availability on October 22."
Unlike an upgrade of Windows, a "full" edition can be installed on a machine not running Microsoft's operating system.
After Sept. 1, prices for Windows 7 upgrades in the U.K. will be £79.99 for Home Premium, £189.99 for Professional and £199.99 for Ultimate. In other countries, prices will be €119.99 for Home Premium, €285 for Professional and €299 for Ultimate.
New prices for the "full" versions in the U.K. will be £149.99 (Home Premium), £219.99 (Professional) and £229.99 (Ultimate). Prices for the same editions in countries that use the euro will be €199.99 (Home Premium), €309 (Professional) and €319 (Ultimate).
Those prices take effect Sept. 1.
People who have already pre-ordered Windows 7E, and who continue to do so through Aug. 31, will receive the full versions, as originally promised, LeBlanc said.
Microsoft also added eight more countries to the list of those where it will sell a Family Pack starting Oct. 22, Windows 7's official on-sale date. The packs let buyers upgrade as many as three PCs from Windows XP or Vista to Windows 7 Home Premium.
LeBlanc tied the availability of a Family Pack in those eight European countries to the decision to kill Windows 7E. "So what changed to make this possible? Basically, the fact that we are now able to have an upgrade version of Windows 7 available at launch," LeBlanc said.
Late last month, Microsoft announced that it would sell the Family Pack to U.S. and Canadian customers.
As has been its practice, Microsoft set the prices of the European editions of the Family Pack at amounts that are much higher than U.S. prices when currency exchange rates are taken into account.
In the U.K., the three-license pack will go for £149.99, or $246.03 at current exchange rates; that's $96.04 more than the $149.99 U.S. price tag. Customers elsewhere in Europe will pay €149.99 for Family Pack, or $214.56 at today's exchange rate, for a $64.57 premium over the U.S. price.
As in the U.S., however, the Family Pack can dramatically drive down the price of upgrading several machines for European users. In the U.K., the pack saves £92.98 compared with the cost of three separate Home Premium upgrade licenses, while in the rest of Europe the savings comes to €209.98.
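The continental figure follows directly from the euro prices quoted above, as this quick sanity-check sketch shows (the U.K. savings depends on the exact per-license upgrade price, so only the euro case is computed here):

# Savings from one Family Pack vs. three individual Home Premium upgrades,
# using the euro prices listed earlier in this article.
home_premium_upgrade = 119.99   # euro price of a single Home Premium upgrade
family_pack = 149.99            # euro price of the three-license Family Pack
licenses = 3

savings = licenses * home_premium_upgrade - family_pack
print(f"EUR {savings:.2f}")     # prints EUR 209.98, the figure cited above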
Microsoft has said that it will start taking pre-orders for Family Pack Oct. 18. The company has declined to specify how long the limited time offer will run or, if it's pegged to unit sales, at what point Microsoft will stop selling Family Pack.
Bad Software Design Inhibits Use of Enterprise Apps
Wondering why your company's staffers are using only a fraction of the software features and functionality that your bounteous enterprise software offers?
Harold Hambrose can give you an answer. In fact, Hambrose, founder of Electronic Ink, a consultancy specializing in designing and developing business systems, wrote a book about what he claims is the $60 billion that U.S. businesses will waste this fiscal year on poorly designed software.
The new book, Wrench in the System (Wiley), takes a scathing look at business software development practices, especially the products of enterprise vendors. "Software manufacturers are generally confident that their products will succeed on the strength of their technology," Hambrose writes. "But products that don't appeal to their users can be self-defeating. Whenever software systems create obstacles -- technical jargon, ambiguous messages, illogical sequences or visual clutter -- the people who use these systems will respond in a variety of ways." That typically includes undesired behaviors that users (and CIOs and applications managers) know all too well -- frustrating and inefficient workarounds, complete disregard for business process, or abandonment of the application altogether.
Hambrose studied graphic design at Carnegie Mellon University and later contributed to the user interface for IBM's OS/2 and the first computerized patient record for First Data, according to his bio. He founded Electronic Ink in 1990 and has since worked with British Petroleum, Comcast, McDonald's and Research in Motion, among other Fortune 500 firms, on software design issues.
In his book, Hambrose offers advice and explains low-cost development changes that can make a huge difference. CIO.com Senior Editor Thomas Wailgum recently talked with Hambrose about user frustrations, why most packaged vendors apps are poorly designed, and why he wrote the book.
"I wanted to be able to give a larger audience the tools to push back on IT vendors more effectively and perhaps give some power to people in trenches to stand up and say: 'This is why this system sucks for me,'" Hambrose says. "I hope the book appeals to not only the CIOs out there, but to the doctors, nurses, stockbrokers and whoever else is wrestling with systems that have been put in their hands with the best intentions, but yet they're still wrestling with them."
CIO.com: What's the biggest pushback you hear from potential customers or software vendors?
Harold Hambrose: In the larger ERP environments, the biggest pushback you hear is: "Oh, you're a design team, and you're going to propose we customize our SAP or Oracle system." And that's just not true. What we represent is a method to configure the system -- prelaunch -- that improves usability and adoption.
Another pushback I hear is: "I have business analysts around the table, so don't they do what you folks do?" No. In fact, we want you to replace business analysts with designers. We know that you folks understand the business. What designers afford you to do is to model that in new ways and all those models allow you to change the way you're thinking with this tool.
Desktop multiprocessing: Not so fast
Not every application can be reprogrammed for multicore architectures, and some bottlenecks will always remain. Here's why.
Until recently, you could reasonably expect this year's software to run faster on next year's machines, but that's not necessarily true going forward. For the foreseeable future, significant performance improvements are likely to be achieved only through arduous reprogramming.
Some time ago, computer vendors passed the point of diminishing returns concerning processor clock speeds, and could no longer keep hiking frequency rates. To maintain continued performance improvements, suppliers turned to installing multiple instances of the processor -- multiple cores -- on a processor chip, and as a result, multicore processors are now mainstream for desktops. But to realize any performance improvements the software has to be able to use those multiple cores.
And to do that, most software will need to be rewritten.
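As a simplified illustration of what that rewriting involves -- the sketch below is not drawn from any of the vendors quoted here -- the same CPU-bound task is run first on one core and then spread across all available cores using Python's standard multiprocessing module:

# One CPU-bound job run serially, then fanned out across cores.
import multiprocessing as mp
import time

def count_primes(limit):
    """Deliberately naive, CPU-bound prime count."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50000] * 8                         # eight independent work items

    start = time.time()
    serial = [count_primes(c) for c in chunks]   # uses a single core
    t_serial = time.time() - start

    start = time.time()
    with mp.Pool() as pool:                      # uses every available core
        parallel = pool.map(count_primes, chunks)
    t_parallel = time.time() - start

    assert serial == parallel                    # same answers, different wall time
    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")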
"We have to reinvent computing, and get away from the fundamental premises we inherited from von Neumann," says Burton Smith, technical fellow at Microsoft Corp., referring to the theories of computer science pioneer John von Neumann (1903 - 1957). "He assumed one instruction would be executed at a time, and we are no longer even maintaining the appearance of one instruction at a time."
But software cannot always keep up with the advances in hardware, says Tom Halfhill, senior analyst for the Microprocessor Report newsletter in Scottsdale, Ariz. "If you have a task that cannot be parallelized and you are currently on a plateau of performance in a single-processor environment, you will not see that task getting significantly faster in the future."
New law in town
For four decades, computer performance progress was defined by Moore's Law, which said that the number of devices that could economically be placed on a chip would double every other year. A side effect was that the smaller circuits allowed faster clock speeds, meaning software would run faster without any effort from programmers. But overheating problems on CPU chips have changed everything.
"The industry has hit the wall when it comes to increasing clock frequency and power consumption," says Halfhill. There are some chips edging above 4GHz, "but those are extreme cases," he says. The mainstream is still below 3GHz. "The main way forward is through multiple processors."
By adding more cores to the CPU, vendors offer the possibility of higher performance. But realizing higher performance through multiple cores assumes that the software knows about those cores, and will use them to run code segments in parallel.
Even when the software does that, the results are gated by Amdahl's Law. Sometimes called Amdahl's Curse, and named for computer pioneer Gene Amdahl, it lacks the upbeat outlook of Moore's Law. It says that the overall speedup from parallelization is 1 divided by the sum of the fraction of the task that cannot be parallelized and the remaining run time of the portion that can, once it has been spread across processors.
In other words, "It says that the serial portion of a computation limits the total speedup you can get through parallelization," says Russell Williams, chief architect for Photoshop at Adobe Systems in San Jose, Calif. "If 10% of a computation is serial and can't be parallelized, then even if you have an infinite number of infinitely fast processors, you could only get the computation to run 10 times faster."
Sunday, August 23, 2009
New IDC Viewpoint Research "Removing Storage-Related Barriers to Server and Desktop Virtualization" - Now Available for Download at DataCore Software
DataCore Software, a leading provider of storage virtualization, business continuity and disaster recovery software solutions, today announced that a new IDC Viewpoint research paper titled “Removing Storage-Related Barriers to Server and Desktop Virtualization” is now available for free download.
IDC Viewpoint Report Availability
The IDC Viewpoint report “Removing Storage-Related Barriers to Server and Desktop Virtualization” is available now and may be downloaded by going to: http://www.datacore.com/forms/form_request.asp?id=IDCview
The IDC Viewpoint discusses “an alternative to costly investments in high-end storage systems. It proposes using storage virtualization software to create scalable, robust SANs using equipment already in place. This hardware-independent approach complements server and desktop virtualization without compromising availability, speed, or project schedules…Just as importantly, it can significantly lower capital and operational expenditure for physical and virtual environments alike, making such transitional initiatives viable.” *
Extending Virtualization to the SAN
“In addition to server virtualization, industry analysts are now grasping the real benefits of storage virtualization,” states George Teixeira, president and CEO, DataCore Software. “Software-based storage virtualization is important because it helps IT organizations get more out of their existing hardware investments – and it does so by enabling IT organizations to turn existing storage arrays from multiple vendors into a shared pool of disk storage. Creating virtual storage pools out of existing storage investments, which easily marry with virtual servers, represents the real value that storage virtualization software delivers.”
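The mechanics behind that claim can be sketched in a few lines. The toy model below is not DataCore’s product or API; it is simply an illustration, using assumed names and numbers, of how free capacity on arrays from different vendors can be presented as a single pool from which virtual volumes larger than any one array are carved:

# Toy model: pool capacity from heterogeneous arrays, then carve virtual volumes.
class StoragePool:
    def __init__(self):
        self.arrays = {}      # array name -> free capacity in GB
        self.volumes = {}     # virtual volume name -> size in GB

    def add_array(self, name, free_gb):
        self.arrays[name] = free_gb

    def free_capacity(self):
        return sum(self.arrays.values())

    def create_volume(self, name, size_gb):
        if size_gb > self.free_capacity():
            raise ValueError("not enough pooled capacity")
        remaining = size_gb
        for array in sorted(self.arrays, key=self.arrays.get, reverse=True):
            take = min(remaining, self.arrays[array])
            self.arrays[array] -= take     # capacity is drawn from several arrays
            remaining -= take
            if remaining == 0:
                break
        self.volumes[name] = size_gb

pool = StoragePool()
pool.add_array("vendor_a_array", 2000)     # existing arrays already on the floor
pool.add_array("vendor_b_array", 1500)
pool.create_volume("vdi_datastore", 2500)  # bigger than any single array
print(pool.free_capacity())                # 1000 GB still free in the shared pool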
The research report covers the following topics:
What makes server, desktop, and storage virtualization attractive?
What are the Challenges to Implementing Virtualization?
Extending Virtualization to the SAN
Key Considerations When Choosing a Storage Virtualization Software Solution
Key IDC recommendations included in the report:
Choose storage virtualization software that is not tied to any one hardware vendor so that you will have the most latitude when selecting future devices.
Ensure that the storage virtualization software you pick for virtual systems also addresses your physical servers and competing server virtualization platforms. Otherwise, you may end up fragmenting the IT environment that you are eager to consolidate.
Adds Teixeira, “It is nice to see that after all the rush to embrace server virtualization there is now an increasing interest in storage virtualization. Most storage hardware vendors require customers to buy new storage arrays that support storage virtualization. But in these difficult economic times, it's hard to make an argument for capital expenditures that are so dear.”
* Source: “Removing Storage-Related Barriers to Server and Desktop Virtualization,” an IDC Viewpoint research document published as part of an IDC continuous intelligence service. Author: Carla Arend, European Storage Software and Services, IDC EMEA. Publication date: July 2009.
Free 30-day trial – Try DataCore Today!
For a free 30-day test drive, please visit: http://www.datacore.com/trialsoftware.
GlassHouse Technologies and Splunk Outline Steps to Secure Virtual Environments
GlassHouse Technologies, the leading independent IT infrastructure consulting and services firm, today announced the availability of a whitepaper that provides insight on securing virtual environments. Co-authored by Splunk, the foremost IT Search company, the paper entitled “Does Virtualization Change Your Approach to Enterprise Security?” focuses on how enterprises can mitigate security risks in their virtual settings in an efficient and cost-effective manner. Consultants from GlassHouse Technologies and representatives from Splunk will also be available to discuss these findings and other emerging virtualization trends at the VMworld conference on August 31 – September 3.
While organizations have rushed to implement virtualization and achieve its promised benefits, many have overlooked the strategy necessary to secure the resulting environment. To help enterprises address growing concerns over virtual security, the whitepaper focuses on best practices for ensuring that virtual components meet all organizational security protocols without hindering the performance of the infrastructure.
Specifically, the whitepaper explores the following components:
Aligning security strategy with business risk tolerance
Securing virtual machines, including treating them like physical machines
Security monitoring of virtual environments, including the administrative virtualization management interface and access to virtual machine files (a minimal monitoring sketch follows this list)
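To make the last item more concrete, here is a minimal, hypothetical sketch (Python) of one way such monitoring could be approached: scanning a datastore for VM files whose permissions expose them to accounts other than their owner. The whitepaper does not prescribe an implementation; the datastore path, file extensions and permission policy below are illustrative assumptions only.

    # Minimal sketch of one monitoring idea from the list above: flag virtual
    # machine files that are readable by accounts other than their owner.
    # The datastore path, extensions and permission policy are assumptions.
    import os
    import stat

    VM_STORE = "/vmfs/volumes/datastore1"        # hypothetical datastore mount
    VM_EXTENSIONS = (".vmx", ".vmdk", ".nvram")  # typical VM file types

    def find_overexposed_vm_files(root):
        """Yield VM files whose group/other permission bits allow access."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith(VM_EXTENSIONS):
                    continue
                path = os.path.join(dirpath, name)
                mode = os.stat(path).st_mode
                if mode & (stat.S_IRWXG | stat.S_IRWXO):
                    yield path, oct(mode & 0o777)

    if __name__ == "__main__":
        for path, perms in find_overexposed_vm_files(VM_STORE):
            print(f"WARNING: {path} has permissive mode {perms}")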
These specific strategies will be discussed in greater detail by Splunk and GlassHouse consultants at VMworld. This year’s conference will bring together attendees from across the globe to discuss trends and challenges in the virtualization space. Look for the GlassHouse “Conversation Cloud” at the show to hear more about virtual security, as well as the consultants’ views on emerging cloud trends in storage, security and data center management. GlassHouse will also host an event at the show bringing together customers, partners and industry experts to continue VMworld discussions.
VMware vSphere Training Video Now Available from TrainSignal
Great news! For those of you looking at, moving to or already running VMware's latest virtualization platform, vSphere 4.0, TrainSignal has announced the launch of its latest virtualization training video, VMware vSphere Training.
Like other virtualization training series from TrainSignal, this one was created and presented by David Davis. The course contains 17 hours of video training delivered in multiple formats - AVI, WMV, iPod/iPhone, and MP3 - so there should be a format to please just about everyone. The series starts with the planning and implementation of vSphere 4 and moves all the way into advanced features like Fault Tolerance (FT), Data Recovery, and vDS.
I've been at this virtualization game now for more than 10 years. And I must say, there are few people in this industry who can create and pull off these types of training videos as well as David Davis and TrainSignal. These TrainSignal videos are put together extremely well - top notch in my mind. And David Davis has a unique way of explaining his topics in a single video series that reaches across a wide audience: beginner, novice and advanced users alike. No matter what path you find yourself on in your virtualization journey, I believe there is something for everyone in these videos. And I highly recommend them.
You can find out more information and purchase the new TrainSignal VMware vSphere Training video now.
TrainSignal will also be at VMworld this year.
Verizon Business Helps Customers Unlock the Power of Virtualization
With virtualization in high demand by enterprises looking to boost efficiency and flexibility while controlling costs, Verizon Business is offering a series of tips for effectively planning and organizing the often-complex task of implementing virtualization technology.
Virtualization uses technology to remove the physical barriers associated with servers and applications, enabling the consolidation or replacement of servers, storage, network and other physical devices. As a result, companies can better use computing capacity and drive more value from IT resources as well as consolidate data centers and lower energy consumption.
According to analysts at IDC, virtualization is one of the most sought-after IT technologies today, with services aimed at delivering virtualization projected to grow to nearly $16 billion by 2013, up from $8.7 billion in 2008.
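For readers who want to sanity-check that projection, the implied compound annual growth rate is straightforward to work out. The short sketch below is purely illustrative and uses only the figures quoted above.

    # Illustrative arithmetic only, using the IDC figures quoted above.
    base_2008 = 8.7e9     # virtualization services market in 2008 (USD)
    target_2013 = 16e9    # projected market in 2013 (USD)
    years = 2013 - 2008

    cagr = (target_2013 / base_2008) ** (1 / years) - 1
    print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 13%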
Zeus Highlights Results of VMware vSphere 4 Test
Zeus Technology, the only software-based application traffic management company, today announced the results of a performance test on VMware vSphere™ 4.
Compared to the performance of Zeus Traffic Manager software running directly on standard hardware, the Zeus Virtual Appliance delivered outstanding results. The Zeus software on VMware vSphere™ 4 out-performed the native hardware by 15 - 20% in some tests, while achieving at least 85% - 90% of native performance in every test case.
The tests considered network-limited activities (requests-per-second, bandwidth and caching performance) and CPU-limited activities (Secure Socket Layer performance). Compared to VMware ESX 3.5, VMware vSphere™ 4 was on average 25% faster in all network tests.
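As an aside, figures such as “85% - 90% of native” are simply ratios of virtualized results to bare-metal results for each benchmark. The sketch below shows the arithmetic with made-up throughput numbers; it does not use Zeus's actual measurements.

    # Illustration of how a "percent of native" figure is derived from raw
    # benchmark numbers. The throughput values are invented, not Zeus data.
    native = {"requests_per_sec": 42_000, "ssl_tps": 9_500}
    virtual = {"requests_per_sec": 47_500, "ssl_tps": 8_300}

    for metric, native_value in native.items():
        ratio = virtual[metric] / native_value
        print(f"{metric}: {ratio:.0%} of native performance")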
David Day, CTO, Zeus Technology, comments: “We have recently undertaken some rigorous testing on VMware vSphere™ 4 and have achieved outstanding results. These tests demonstrate that the Zeus Virtual Appliance software on VMware vSphere™ 4 can deliver much higher performance than is required by the vast majority of websites, even during peak periods. The analysis provides further evidence that using Zeus in a virtualized environment to handle load-balancing and application traffic management is achievable without the need to compromise on performance.”
“VMware provides the ideal infrastructure for customers to efficiently run their business-critical applications and for technology partners like Zeus, to deploy complementary solutions for application traffic management in the form of virtual appliances,” said Shekar Ayyar, vice president, infrastructure alliances, VMware. “This new benchmark from Zeus further validates that applications can run with superior performance in VMware Virtualized environments.”
The performance figures were obtained using Zeus software running on a Dell PowerEdge 2950 server equipped with an Intel® quad-core Xeon® E5450 processor.
Further information, including the performance figures the Zeus Virtual Appliance software achieved on VMware vSphere™ 4, is available from Zeus Technology.
New Distributed Desktop Virtualization to Transform Enterprise Desktop Management
Wanova, Inc. today announced Distributed Desktop Virtualization (DDV) - an entirely new architecture that transforms how companies manage, support and protect desktops and laptops, particularly remote and mobile endpoints. The Wanova DDV solution centralizes the entire desktop contents in the data center for management and protection purposes while distributing the execution of desktop workloads to the endpoints for superior user experience. In related news, the company has emerged from stealth mode and announced $13 million in A-round funding.
“Despite its promises, adoption of desktop virtualization has been limited, largely due to the constraints of today’s point solutions. The problem can’t be solved solely by targeting the client, the server or even the WAN,” said Issy Ben-Shaul, CTO, Wanova. “Our virtualization architecture offers a new approach that integrates all three components – IT managers get powerful centralized management and control, the network is utilized efficiently, and remote workers get the performance they expect."
Because of this unique architecture, Wanova has demonstrated the ability to significantly reduce IT costs and improve support service level agreements. In one field test, Wanova was able to re-image an entire desktop over the WAN in just seven minutes, and conduct a complete PC restore over the WAN with the end-user up and running in 10 minutes. Typical IT support processes might take hours or even days to diagnose and repair the same computer.
"We’ve been seeing a gradual shift towards worker mobility evidenced by the notebook sales beginning to surpass those of desktop PCs. At the same time that workers are becoming increasingly mobile and distributed, IT is being tasked with reducing costs and increasing control and compliance. Wanova’s new architecture is a holistic solution that addresses these challenges and can generate serious attention in distributed enterprises,” said Michael Rose, Research Analyst at IDC.
How Wanova's Distributed Desktop Virtualization Works
Wanova’s Distributed Desktop Virtualization provides a Centralized Virtual Desktop (CVD) in the data center. At the endpoint, Wanova’s DeskCache™ client executes a complete, local desktop instance, while Distributed Desktop Optimization (DDO) enables real-time, bi-directional transfers between the CVD and the DeskCache. Wanova also provides single image management, including mass provisioning and continuous enforcement of the base image on all computers, while enabling persistent personalization including user-installed applications.
Execution of desktop workloads is performed directly on the desktop or the laptop using the local DeskCache, resulting in a superior end-user experience with native performance and full support for offline use. Additionally, Wanova does not require a client hypervisor, so IT benefits from a complete solution that does not add additional management complexity.
Wanova’s DDV architecture is unique in that it combines advanced network optimization, desktop streaming over the WAN and image layering technologies to provide an extremely fast and optimal transport of desktop workloads. It is the first desktop virtualization approach that effectively bridges the gap between centralized management and distributed execution. Technical details can be found at www.wanova.com/pages/wanova-products.html.
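For illustration only, the sketch below shows a generic block-level synchronization idea of the kind that makes bi-directional transfers between a central image and an endpoint cache efficient: hash fixed-size blocks and ship only the blocks that differ. This is not Wanova's algorithm, and the block size and hash choice are arbitrary assumptions.

    # Generic block-level synchronization between an endpoint cache and a
    # central image: hash fixed-size blocks and ship only the ones that differ.
    # Not Wanova's algorithm; block size and hash choice are assumptions.
    import hashlib

    BLOCK_SIZE = 64 * 1024  # 64 KiB blocks, chosen arbitrarily

    def block_hashes(data):
        """Return one SHA-256 digest per fixed-size block of the image."""
        return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def changed_blocks(local_image, remote_hashes):
        """Yield (index, block) pairs that differ from the remote copy."""
        for i, digest in enumerate(block_hashes(local_image)):
            if i >= len(remote_hashes) or digest != remote_hashes[i]:
                yield i, local_image[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]

    # Example: only the second block differs, so only it would cross the WAN.
    central = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE
    endpoint = b"A" * BLOCK_SIZE + b"C" * BLOCK_SIZE
    to_send = list(changed_blocks(endpoint, block_hashes(central)))
    print(f"{len(to_send)} of {len(block_hashes(endpoint))} blocks need transfer")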
Wanova’s solution is currently in field testing with early customers. Wanova will also be introduced in the New Innovators Pavilion at the VMworld 2009 Conference, August 31-September 3 at the Moscone Center in San Francisco.
The SCO Group Releases Virtualized Version of Popular OpenServer 5.0.7 UNIX Operating System
The SCO Group, Inc., a leading provider of UNIX software technology and mobility solutions, today announced that it has released OpenServer 5.0.7V, a virtualized version of its popular UNIX operating system that is optimized for the VMware environment. OpenServer 5.0.7V gives customers a familiar environment while increasing the power and efficiency of a virtualized infrastructure. With OpenServer's renowned stability and reliability now available in a virtualized environment, customers can avoid costly migration and retooling in order to take advantage of newer hardware and applications.
"With OpenServer 5.0.7V, SCO is protecting our customers' investment in their OpenServer applications by extending their life cycle without the need to migrate," said Jeff Hunsaker, president and chief operating officer, SCO Operations. "This provides a superior Total Cost of Ownership for an OpenServer 5 application while at the same time taking advantage of the significant performance gains with new, modern hardware. We expect, in the near future, to release virtualized versions for OpenServer 6 and UnixWare 7.1.4 as well."
OpenServer 5.0.7V is released as a Virtual Appliance image that can be easily imported onto VMware ESX 3.5, VMware ESXi 3.5 and VMware Workstation 6.5.2 for Windows® platforms. Importation of the Virtual Appliance usually takes between 10 and 60 minutes to complete, depending on configuration, and configuration of the imported Virtual Appliance takes a further 5-10 minutes. Once installed, the system behaves just like a natively installed OpenServer 5.0.7 system with all of the latest maintenance installed. For convenience, many of the VMware tools have also been included to improve integration between SCO OpenServer 5.0.7V and the host VMware system.
"Using SCO OpenServer 5.0.7 as a base, SCO Engineering has built an optimized Virtual Appliance for VMware," said Andy Nagle, senior director of development, The SCO Group. "This Virtual Appliance uses a subset of existing and updated device drivers that provides optimal performance in a virtual environment."
For more information about OpenServer 5.0.7V, please visit: http://sco.com/products/unix/virtualization/
AFORE Unveils Long Distance Virtualization
AFORE Solutions, Inc., today unveiled the first purpose-built networking solution for extending virtualization between geographically distributed data centers. Built upon the ASE3300 platform, the company's new Virtual Fiber and Virtual Wire capabilities enable the migration of virtual machines and storage across IP and Ethernet wide area networks. This technology allows enterprises and cloud computing/disaster recovery service providers to establish extended virtual data centers, creating new levels of availability and paving the way for advanced hosting and managed service offerings.
"Enterprises struggle with the high cost and limited availability of dark fiber, yet increasingly need to interconnect data centers within the enterprise or between their data centers and cloud service providers," states Jonathan Reeves, AFORE's Chairman and Chief Strategy Officer. "Our Virtual Fiber and Virtual Wire technology provides a significant advancement for enterprises and cloud computing operators alike enabling data centers to be extended across great distances and bandwidth to be re-allocated on demand to meet changing application requirements."
Ensuring seamless virtual machine (VM) migration over a wide area network creates specific challenges. VM migration events require significant bandwidth and resources, with low latency and secure Layer 2 connectivity between hosts. Previous solutions limited wide area connectivity to dark fiber, which can be costly and impractical for a wide range of applications and business models. AFORE's Virtual Fiber technology enables lossless and secure communications over IP and Metro Ethernet wide area networks, while Virtual Wire provides transparent Layer 2 connectivity with end-to-end flow control and dynamic packet re-sizing to adapt data center packet sizes to wide area packet network capabilities, as required by FC, FCoE or jumbo-frame-based applications. The solution also provides time-of-day re-allocation of bandwidth, enabling connectivity between sites to be increased or decreased as required.
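As a purely conceptual illustration of what "dynamic packet re-sizing" has to accomplish, the sketch below splits a jumbo-frame-sized payload into chunks that fit a smaller WAN MTU and reassembles them on the far side. It is not AFORE's implementation; the sizes and per-packet header budget are assumptions.

    # Conceptual "packet re-sizing": split a jumbo-frame payload into chunks
    # that fit a smaller WAN MTU, then reassemble. Sizes are typical defaults,
    # not AFORE product parameters.
    JUMBO_PAYLOAD = 9000   # bytes carried by a jumbo Ethernet frame
    WAN_MTU = 1500         # bytes per packet on the wide area path
    HEADER_OVERHEAD = 40   # assumed per-packet header budget for this example

    def resize(payload, mtu, overhead):
        """Split a payload into chunks small enough to traverse the WAN."""
        chunk = mtu - overhead
        return [payload[i:i + chunk] for i in range(0, len(payload), chunk)]

    def reassemble(chunks):
        return b"".join(chunks)

    data = bytes(JUMBO_PAYLOAD)  # zero-filled stand-in for a jumbo payload
    chunks = resize(data, WAN_MTU, HEADER_OVERHEAD)
    assert reassemble(chunks) == data
    print(f"{len(data)} bytes sent as {len(chunks)} WAN-sized packets")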
AFORE will be demonstrating long distance virtualization at VMworld, booth 1438J, August 31 - September 3, 2009, at the Moscone Center in San Francisco.
Virtual Fiber and Virtual Wire technology are immediately available with AFORE's ASE3300 service delivery platform.
Rackspace Private Cloud Leverages VMware For Enterprise Computing Offering
Rackspace Hosting has announced its new Private Cloud offering, which allows customers to run the centrally managed VMware virtualisation platform on private, dedicated hardware environments.
Rackspace recognises the demand from enterprises for a more flexible and scalable hosting solution. Although multi-tenant cloud solutions are very flexible and cost-effective, they are not always right for every segment. The Rackspace Private Cloud’s single-tenant architecture offers increased control and security, while still maintaining the scalability, flexibility and resource optimisation that make shared cloud offerings so compelling.
Rackspace Private Cloud is an evolution of its popular dedicated virtual server (DVS) offering within the managed hosting business unit. In the last year, revenue from virtualisation solutions has grown substantially, driven mainly by the increased flexibility, improved asset utilisation and lower capital and operating costs that VMware's virtualisation provides.
NetEx Takes HyperIP Virtual with Broad Application Support for WAN Optimization on VMware Infrastructures
NetEx today announced that its HyperIP for VMware offers the broadest range of third-party application support, including all of the leading providers of disaster recovery, data migration and replication software, such as Data Domain, Dell/EqualLogic, EMC, FalconStor, Hewlett-Packard/LeftHand, Hitachi Data Systems, IBM, Microsoft, Network Appliance and many others.
The move by NetEx to virtualize the HyperIP WAN optimization software is part of an industry trend with more companies opting to deploy applications as software-only implementations to take advantage of the cost, scalability and flexibility of the VMware infrastructure. Virtualizing applications for VMware eliminates the need for specialized appliances while allowing IT organizations to quickly re-allocate computing and storage resources as needed to accommodate business priorities.
HyperIP for VMware is the industry's only software-based WAN optimizer that operates on a VMware ESX server to boost the performance of third-party storage replication applications. Virtual HyperIP mitigates the TCP performance issues that are common when moving stored data over wide area network connections because of bandwidth restrictions, latency due to distance and/or router hop counts, packet loss and network errors. HyperIP increases the end-to-end performance of replication applications by 3 to 10 times and reduces VMotion and Storage VMotion transfer windows by utilizing 80 to 90 percent of the available bandwidth between data centers or branch offices, at rates up to OC-12.
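A rough way to see why gains of that order are plausible: un-tuned TCP throughput is bounded by the window size divided by the round-trip time, while an optimizer targets a fixed share of the physical link. The sketch below works through one illustrative case; the link speed, RTT, window size and utilization target are assumptions, not NetEx measurements.

    # Why latency throttles un-tuned TCP replication, and what high link
    # utilization implies. All numbers are illustrative assumptions.
    link_mbps = 45.0              # a DS-3 class WAN link
    rtt_ms = 60.0                 # assumed round-trip time between sites
    tcp_window_bytes = 64 * 1024  # default-sized TCP window, no scaling

    # Un-optimized TCP throughput is bounded by window size / round-trip time.
    window_limited_mbps = (tcp_window_bytes * 8) / (rtt_ms / 1000) / 1e6
    # An optimizer that sustains roughly 85% of the physical link.
    optimized_mbps = 0.85 * link_mbps

    print(f"window-limited TCP  : {window_limited_mbps:5.1f} Mbps")
    print(f"85% link utilization: {optimized_mbps:5.1f} Mbps")
    print(f"speed-up            : {optimized_mbps / window_limited_mbps:.1f}x")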
NetEx was one of the early adopters in recognizing the impact of the virtual infrastructure, how it could benefit IT operations, and speed up data migration and replication operations when combining HyperIP for VMware with data movement applications from top tier IT storage vendors. VMware has enhanced the ESX infrastructure by redesigning the Hypervisor to support multiple cores, opening the way for all applications to be offered as virtualized pure software plays and eliminating the need for expensive appliances and expensive IP network upgrades.
The applications supported by HyperIP for VMware include: DataCore AIM, Data Domain Replicator Software; Avamar, SRDF Adaptive Copy, SRDF/DM, SRDF/A (DMX), Centera Replicator, and Celerra Replicator, RecoverPoint CRR and DL3D from EMC; Dell/EqualLogic PS Series Replication; FalconStor Software’s IPStor, Disksafe and FileSafe; HP/Lefthand Networks SANiQ; TrueCopy for iFCP from HDS; IBM Tivoli Storage Manager and Global Mirror (FCIP), Microsoft NetBios and Data Protection Manager; SnapMirror and SnapVault from NetApp; NSI DoubleTake; DataGuard, DB Rsync and Streams from Oracle; SANRAD Global Data Replication; Softek Replicator; NetBackup, ReplicationExec and Volume Replicator by Symantec; Veeam Replication; and VMware VMotion. In addition, HyperIP fully supports WAN optimization for the industry standard FTP and iSCSI protocols.
Wednesday, August 12, 2009
How to Maximize Performance and Utilization of Your Virtual Infrastructure
Most Fortune 1000 companies are currently between 15 and 30 percent virtualized. There are still many obstacles to overcome to move more virtualization projects forward. The biggest virtualization challenge facing organizations is how to manage the virtual infrastructure. Here, Knowledge Center contributor Alex Bakman explains how IT staffs can dramatically improve performance and utilization efficiencies in their virtualization projects.
Organizations today are rapidly virtualizing their infrastructures. In doing so, they are experiencing a whole new set of systems management challenges. These challenges cannot be solved with traditional toolsets in an acceptable timeframe to match the velocity at which organizations are virtualizing. In a virtual server infrastructure where all resources are shared, optimal performance can only be achieved with proactive capacity management and proper allocation of shared resources.
The biggest challenge is finding either the time or the automated technology to do this. Not allocating enough resources can cause bottlenecks in CPU, memory, storage and disk I/O, which can lead to performance problems and costly downtime events. However, over-allocating resources can drive up your cost per virtual machine, making an ROI harder to achieve and halting future projects.
To address this, organizations should consider a life cycle approach to performance assurance in order to proactively prevent performance issues, starting in preproduction and continuing with monitoring of the production environment. By modeling, validating, monitoring, analyzing and charging, the Performance Assurance Lifecycle (PAL) addresses resource allocation and management. It significantly reduces performance problems, ensures optimal performance of the virtual infrastructure and helps organizations to continually meet service-level agreements (SLAs).
The following are the five components of the PAL. These components allow organizations to maximize the performance and utilization of their virtual infrastructures, while streamlining costs and delivering a faster ROI.
Component No. 1: Modeling
Modeling addresses everything from preproduction planning to post-production additions and changes to the virtual infrastructure. With capabilities to quickly model thousands of "what if" scenarios, from adding more virtual machines to changing configuration settings, IT staff can immediately see whether resource constraints will be exceeded and whether performance issues will occur. In this way, modeling provides proactive prevention.
Four common modeling scenarios are:
1. Seeing the effect on resource capacity and utilization of adding a new host or virtual machine, or of removing existing ones (a minimal sketch of this scenario appears after the list).
2. Determining what will happen when a host is suspended for maintenance or a virtual machine is powered down.
3. Pre-testing VMotion scenarios to make sure sufficient resources exist.
4. Assessing how performance will be affected if resource changes are made to hosts, clusters and/or resource pools.
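A minimal sketch of scenario No. 1, assuming a simple aggregate model in which projected utilization is compared against a fixed headroom threshold. The capacity numbers, VM reservations and 80 percent limit are illustrative assumptions, not part of any particular product.

    # Minimal "what if" check for scenario No. 1: does the cluster keep enough
    # headroom after adding VMs? Capacities, reservations and the 80% limit
    # are illustrative assumptions.
    HEADROOM_LIMIT = 0.80  # stay below 80% of cluster capacity after the change

    cluster = {"cpu_ghz": 96.0, "mem_gb": 384.0}     # total cluster capacity
    in_use = {"cpu_ghz": 51.0, "mem_gb": 240.0}      # current utilization
    new_vms = [{"cpu_ghz": 2.0, "mem_gb": 8.0}] * 6  # proposed additions

    def what_if(cluster, in_use, new_vms, limit=HEADROOM_LIMIT):
        for resource in cluster:
            projected = in_use[resource] + sum(vm[resource] for vm in new_vms)
            utilization = projected / cluster[resource]
            status = "OK" if utilization <= limit else "CONSTRAINT EXCEEDED"
            print(f"{resource}: {utilization:.0%} projected -> {status}")

    what_if(cluster, in_use, new_vms)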
Component No. 2: Validating
While modeling "what if" scenarios is an important first step to continually ensuring optimal performance, it is equally important to validate that changes will not have a negative impact on infrastructure performance.
Validation spans the modeling and monitoring stages of the PAL, because it is equally critical to validate performance-impacting changes in preproduction and to continually monitor and validate performance over time. If you cannot validate whether a change will affect infrastructure performance positively or negatively, there is significant risk in making that change.
Component No. 3: Monitoring
The ongoing monitoring of shared resource utilization and capacity is essential to knowing how the virtual environment will perform. When monitoring resource utilization, IT staff will know whether resources are being over- or underutilized. Not allocating enough resources (based on usage patterns and trends derived from 24/7 monitoring) will cause performance bottlenecks, leading to costly downtime and SLA violations. Over-allocating resources can drive up the cost per virtual machine, making an ROI much harder to achieve.
By continually monitoring shared resource utilization and capacity in virtual server environments, IT can significantly reduce the time and cost of identifying the capacity bottlenecks that cause performance problems, track the top resource consumers in the environment, receive alerts when capacity utilization trends exceed thresholds, and optimize performance to meet established SLAs.
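One simple way to turn such monitoring data into an early warning, sketched below under the assumption of a roughly linear utilization trend: fit a trend line to recent samples and alert when the projection crosses a threshold within a look-ahead window. The samples, threshold and window are made up for illustration.

    # Simple trend-based alert: fit a linear trend to recent utilization
    # samples and warn when the projection crosses a threshold within a
    # look-ahead window. Samples, threshold and window are assumptions.
    THRESHOLD = 0.85      # alert when projected utilization exceeds 85%
    LOOKAHEAD_DAYS = 14

    daily_cpu_utilization = [0.58, 0.60, 0.63, 0.64, 0.67, 0.70, 0.72]

    def days_until_threshold(samples, threshold, lookahead):
        """Return how many days until the linear trend crosses the threshold."""
        n = len(samples)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
                 / sum((x - mean_x) ** 2 for x in xs))
        for day in range(1, lookahead + 1):
            if mean_y + slope * (n - 1 + day - mean_x) > threshold:
                return day
        return None

    hit = days_until_threshold(daily_cpu_utilization, THRESHOLD, LOOKAHEAD_DAYS)
    if hit is not None:
        print(f"ALERT: CPU utilization projected to exceed {THRESHOLD:.0%} "
              f"in about {hit} day(s)")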
Hyper9 VOS Helps Battle Virtual Machine Sprawl
Hyper9 is rolling out the second version of its flagship Virtualization Optimization Suite, which is designed to give businesses improved insight into their virtualized environments and better ways to manage their VMs. While many businesses have embraced virtualization to save money in such areas as hardware, space and power, the result has been a virtualization environment that is not always easy to manage. Hyper9 VOS offers a host of new features tied together by an intuitive user interface.
Hyper9 officials want to give businesses better insight into their virtual environments.
The company July 29 rolled out the second generation of its flagship Virtualization Optimization Suite—or VOS—which is designed to help businesses create virtual environments that are suitable to their business needs, according to Bill Kennedy, executive vice president of research and development for Hyper9.
Enterprises over the past few years have embraced virtualization with the hope of reducing hardware, space and power costs by moving workloads onto virtual machines, Kennedy said in an interview. However, those same businesses are now finding that costs generated by the “VM sprawl” are going up, causing what Kennedy calls “ROI erosion.”
“It’s become harder to manage [these virtual environments],” he said.
Hyper9’s VOS is designed to give businesses greater insight into those environments, enabling them not only to see which VMs are running which workloads, but also to more easily search, organize and analyze data from the virtual environments. That data is displayed through an intuitive user interface, Kennedy said.
A recent survey of customers by the vendor found that at least 20 percent of existing VMs are superfluous to a company’s operations, which is resulting in businesses spending more money than needed on their virtual environments. Through VOS, businesses can more easily find those underutilized or unneeded VMs.
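The sketch below illustrates, in the simplest possible terms, how underutilized VMs might be flagged from collected metrics; it is not Hyper9's implementation, and the metric names, sample values and thresholds are assumptions.

    # Not Hyper9's implementation: a bare-bones illustration of flagging
    # underutilized VMs from collected metrics. Names, sample values and
    # thresholds are assumptions.
    CPU_IDLE_THRESHOLD = 0.05    # average CPU below 5%
    NET_IDLE_THRESHOLD = 1.0     # average network I/O below 1 MB per day

    inventory = [
        {"name": "web-01", "avg_cpu": 0.42, "net_mb_per_day": 5400.0},
        {"name": "test-07", "avg_cpu": 0.01, "net_mb_per_day": 0.2},
        {"name": "old-app", "avg_cpu": 0.03, "net_mb_per_day": 0.0},
    ]

    suspects = [vm["name"] for vm in inventory
                if vm["avg_cpu"] < CPU_IDLE_THRESHOLD
                and vm["net_mb_per_day"] < NET_IDLE_THRESHOLD]

    print(f"{len(suspects)} of {len(inventory)} VMs look underutilized: {suspects}")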
Hyper9 earlier this year rolled out the first version of its VOS offering, which was primarily aimed at virtualization administrators and offered some data collection capabilities, Kennedy said.
The latest version offers greater business insights and analytics, and is aimed at a wider array of people, including data center administrators as well as virtualization administrators.
A key new feature is Hyper9’s Workspaces, which lets users organize and share content, as well as gain better insight into the virtual machines and how they’re being used, Kennedy said.
Hyper9 also put in a feature called Active Links, which gives users one-click access to everything from data to reports to common tasks.
“You can find rogue VMs [that are not being used or are underutilized] through one click,” he said.
There also is automated monitoring and alerting, which gives users a heads-up on such issues as change tracking, rogue VMs and VM sprawl.
Hyper9’s VDMA feature analyzes historical performance and configuration data.
VM6 Software Releases Virtual Machine ex Server 2.0
A virtualization solution from VM6 Software comes with features like rebuild functionality and improvements in the network components layer.
Virtualization company VM6 Software announced the release of Virtual Machine ex (VM6 VMex) Version 2.0 for remote office and branch office (ROBO) locations. VMex leverages Microsoft Hyper-V to create an internal cloud to provision, consolidate, manage and protect all of the ROBO workloads. The company said the solution does not require any specialized skill sets beyond those of Microsoft Certified Systems Engineers.
New features in Virtual Machine ex 2.0 include monitoring and alert capabilities that are fully integrated into the management console, so administrators can use the predefined templates or build their own to capture errors and write to log files, send e-mails or run a script; advanced security settings that let administrators delegate read, write or limited access to the various objects in the VMex cloud; and improved performance for virtual shared storage rebuilds.
"Enterprise organizations that have realized the benefits of virtualization in the data center are struggling with ways to extend those same benefits to remote locations and branch offices, as the costs are too high and the specialized skill sets required are unavailable or cost-prohibitive," said VM6 founder and CEO Claude Goudreault. "Enterprise leaders now seek solutions that make it easier to manage, provision, consolidate and protect the workloads across all of their locations. VM6 VMex addresses the challenges of virtualization adoption in remote office locations, providing an affordable and easy way to create a competitive advantage."
The VMex virtual SAN rebuild function automatically rebuilds a virtual SAN in less than 5 minutes without impacting performance, the company said, even if the RAID was unavailable or down for up to a week. The solution also boasts reduced setup time, with an improved install wizard accelerating the installation of VMex on a two-node cluster to less than 15 minutes.
VM6 has also improved the network components layer. Removing the dependency on PGM and adding VMex proprietary network drivers eliminated stress on the Windows kernel, adding to performance and stability, the company claims. Rounding out the features are integrated quota management and thin provisioning, whereby VMex administrators can provision more storage than is physically available and set quota alerts to prevent over-allocation of physical resources.
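To illustrate the bookkeeping behind thin provisioning and quota alerts, the sketch below compares the storage promised to VMs with what has physically been written and raises an alert near exhaustion. The capacities and the 90 percent alert level are assumptions, not VM6 defaults.

    # Thin-provisioning bookkeeping: compare storage promised to VMs with what
    # has physically been written, and alert near exhaustion. Capacities and
    # the 90% alert level are assumptions, not VM6 defaults.
    PHYSICAL_TB = 10.0
    ALERT_LEVEL = 0.90   # warn when physical usage passes 90%

    provisioned_tb = [2.0, 3.0, 4.0, 4.0, 2.5]       # logical sizes promised
    actually_written_tb = [1.1, 2.2, 3.0, 1.8, 1.0]  # data really on disk

    overcommit_ratio = sum(provisioned_tb) / PHYSICAL_TB
    physical_usage = sum(actually_written_tb) / PHYSICAL_TB

    print(f"over-commit ratio: {overcommit_ratio:.2f}x")
    print(f"physical usage   : {physical_usage:.0%}")
    if physical_usage >= ALERT_LEVEL:
        print("ALERT: physical storage nearly exhausted; add capacity or migrate VMs")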
Christian Boivin, R&D director at JLR Real Estate Data Builders, said the company has been using VM6 VMex 1.0 since it became available and is pleased to see this latest version, specifically for its integrated monitoring and alerting.
"As a search engine for real estate and property information, it's critical that our IT infrastructure be robust and available at all times, while being flexible as we're essentially transforming the mission of our servers between day and night," he said. "When we looked at available solutions in the market, they were all at least five times more expensive and required a lot of independently developed solutions to work together, which further added to the complexity.”
Do Hyper-V's Improvements Make It a Stronger VMware Rival?
Hyper-V, part of the Windows Server 2008 R2 platform, provides some improvements that were absolutely necessary for Microsoft to even think of competing with VMware's latest offerings. Are they enough? eWEEK Labs' early look at the new Hyper-V shows that Microsoft still has a lot of ground to cover.
Microsoft released Windows Server 2008 R2 with a newly improved version of Hyper-V. Even so, VMware is still miles ahead in terms of the features and innovation that lay the foundation for sustainable virtualization for midsize and large enterprises.
In fact, I think VMware—with its just-released vSphere 4—has raised the bar so high that Microsoft's best hope is to be the low-cost leader. But while cheap, "You get what you pay for" products might work in a consumer category, they won't play too well in IT shops that depend on high-performance data operations to stay in business.
That said, here's what's new and compelling in Hyper-V.
The previous version of Hyper-V had Quick Migration to move virtual machines from one physical host to another. Now, Quick Migration is gone and Live Migration is here.
In the weeks ahead, I'll be conducting extensive Live Migration tests on the Labs' Hewlett-Packard and Sun Xeon 5500 ("Nehalem")-based systems. But, for now, let's just say that Quick Migration was so inferior to VMware's VMotion that Microsoft had to shore up this function in Hyper-V.
I suspect that Live Migration has some catching up to do with similar VMware features that have been in field use for several years. When it comes to failover, high availability and load balancing, there is no substitute for production experience. This is one area in which cheap and OK is trumped by market-priced and reliable.
Cluster Shared Volumes are also improved in this version of Hyper-V and play an important role in making VMs highly available. The fact that these clustering enhancements support Live Migration makes them important, but they are by no means innovative.
Included among the improvements is a best-practices tool to help ensure proper system configuration. I'm anxious to get started putting a clustered Hyper-V environment together here in the lab. I'll be making extensive use of this tool to see how helpful it is in putting my storage and computing resources into correct alignment.
Microsoft does have a leg up on VMware in at least one area.
Sometime in the next couple of months, Microsoft will release the next version of its System Center Virtual Machine Manager. Microsoft has years of experience in managing large numbers of Windows systems, as well as an almost equal number of years in working with third-party tool makers. Even though most of Microsoft's management experience is with Microsoft-only tools, this could be the edge it needs to win over the virtualization hearts and minds of IT managers, who will soon be measured on how well they manage their virtualized data centers (if they aren't already).
Look for my review of Hyper-V as part of eWEEK Labs' extensive coverage of the Windows Server 2008 R2 platform and Windows 7.
Technical Director Cameron Sturdevant can be reached at csturdevant@eweek.com.
Microsoft Azure Pulling Out of Northwest Due to Taxes
Microsoft Azure, the company's public cloud-based developer platform, will no longer distribute applications to developers from its northwestern hub, as a "change in local tax laws" compels Microsoft to migrate those applications to other geographies prior to the service's commercial launch in November. Microsoft hopes it can seize market share in the cloud-based services space by persuading developers to adopt the platform quickly.
Microsoft's Azure, its public cloud-based developer platform, will soon offer developers one less geographical region from which to run their applications. Azure relies on a worldwide network of distributed data centers to deliver SAAS (software as a service) to users.
"Due to a change in local tax laws, we’ve decided to migrate Windows Azure applications out of our northwest data center prior to our commercial launch this November," announced an Aug. 4 posting on the official Windows Azure blog. "This means that all applications and storage accounts in the 'USA – Northwest' region will need to move to another region in the next few months, or they will be deleted."
The posting added: "Around the time that the 'USA – Northwest' option is removed, we will also provide an automated tool available on the Windows Azure portal to migrate projects." An e-mail will be sent to CTP participants when the tool becomes available on that as-yet-unannounced date.
In addition to "USA – Northwest," Azure offers "USA – Southwest" and "USA – Anywhere" as geographies from which users can run Azure applications. Microsoft plans on adding further geographies at an undetermined time in the future. Resource Library:
Microsoft announced at its Worldwide Partner Conference in July that Azure would be available for free until this year’s Professional Developers Conference in November. After that point, customers will have three different options for paying for the service: a pay-as-you-go model, a subscription format or via volume licensing.
For all three types of service, users will pay 10 cents per gigabyte for incoming data, and 15 cents for outgoing data. The "consumption" model will cost 12 cents per hour for infrastructure usage, and another 15 cents per gigabyte for storage. The business edition of the SQL Azure database will cost $99.99.
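To see how those rates add up under the consumption model, here is a back-of-the-envelope estimate in Python using only the per-unit figures quoted above. The workload numbers in the example are invented for illustration, and a real bill may include charges not listed here:

```python
# Rough monthly cost estimate under the Azure "consumption" model,
# using only the per-unit rates quoted in the article.

COMPUTE_PER_HOUR = 0.12   # $ per instance-hour
STORAGE_PER_GB   = 0.15   # $ per GB stored
INGRESS_PER_GB   = 0.10   # $ per GB of incoming data
EGRESS_PER_GB    = 0.15   # $ per GB of outgoing data

def monthly_cost(instances, storage_gb, ingress_gb, egress_gb, hours=730):
    """Estimate one month's bill for a simple deployment."""
    compute  = instances * hours * COMPUTE_PER_HOUR
    storage  = storage_gb * STORAGE_PER_GB
    transfer = ingress_gb * INGRESS_PER_GB + egress_gb * EGRESS_PER_GB
    return compute + storage + transfer

# One instance running all month, 50 GB stored, 100 GB in, 200 GB out:
print(f"${monthly_cost(1, 50, 100, 200):.2f}")   # roughly $135.10
```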
Microsoft’s initial price cuts are designed to build market momentum for the platform, which will face competition from similar cloud-based offerings by Amazon.com and Google. Doug Hauger, general manager of Microsoft Azure, told an audience at the Worldwide Partner Conference that Microsoft would offer discounts for partners, as well as allow partners to charge customers for applications and services built using the platform.
Azure will allow developers to "deliver solutions very, very quickly," Hauger added at the time.
Azure and other cloud-based services still face structural issues, such as unexpected downtime, but analysts also feel that their presence could ease many IT professionals’ reservations about running parts of their operations in a public cloud.
Microsoft has no plans to make Azure available to run in an enterprise's private cloud. Even so, some of Azure's functionality, including the ability to boot from a VHD (virtual hard disk), has been integrated into Windows Server 2008.