Virtualization: Myths vs. Facts

Cost/ROI

In this section, we'll focus on seven myths commonly heard about either the cost of getting into virtualization or the ongoing costs once there. We'll discuss each one and explain why it should not stand in the way of deciding on a virtualization strategy.

Myth 1: Virtualization is too expensive

Many people look at the cost of virtualization software, training, and shared storage and conclude that virtualization is too expensive. While all of these things cost money, there are ways to save on each, and the simple fact that you will need far fewer servers saves Capital Expenditure (CapEx) budget, usually more than enough to offset the extra costs just listed. Add to that the savings in Operational Expenditure (OpEx), the day-to-day costs of heating, cooling, managing, and so on, and the savings can really add up. Each major virtualization vendor has a free product offering to get you started, and there are several options for creating shared storage out of local disks, such as VMware's vSphere Storage Appliance (VSA). As for the cost of training, computer-based training (CBT), books, blogs, and the like can fill the gap if instructor-led training isn't an option; they can also supplement the topics covered in a real or virtual classroom. For all but the smallest of companies, virtualization saves money.
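
To make the math concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder (hardware prices, licensing, storage, and training quotes vary widely), so substitute numbers from your own vendors before drawing conclusions:

    # Break-even sketch for Myth 1. All figures are hypothetical placeholders.
    physical_servers_today = 20        # servers that would otherwise be refreshed
    consolidation_ratio = 10           # VMs per virtualization host (see Myth 3)
    hosts_needed = -(-physical_servers_today // consolidation_ratio)   # ceiling division

    server_refresh_cost = 6000         # CapEx per physical server avoided at the next refresh
    annual_opex_per_server = 1200      # power, cooling, rack space, and management per server per year

    one_time_costs = 10000 + 15000 + 4000   # hypervisor licenses + shared storage (or a VSA) + training

    servers_avoided = physical_servers_today - hosts_needed
    capex_avoided = servers_avoided * server_refresh_cost
    annual_opex_saved = servers_avoided * annual_opex_per_server

    print(f"Hosts needed after consolidation: {hosts_needed}")
    print(f"CapEx avoided at the next refresh: ${capex_avoided:,}")
    print(f"Annual OpEx saved: ${annual_opex_saved:,}")
    print(f"First-year net after one-time costs: ${capex_avoided + annual_opex_saved - one_time_costs:,}")

With these made-up numbers, the avoided hardware and operating costs exceed the one-time virtualization costs within the first year; the point is not the specific figures, but that the comparison is easy to run with your own.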

Myth 2: It is only for large companies

Another common misconception is that virtualization is only for large companies with extensive staff, shared storage, thousands of servers, etc. In fact, virtualization can be helpful to companies of almost all sizes. Unless you only have one server, virtualization can decrease the number of servers needed in the organization, along with the decrease in OpEx associated with fewer servers in use. Couple these savings with some of the other methods of reducing costs listed in the previous myth and it is hard for many businesses not to virtualize.

Myth 3: The reported consolidation ratios are too high (vendors are inflating what we can expect to actually get)

When consolidation ratios (the number of physical servers that can be consolidated onto a single server running a virtualization platform) are discussed, many assume that the 15:1 or even 20:1 numbers reported by vendors are overly optimistic. They may or may not be, depending on the workloads that need to be virtualized; in some cases they are actually lower than what you will see in your environment. The reported numbers are similar to the stickers on new cars advertising a certain gas mileage, or on a new appliance estimating its annual operating cost: they are averages, or values calculated from a specific test under specified conditions. The key idea is that even if the consolidation ratio were just 2:1, that would still cut the number of servers in use in half (and the associated operating costs roughly in half as well). For extremely large workloads there may even be a 1:1 ratio, but even then the advantages of virtualization (being able to restart the application quickly after a hardware failure, having fewer passive standby servers lying around in case a failure occurs, being able to restart the VM at a remote location if the primary site fails, and so on) may still make the effort worth the cost. The issue of large workloads and virtualization is discussed further in the performance section.
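
The arithmetic behind that claim is simple enough to sketch. The 100-workload starting point and the ratios below are hypothetical examples, not vendor data:

    # How the consolidation ratio translates into host count and server reduction.
    workloads = 100                      # hypothetical number of physical servers to consolidate
    for ratio in (2, 5, 10, 15, 20):
        hosts = -(-workloads // ratio)   # ceiling division
        reduction = 100 * (1 - hosts / workloads)
        print(f"{ratio:>2}:1 ratio -> {hosts:>3} hosts ({reduction:.0f}% fewer servers to power, cool, and maintain)")

Even the pessimistic 2:1 row halves the server count, which is the real takeaway.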

Myth 4: Consolidation ratios are the most important factor in determining success

The idea that the expected consolidation ratio should determine whether a particular virtualization project makes sense is, frankly, silly. Obviously, larger ratios will save more money, but even small ratios lead to savings, and why pass up any savings in today's economic environment?

Myth 5: Virtualization leads to server sprawl

This is one of the few myths that actually has some merit. Virtualization can lead to sprawl if not properly managed. It is easy to create additional VMs, and that can quickly get out of control without some Information Lifecycle Management (ILM) policies. To help combat this problem, there are tools from many third parties that look for these VMs and provide reports on utilization, last time they were powered on, etc. In some cases, it is even possible to place an expiration date on the VM, after which it will stop working (and possibly even be deleted from disk) unless the date is changed. On the other hand, it may be useful to keep some of these VMs in inexpensive storage, powered off to prevent them from consuming CPU and memory resources. This enables them to be brought online quickly if needed in the future. Examples of this could be in a Test/Dev environment or for technical support on older products.
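
As a rough illustration of the kind of report such tools produce, here is a minimal sketch over a hypothetical, hard-coded inventory; a real tool would pull this data from the virtualization management platform's API and apply your own ILM thresholds:

    from datetime import date, timedelta

    # Hypothetical VM inventory; a real report would query the management platform.
    inventory = [
        {"name": "test-web-01", "last_powered_on": date(2011, 3, 1),  "expires": date(2011, 6, 30)},
        {"name": "dev-sql-02",  "last_powered_on": date(2012, 1, 15), "expires": None},
        {"name": "legacy-app",  "last_powered_on": date(2010, 11, 5), "expires": None},
    ]

    today = date(2012, 2, 1)
    stale_after = timedelta(days=90)     # example ILM threshold

    for vm in inventory:
        if vm["expires"] is not None and vm["expires"] < today:
            print(f"{vm['name']}: past its expiration date; archive or delete per ILM policy")
        elif today - vm["last_powered_on"] > stale_after:
            days = (today - vm["last_powered_on"]).days
            print(f"{vm['name']}: not powered on in {days} days; review with its owner")

Powered-off VMs flagged this way can then be archived to inexpensive storage, as described above.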

Myth 6: It creates licensing problems

Virtualization in and of itself does not create any licensing problems; on the other hand, it may not solve them either. When planning for licensing, keep the following license and support costs in mind:

  • The cost of maintenance on the hardware (which actually will go down as there are fewer pieces of hardware to manage)
  • The cost of the virtualization platform
  • The cost of the management platform for the virtualization platform (such as VMware's vCenter or Microsoft's SCVMM – System Center Virtual Machine Manager)
  • The cost of the OS in each VM (or, if it is Linux, at least the cost of support for that VM). Here, as in the next item, if a Windows platform is used, the Microsoft Virtualization Calculator can help determine the least expensive way to license Windows and stay legal. Several calculators are available for different platforms and applications, for example http://www.microsoft.com/windowsserver2003/howtobuy/licensing/calc_2.htm (for Windows Server 2003) and http://www.microsoft.com/windowsserver2008/en/us/server-calculator/default.aspx (for Windows Server 2008 R2).

The easiest approach is to search the Web for "Microsoft Virtualization Calculator" and use that tool. The OS cost may actually go down in a virtualized environment. For example, with Windows Server Datacenter edition, an unlimited number of Windows VMs can be run on each licensed physical host, so you simply count the hosts and license each one. That is much simpler than figuring out which edition is installed on each server, and possibly much cheaper too.

Also factor in the cost of any applications (from SQL Server or Exchange to Office or WinZip) installed in the VM. This is typically the same as in the physical world, though not always; check with the application vendor on their requirements. For example, SQL Server 2012 is licensed per physical core on a physical server or per virtual core in a VM, so small- to medium-sized SQL servers may actually be less expensive under this model.
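
To see why the per-core model can favor smaller virtualized SQL servers, here is an illustrative comparison. The price and core counts are hypothetical, and real SQL Server licensing has additional rules (per-processor core minimums, Software Assurance, and so on), so always confirm against the vendor's current licensing guide:

    # Illustrative per-core licensing comparison. Price and core counts are hypothetical.
    price_per_core_license = 1800        # hypothetical list price per core

    physical_cores_in_host = 16          # a physical SQL server licenses every core in the box
    vcpus_in_vm = 4                      # a small SQL VM licenses only its virtual cores

    print(f"Physical server ({physical_cores_in_host} cores): ${physical_cores_in_host * price_per_core_license:,}")
    print(f"Virtual machine ({vcpus_in_vm} vCPUs):   ${vcpus_in_vm * price_per_core_license:,}")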

You may also want to look at a third-party application that will track and manage licenses for you to lower this administrative burden and the associated cost.

Myth 7: Virtualization is difficult to learn and support

Virtualization is another skill set, much like networking or storage, and thus has a learning curve all its own. Those who do best in the virtualization world usually have some experience as server, storage, and/or network administrators, as they already understand many of the concepts implemented in virtualization. Nevertheless, anyone who wants to put forth the necessary effort can learn virtualization and excel at it. Once in place, the ongoing maintenance and support is not overly difficult, especially as the vendors all have management suites to assist in this area, and many third-party vendors offer additional capabilities and/or add-on products to further simplify the process.

Performance

In this section, we'll focus on four concerns commonly raised about performance, either in general or for resource-intensive (and usually business-critical) applications, and debunk each one.

Myth 1: Poor performance

One of the most common refrains heard about virtualization is the performance cost of virtualizing. In almost all cases, servers today have so much horsepower that a single application on a physical server can't exploit all of it, so much of that power is wasted. In fact, average server utilization today is typically in the 5 to 10% range, leaving a lot of room for consolidation while still maintaining headroom for the virtualization platform itself.

It is still possible to encounter poor performance, but it is usually a byproduct of one of the following:

  • Failing to take into account the aggregate load of all the VMs running on a single server, especially at peak times. This is most common with 1 Gbps network deployments and with storage performance, especially when many VMs boot at the same time (such as after a power failure), a condition known as a boot storm. A simple capacity check is sketched after this list.
  • Failing to configure the host hardware properly, such as not evenly dividing the RAM across NUMA Nodes in servers that support NUMA or failing to enable Jumbo Frames when doing IP-based storage.
  • Failing to configure the VM properly; for example, giving it more CPUs than it needs, causing scheduling contention with other VMs; choosing the wrong HAL (uniprocessor vs. multiprocessor); or giving multiple CPUs to a VM that has an application that is single-threaded and thus can't use the extra processing power.
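
As a back-of-the-envelope illustration of the first point, the sketch below sums hypothetical per-VM peak demands and compares them to a host's capacity with a headroom allowance; real sizing should use measured peaks, not averages or guesses:

    # Aggregate peak-load check against one host. All capacities and peaks are hypothetical.
    host_capacity = {"cpu_ghz": 2.4 * 16, "ram_gb": 192, "storage_iops": 20000}
    headroom = 0.8                       # plan to use no more than ~80% of the host at peak

    vm_peaks = [                         # (name, peak GHz, peak GB of RAM, peak IOPS)
        ("web-01",  2.0,  8,  400),
        ("sql-01",  6.0, 32, 3000),
        ("mail-01", 4.0, 24, 1500),
        ("file-01", 1.0,  8,  800),
    ]

    totals = {
        "cpu_ghz": sum(v[1] for v in vm_peaks),
        "ram_gb": sum(v[2] for v in vm_peaks),
        "storage_iops": sum(v[3] for v in vm_peaks),
    }

    for resource, demand in totals.items():
        budget = host_capacity[resource] * headroom
        status = "OK" if demand <= budget else "overcommitted at peak"
        print(f"{resource}: demand {demand:,.0f} vs. budget {budget:,.0f} -> {status}")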

In short, almost anything can be virtualized. For more on this, see Myth 3 on Tier-1 applications later in this section.

Myth 2: Another layer will slow down apps

This myth is closely related to the previous one. It only makes sense that virtualization adds another layer to the processing stack and that the extra overhead must have a measurable cost. There is indeed extra overhead (especially around memory and, to a lesser degree, the CPU), but in a properly designed and sized environment there should be sufficient headroom that the impact on performance is negligible, and often goes completely unnoticed. In addition, with each release of their virtualization platforms, vendors expend a great deal of effort optimizing hardware utilization and minimizing the performance impact of the platform itself. They are helped by the hardware vendors, who also work to optimize their platforms (especially CPUs) for use in virtualized environments.

Myth 3: Tier-1 applications (SQL, Exchange, Oracle, SAP, etc.) can't be virtualized

This is a common myth that has been around for years: virtualization is fine for small VMs and Test/Dev labs, but it will never hold up in demanding production environments such as SQL Server, Oracle, Exchange, or SAP. The truth is, even these workloads can almost always be virtualized successfully. Last year, VMware published a white paper detailing how it achieved over one million Input/Output Operations Per Second (IOPS) from a single server and over 300,000 IOPS from a single VM. Each vendor also has best-practice documents on how to optimize these mission-critical applications on its platform; these optimizations should be implemented in addition to those recommended by the application vendor. VMs today can have many CPUs (up to 32 in vSphere 5, for example) and up to 1 TB of RAM (in vSphere 5 and Hyper-V, for example), so the scalability limitations of years past are no longer limitations today. When virtualizing these kinds of applications, keep the following best practices in mind:

  • Be sure that any new servers you purchase support the latest hardware virtualization technologies, including a hardware-assisted Memory Management Unit (HWMMU), called Extended Page Tables (EPT) on Intel Nehalem (Core i) processors and later, and Rapid Virtualization Indexing (RVI) on AMD Shanghai processors and later.
  • Enable NUMA and Hyperthreading in the BIOS.
  • Disable any power management features in the BIOS, as they can throttle performance, and that is exactly what we are trying to avoid.
  • Buy larger systems; you should always have more physical cores in the host than virtual CPUs (vCPUs) in any single VM (a simple sizing check is sketched after this list).
  • Use the latest version of the hypervisor to get the latest updates from the vendor.
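
The core-count rule in the list above is easy to sanity-check when sizing VMs; the host and VM sizes below are hypothetical:

    # No single VM should have more vCPUs than the host has physical cores.
    host_physical_cores = 16             # physical cores, not hyperthreads

    proposed_vms = {"sql-tier1": 8, "sap-app": 12, "oversized-vm": 24}

    for name, vcpus in proposed_vms.items():
        if vcpus > host_physical_cores:
            print(f"{name}: {vcpus} vCPUs exceeds the host's {host_physical_cores} cores; resize the VM or buy a bigger host")
        else:
            print(f"{name}: {vcpus} vCPUs fits within {host_physical_cores} physical cores")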

In short, for the most demanding workloads, buy the best equipment. It's not that much different from doing things in the physical world.

Myth 4: Lengthy recovery times after a failure occurs

In fact, just the opposite is true. When a physical server crashes, the components need to be repaired or replaced, then the OS reinstalled (usually), then the last backup restored. With a VM, if the server crashes, there is no reinstall time; just restart it on a different server. If the VM gets damaged or deleted, redeploy the basic VM from a template and restore the data. This can be done very quickly, depending on the quantity of data that needs to be restored. Thus, this actually becomes a reason to virtualize, not a reason not to.

Other Concerns

In this final section, we'll look at four other concerns that may trouble various administrators, supervisors, and others. These may be genuine considerations for some organizations, but most will find that they are not real issues once properly understood.

Myth 1: A virtualized server is less secure than a physical one

A virtualized server is as secure as a physical server with the same configuration, if not more so. In fact, many organizations with high security needs, such as the US Department of Defense, use virtualization extensively as a way to reduce costs. For some guidance, see the unclassified Security Technical Implementation Guides (STIGs) at http://iase.disa.mil/stigs/index.html, which describe how various operating systems and virtualization platforms are locked down. Vendors also frequently offer security courses on how to manage and secure their platforms.

Myth 2: Applications won't migrate to or run properly in VMs

While it is possible that an application wouldn't run properly in a VM, it is extremely unlikely. Things that may have trouble are those that require access to specialized cards in the server or physical ports (such as serial or parallel ports) that may not be configured in a virtual environment. Other than those few exceptions, the operating system doesn't know it is in a VM, and thus any applications that run on the OS aren't aware either.

What may be a bigger issue for some companies is support. Most vendors today support their applications in a virtual environment, but a few (at the time of this writing, Oracle is a big one in this category) require you to prove that an issue is not virtualization-related before they will support it, rather than the other way around. That said, most virtualization vendors promise to support these kinds of applications themselves as a way to reduce concern in this area.

Myth 3: Virtualization complicates system management

All of the major virtualization vendors have their own management platforms (such as vCenter for VMware or SCVMM for Hyper-V), and many have add-on applications that dig deeper for even greater integration and management: Microsoft has other applications in the System Center family, VMware has vCenter Operations Manager, and so on.

If the vendor-supplied tools above do not do everything you'd like, there is a wealth of third-party products from major and minor vendors (such as VKernel and Quest) that provide additional solutions in this area.

Myth 4: Virtualization is only for servers, not desktops and other devices

Virtualization typically begins with server virtualization because it is fairly easy and the savings are huge, but that does not mean that desktops and other devices – even mobile phones! – can't be virtualized as well.

VMware currently has a mobile phone virtualization platform, and some phone vendors sell phones with the platform preinstalled. The idea is that the user buys his or her own phone (or gets one supplied by the company), and the user's personal data, applications, contacts, and so on are kept separate from the managed corporate side of the phone, which has its own applications and contacts and which IT can remotely wipe and otherwise manage.

Most vendors also have a desktop virtualization solution (XenDesktop from Citrix or View from VMware, for example) that allows IT to virtualize all of the desktops in an organization and then use thin clients (usually a very basic PC with no moving parts that PXE boots from the network, uses a USB keyboard and mouse, and may even draw its power from the network cable) to access the centralized desktops. This will not save much, if any, money on CapEx; in fact, CapEx may even go up, as servers and storage may need to be purchased for the project, not to mention thin clients if they will replace the existing computer equipment. Over the long term, however, CapEx is saved: thin clients have an average life span of 6 to 8 years vs. 3 to 4 years for a typical PC, they have few moving parts and thus rarely fail, and they consume much less power (and generate far less heat, reducing the A/C required to cool all those desktops). The bigger savings, however, are typically in OpEx, where most of the cost of a desktop lies: fewer help desk and IT personnel are required to maintain thin clients, a failed unit can be replaced by any other one, and the desktop itself is secured in the data center and backed up per corporate policy.
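
A quick amortization of the lifespan figures above shows why the device CapEx evens out over time. The device prices here are hypothetical, and the lifespans are taken at the midpoints of the ranges in the text:

    # Per-seat device cost amortized over its useful life. Prices are hypothetical.
    pc_price, pc_lifespan_years = 700, 3.5
    thin_client_price, thin_client_lifespan_years = 300, 7

    print(f"PC:          ${pc_price / pc_lifespan_years:,.0f} per seat per year")
    print(f"Thin client: ${thin_client_price / thin_client_lifespan_years:,.0f} per seat per year "
          f"(before adding back-end servers and storage, and before power and support savings)")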

The advantages of a virtual desktop strategy were covered in another Global Knowledge white paper titled "Top 10 Ways to Save with Desktop Virtualization" and thus won't be discussed here.

About the Author

John Hales, VCP, VCAP, VCI, is a VMware instructor at Global Knowledge, teaching all of the vSphere and View classes that Global Knowledge offers. John is also the author of many books, including a book on vSphere 5 (Professional vSphere 5: Implementation and Management), ranging from in-depth technical books from Sybex to exam-preparation books and many quick-reference guides from BarCharts, in addition to custom courseware for individual customers. John holds various certifications, including the VMware VCP (3, 4, and 5), VCAP, and VCI; the Microsoft MCSE, MCDBA, MOUS, and MCT; the EMC EMCSA (Storage Administrator for EMC CLARiiON SANs); and the CompTIA A+, Network+, and CTT+. John lives with his wife and children in Sunrise, Florida.