There's a disconnect in your data center. You've virtualized your applications, and your teams are thinking and working in virtual machines (VMs). But your storage hasn't materially changed in decades — it still uses the same constructs (such as logical unit numbers, or LUNs) that were introduced for physical server workloads during an era that has long since passed. That disconnect between virtualized applications and conventional storage is costing your business time and money.
Many storage innovations are happening, and you might be exploring or deploying some of them to address the storage pain in your data center. But none of it matters if those products aren't VM-aware. A big claim? Yes, but it's an important concept to understand.
Only VM-aware storage (VAS) is specifically built for virtualized applications, stripping out the complexity of physical storage management so that you can focus on the applications that matter most for your business. With VAS, you can manage storage functions on individual applications, saving you time, money — and sanity.
It's been said that most assumptions have outlived their uselessness, but I'll assume a few things nonetheless! Mainly, I assume that:
This chapter gets you up to speed on virtualization technology, and it helps you understand the challenges and limitations of conventional storage in virtual and cloud environments.
In case you've spent more time keeping up with the Kardashians than keeping up on technology, let me quickly bring you up to date: Virtualization has been the hottest technology over the past decade!
Before you dive headlong into VM-aware storage, let's cover a few basic storage terms and concepts:
A hypervisor is virtualization software that runs between the hardware and the operating system(s) running virtual machine (VM) workload(s). The hypervisor enables multiple independent VMs or applications to run on a single physical server, by abstracting the resources (processors, memory, networking, and storage) of the physical server.
Virtualization has transformed the modern data center and is a key enabling technology for cloud environments. In the not-too-distant past, individual enterprise applications and databases were installed on dedicated physical servers. But deploying an application on its own physical server hardware has important technical limitations, beyond cost, that negatively affect business agility and efficiency, including:
Virtualization enables server consolidation, which greatly improves application scalability and data center resource utilization. VMs can be quickly provisioned for new or upgraded applications and easily migrated to different physical servers and geographic locations. A single physical server can run multiple VMs and virtualized applications to better utilize resources.
Cloud computing extends virtualization benefits and enables businesses to deploy virtualized applications at massive scale (see Figure 1-1).
Figure 1-1: Physical, virtual, and cloud computing architectures.
The increased use of server virtualization in data center environments has dramatically improved the efficiency of many IT processes, including data storage provisioning.
Throughout this book, I look at various attempts by storage vendors to place conventional filesystem or block storage in various parts of the data center architecture in an effort to share the storage pool more effectively for virtualized applications. However, conventional approaches weren't designed for virtualization and have proven to be a forced fit at best for today's highly virtualized data centers.
While DAS- and SAN-based storage platforms present data as blocks, and NAS-based storage manages data as files (which come closest to the nature of VMs, or VMDK files, but are still not VM-aware), none of the three has any concept of virtual machines, creating various layers of hardware and management complexity.
What these conventional storage architectures lack is a more granular and flexible unit of management for storage. In order to effectively manage virtualized workloads, you need a VM-aware storage solution or filesystem.
Conventional storage architectures (see the figure) include:
In a physical storage architecture, it's straightforward to map storage in a 1:1 relationship with physical servers hosting individual applications (see Figure 1-2).
Figure 1-2: Physical applications mapped to physical storage in a 1:1 relationship.
Then virtualization came along, enabling multiple VMs and applications to share a common storage pool. Conventional storage platforms were built and deployed before virtualization became a defining characteristic of the modern data center, and thus weren't designed to support virtualized workloads. Conventional storage architectures have limitations that restrict their effectiveness in virtualized environments, including:
Figure 1-3: Mapping virtualized application VMs to conventional LUN-based storage is complex.
The lack of visibility created by the I/O "blender effect" makes understanding and troubleshooting storage capacity and performance issues extremely challenging (like trying to put the ingredients of a blended fruit smoothie back together to figure out if it was the strawberries, bananas, mangos, or jalapenos that didn't quite taste right).
For example, in some instances, virtualization enables workloads to scale up or down automatically, based on demand, but most conventional storage technologies require manual intervention to assign the corresponding storage capacity before the workload can be started.
Prepare for the Cloud Now!
In As You Like It, Shakespeare extols the virtues of moderation, asking "Can one desire too much of a good thing?" But we're talking about server virtualization — not chocolate or exercise — and Shakespeare didn't know squat about server virtualization! So if your organization has embraced virtualization, how do you get ready for virtualization at scale? By deploying a cloud‐ready infrastructure.
And this is when things get complicated with conventional storage and its associated manual and error‐prone storage management processes. These storage management challenges, which you may have painstakingly lived with in smaller virtualized environments, become insurmountable with virtualization at scale. Again, Shakespeare didn't understand the management complexity of virtualization at scale when he wrote "All the world's a stage ... and one man in his time plays many parts." Many parts yes, but several thousand parts (or manual storage management processes)? Get real! These "parts" include:
In this chapter, you learn how storage has changed in today's virtualized world, what VM-aware storage is, how it works, how it solves modern storage challenges, and how it addresses conventional storage limitations in virtual and cloud environments.
Understanding Changing Storage Requirements
Before applications were virtualized, tuning storage to support different applications was a complex, but well understood, process. Generally speaking, each application ran on a dedicated server and on storage hardware with capacity and performance characteristics matched to each application's unique requirements. However, this approach was inefficient.
With virtualization, multiple applications running as VMs share storage, making it (virtually) impossible to specifically allocate dedicated storage hardware to meet the required capacity and performance characteristics of each application. However, the need for "dedicated" storage for specific applications still exists — you still need to match the right storage performance characteristics to each individual application. With conventional storage, this requires constant tuning of logical unit numbers (LUNs) and volumes that contain numerous VM workloads — all with different storage performance requirements.
Table 2‐1 gives a few examples of the varying storage performance requirements that might exist in different application use cases.
Table 2-1: Examples of storage performance requirements for different application use cases. Use cases such as backing up critical data files, finding a record in a database, and starting up many virtual desktops each place very different demands on characteristics such as random access performance.
Other potential storage characteristics to consider for different applications include:
A valid question you should be asking now is: How can you possibly match these different characteristics and requirements with conventional storage?
VM‐aware storage restores simplicity to the storage system in virtualized and cloud environments by getting rid of LUNs and other conventional storage constructs on the back end. VM‐aware storage also abstracts the complexity of redundant array of independent disk (RAID) configurations under the hood from the daily tasks of storage administrators. With VM‐aware storage, you can focus on applications or VMs and re‐create a virtual 1:1 mapping between your applications and storage — the I/O "un‐blender" (see Figure 2‐1)!
Figure 2-1: VM-aware storage maps storage performance characteristics to virtualized application VMs in a 1:1 relationship.
VM-aware storage has built-in intelligence about the virtualized applications that use it. This intelligence can include a number of factors, such as an understanding of:
VM-aware storage works with the virtualization infrastructure to understand the performance needs of individual virtualized applications and make the storage system more responsive to those needs.
VM-aware storage eliminates complexity in storage management tasks by acting as a dedicated resource manager, delivering the right level of performance to each virtualized application.
VM-aware storage is like being able to observe all vehicles traveling through a particular town every day and knowing where they started from and where they are going, as well as the individual driving habits and abilities of each driver. Using this information, you can organize the traffic light patterns across town to make each vehicle's journey flow smoothly, and you can change the direction of reversible lanes during rush hour — virtually eliminating traffic jams forever!
VM-aware storage optimizes how data is stored, based on its understanding of virtualized applications and their requirements. It uses storage technologies, such as deduplication and compression, to improve storage efficiency, and quality of service (QoS) policies to meet different application performance requirements. These capabilities enable you to independently manage and optimize storage for individual applications.
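To make the deduplication idea concrete, here is a minimal, hypothetical sketch of content-based block deduplication in Python. The 4KB block size and SHA-256 hashing are assumptions for the sketch, not details of any particular product:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch


def dedupe(data: bytes):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, recipe): the deduplicated block store and the ordered
    list of block hashes needed to reconstruct the original data.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks are stored only once
        recipe.append(digest)
    return store, recipe


def restore(store, recipe) -> bytes:
    """Reassemble the original data from the block store and recipe."""
    return b"".join(store[d] for d in recipe)
```

For example, 16KB of data containing three identical 4KB blocks stores only two unique blocks plus a four-entry recipe, which is how deduplication reduces the physical capacity that repeated VM data (such as identical operating system files) consumes.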
VM-aware storage creates virtual disks (vDisks), which a VM uses for its operating system, application software, and other data files. A vDisk hides the physical storage resources from the VM's operating system in much the same way that a hypervisor hides the physical server resources from the VM's operating system. Regardless of the type of physical storage device, the vDisk always appears to the VM as local storage.
At this point you may be thinking, "Wait, VM-aware storage does all of this without LUNs?" Yes! Read the sidebar "Look Mom, no LUNs!" to learn how.
Remember the thrill you used to get from causing panic and terror in your mother when you rode your bicycle without holding the handlebars? Now, you can relive those carefree days and create that same feeling of panic and terror in your storage administrators — without the recklessness and danger — when you deploy VM-aware storage and yell, "Look Mom, no LUNs" (although it may be a bit awkward calling your storage administrator "Mom")!
How can you get away without LUNs? In conventional storage systems, storage performance characteristics and storage operations are assigned and executed at the LUN or volume level (see the figure). Because numerous VMs and applications with different performance and data protection requirements are placed on a single LUN or volume, conventional storage lacks the ability to provide granular control of important features and functions.
Conventional storage uses LUNs and volumes to define a filesystem, and applies RAID configurations at that level.
The lack of granularity in the filesystem of conventional storage systems creates various storage management issues, including:
VM-aware storage defines the filesystem at the VM level and provides a thin/thick provisioning layer to abstract the complexity of how and where data is stored (see the figure). The unit of management is simply the VM, and storage is treated as a datastore with thin/thick provisioning.
VM-aware storage removes the complexity of LUNs and RAID configuration and defines the filesystem characteristics at the individual VM level.
VM-aware storage enables numerous storage management advantages over conventional storage, including:
If you aren't convinced yet about the power of doing away with LUNs, think about replication as an example.
In conventional storage platforms, storage operations such as replication and snapshotting are inefficient because they must be performed at the volume or LUN level. Figure 2-2 illustrates an example of a production VM (VM1) that an organization needs to replicate to a remote site. On a conventional storage system (the top part of the figure), other VMs located on the same LUN (VMs 2 through 5) are also replicated to the remote site over the wide area network (WAN), wasting processing power, time, bandwidth, and storage capacity on the cloning and replication. In contrast, on a VM-aware storage system (the bottom part of the figure), only VM1 is replicated across the WAN; hence, no hitchhikers.
Figure 2-2: Replication over WAN in conventional storage performed at the LUN level versus VM-aware storage at the VM level.
Because the filesystem in VM-aware storage is defined at the individual VM level, storage operations — such as snapshotting and replication — can be performed at the individual VM or application level. This results in more granular and efficient operations.
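To quantify the difference, here's a minimal Python sketch comparing the WAN replication payload in each model. The VM names and sizes are invented for illustration:

```python
# Hypothetical sizes (in GB) of five VMs sharing one LUN; names are made up.
lun_vms = {"VM1": 40, "VM2": 120, "VM3": 80, "VM4": 60, "VM5": 200}


def lun_level_replication_payload(vms: dict) -> int:
    """Conventional storage: replicating VM1 drags every VM on the LUN
    across the WAN, because the LUN is the smallest unit of replication."""
    return sum(vms.values())


def vm_level_replication_payload(vms: dict, target: str) -> int:
    """VM-aware storage: only the requested VM crosses the WAN."""
    return vms[target]


wan_conventional = lun_level_replication_payload(lun_vms)     # 500 GB
wan_vm_aware = vm_level_replication_payload(lun_vms, "VM1")   # 40 GB
```

In this invented example, VM-level replication moves less than a tenth of the data that LUN-level replication would, and the gap only widens as more "hitchhiker" VMs share the LUN.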
VM-aware storage solves modern storage challenges in virtualized and cloud environments (discussed in Chapter 1), including:
Accurately predicting how much data storage capacity your organization will need in the next three months is incredibly challenging, let alone over the next three to five years — you'd probably have better luck forecasting the weather over the next three months! Further complicating this challenge is the fact that storage technology changes as often as, well — the weather. This dilemma often results in organizations adding too much complexity and creating too many islands of storage that don't integrate with other solutions in the data center, which limits scalability.
VM-aware storage leverages a scale-out architecture that enables you to optimize the placement of workloads over time, scaling both storage capacity and performance independently. New storage appliances are added into a datastore pool so that the collective VMs and applications can access the right storage without requiring reconfiguration.
The I/O blender effect (discussed in Chapter 1) creates a blind spot in virtualized applications running on conventional storage. The lack of visibility makes troubleshooting a time-consuming, costly, and frustrating trial-and-error process that leaves storage administrators guessing about the root cause of storage performance issues, while business users are left unproductive and idling.
VM-aware storage builds a usage model around the requirements of the applications that use it, and shapes an appropriate set of storage resources (such as flash, compression, deduplication, and backup and recovery procedures) accordingly. It can then validate that the storage system is meeting the needs of the virtualized applications.
VM-aware storage gathers and presents information at the individual VM level. It tracks all the characteristics of resident VMs and can correlate with the rest of the virtualized infrastructure (host, network, and storage) in real-time. Thus, administrators are better able to spot issues and resolve performance problems in much less time than with conventional storage systems.
In conventional storage systems, over-provisioning is a common practice to ensure that applications always have sufficient storage capacity and performance. This well-intentioned but misguided practice causes under-utilization of storage capacity and limited ability to guarantee performance.
VM-aware storage intelligently utilizes several technologies at a granular VM level, as opposed to using a "shotgun" approach on a storage LUN, including:
In conventional storage, manual processes define how you manage data (the control plane) and where that data physically resides (the data plane). Adding or removing resources requires many repetitive "re-wire" steps, and you have to tell the operating system and installed application(s) the precise characteristics of the underlying storage.
In software-defined storage architectures, the control plane is built into software to abstract many complex manual storage processes into software-based solutions.
VM-aware storage uses many of the same principles as software-defined storage, but takes them a step further. For example, the storage system's control plane can use information gleaned from the VMs. Based on a VM reporting certain threshold values (such as performance, available capacity, and latency), it might trigger automated provisioning or make more resources available.
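As a sketch of that control-plane idea, the snippet below checks a VM's self-reported metrics against thresholds and decides which automated actions to trigger. The threshold values and action names are assumptions invented for illustration, not a real product's API:

```python
# Assumed threshold values for this sketch; real policies vary.
THRESHOLDS = {"latency_ms": 10.0, "free_capacity_pct": 15.0}


def evaluate_vm(report: dict) -> list:
    """Compare a VM's self-reported metrics against control-plane thresholds
    and return the automated actions the storage system would trigger."""
    actions = []
    if report["latency_ms"] > THRESHOLDS["latency_ms"]:
        actions.append("allocate_more_flash")
    if report["free_capacity_pct"] < THRESHOLDS["free_capacity_pct"]:
        actions.append("provision_additional_capacity")
    return actions
```

A VM reporting 14.2 ms latency and 9 percent free capacity would trigger both actions, while a healthy VM triggers none — the point being that decisions are made per VM, from VM-reported data, with no manual re-wiring.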
Many storage vendors today claim that their storage products are "VM-aware." But what, exactly, makes VM-aware storage "VM aware"? The five requirements for VM-aware storage to truly be VM-aware are:
Many storage vendors are jumping on the VM-aware storage (VAS) bandwagon and now tout their products as "VM-aware," "VM-centric," or some other marketing catchphrase. But it takes more than a creative label for a storage platform to truly be VM-aware. It's fairly simple to tell when a storage product isn't truly VM-aware. Simply answer the following question:
Are there any LUNs or volumes in the storage architecture?
If the answer is YES, do not pass "Go," do not collect $200. You're looking at conventional storage — it is not VM-aware, no matter what the label says!
If the answer is NO, it may be VM-aware storage. A further test is:
Do they meet the five requirements of VM-aware storage?
A VM-level filesystem is the first requirement of truly VM-aware storage, followed by VM-level storage operations, VM-level automation, VM-level QoS, and VM-level analytics.
Achieving any of these requirements is not easy because it requires a ground-up redesign instead of retrofitting on conventional architecture.
Don't worry if you can't readily recall the five requirements of VM-aware storage. I call them out for you as I cover them throughout this book!
In this chapter, you explore the business benefits of VM-aware storage features and capabilities that enable a broader virtualization and cloud strategy for your enterprise.
Flash storage technology, in general, provides significantly higher performance and reliability than mechanical hard disk drive (HDD) technology (see the "Storage 101" sidebar in Chapter 1). However, flash in and of itself can't solve all of an organization's storage performance and capacity woes.
Flash technology is often used with other technologies (see Figure 3-1) within storage (both conventional and VM-aware) and IT architectures to improve performance, including:
Figure 3-1: Flash technology is used with other technologies, such as inline deduplication and compression, to improve storage performance.
Using flash storage technology in conventional storage that isn't VM-aware is like driving a Formula One racing car without a steering wheel. Flash storage technology runs into the same visibility, management complexity, and scalability issues as HDD storage technologies — they're just pushed out to a later time, at best. And when you get stuck in a traffic jam (with other "noisy" VMs that happen to be on the same logical unit number, or LUN), the extra speed you hope to get from your Formula One car won't get you there any faster.
Not all applications are created equal. Some applications are business-critical and must always perform consistently. For cloud service providers, service-level agreements (SLAs) often define penalties for failing to deliver a specified level of performance to their customers. For enterprises, downtime means lost business or costly errors in critical databases.
Conventional storage defines quality of service (QoS) policies at the LUN/volume level. Each LUN/volume may have dozens or hundreds of VMs with very different performance requirements, but you must assign the same QoS level to all the VMs within the LUN/volume. The result is a first‐in/first‐out (FIFO) funnel effect that creates I/O traffic jams, as shown on the left side in Figure 3‐2. A single VM can effectively "block" all other VMs because QoS can't be applied at a level granular enough to control "noisy" applications and guarantee appropriate performance levels.
Figure 3-2: Conventional storage systems lack granular, VM-level QoS control, which creates I/O traffic jams.
As shown on the right side in Figure 3‐2, VM‐aware storage allows you to create granular QoS policies at the VM level. Each individual VM or vDisk gets its own "lane" with the appropriate "width" (corresponding to QoS level) to isolate performance so that there are no conflicts with noisy neighbors and performance levels are guaranteed.
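The "lane" idea can be sketched in a few lines of Python. This is a simplified illustration, assuming per-VM policies with minimum and maximum IOPS; the policy values and VM names are invented, and a real scheduler would also arbitrate to guarantee the minimums, which this sketch only records:

```python
def enforce_qos(demand_iops: dict, policies: dict) -> dict:
    """Clamp each VM's I/O demand to its own QoS 'lane'.

    Each VM is granted at most its max_iops, so a noisy neighbor can't
    flood the array. A real system would also reserve headroom to
    guarantee each VM's min_iops; this sketch only caps the maximums.
    """
    granted = {}
    for vm, demand in demand_iops.items():
        granted[vm] = min(demand, policies[vm]["max_iops"])
    return granted


policies = {
    "oltp-db": {"min_iops": 5000, "max_iops": 20000},
    "backup":  {"min_iops": 0,    "max_iops": 1000},
}

# The backup VM asks for 8,000 IOPS but is capped at its lane width
# (1,000), leaving the database's performance unaffected.
granted = enforce_qos({"oltp-db": 12000, "backup": 8000}, policies)
```

Because the cap is applied per VM rather than per LUN, the noisy backup job is throttled without touching the database sharing the same physical array.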
VM‐aware storage enables organizations to define and enforce QoS policies that prioritize individual application and VM workloads, in order to guarantee minimum (or maximum) levels of performance. For example:
VM‐aware storage is designed to automate many of the manual tasks needed to define, manage, and maintain QoS at an individual VM (or groups of VMs) level, including:
One of the five key requirements of VM‐aware storage (see the spot guide at the end of Chapter 5) is VM‐level QoS.
One of the fundamental benefits of virtualization is the ability to simplify the creation, deployment, and ongoing management of virtualized applications with software‐based tools. Virtualization platform vendors such as VMware, Microsoft, Red Hat, and Citrix provide great tools to monitor, manage, and troubleshoot virtual environments. However, without the same capability on the critical data storage layer underlying these environments, you only have partial visibility, and hence, only a partial toolkit.
Many conventional storage systems provide performance averages and correlations at a LUN or volume level, but such information is not helpful for managing and troubleshooting performance in individual VMs and applications, as shown in the top diagram in Figure 3‐3.
Figure 3-3: Conventional storage analytics can only provide averages and correlations, rather than the actionable insights available in VM‐level analytics.
VM‐level analytics take the guesswork out of troubleshooting and fine‐tuning storage performance in virtualized environments. VM‐aware storage provides real‐time measurements and actionable insight, including input/output operations per second (IOPS), throughput (in Mbps), and latency at a per‐VM level.
The difference between conventional storage correlations and VM‐level analytics can be easily understood with the following analogy: If you want to know the temperature in a room of your house, do you analyze complex modeling of other people's home temperatures and global weather trends to get an average? No, you simply read a thermometer in that room. VM‐level analytics provide direct data from the VMs. Even if you can get the average, it is a few hours old at best, which doesn't meet your need for real‐time information.
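Continuing the thermometer analogy, here's a minimal sketch of reading per-VM metrics directly instead of averaging across a LUN. The sample values and VM names are hypothetical:

```python
# Hypothetical per-VM samples: (IOPS, throughput in Mbps, latency in ms).
samples = {
    "web-01": (1200, 45.0, 1.8),
    "sql-01": (9500, 310.0, 22.5),
    "vdi-17": (600, 12.0, 2.1),
}


def find_latency_outliers(metrics: dict, limit_ms: float) -> list:
    """Return the VMs whose latency exceeds the limit -- the thermometer
    in each room, rather than an average across the whole house."""
    return sorted(vm for vm, (_, _, lat) in metrics.items() if lat > limit_ms)


# A LUN-level average latency (about 8.8 ms here) would look healthy
# and hide the struggling database VM entirely.
slow_vms = find_latency_outliers(samples, limit_ms=10.0)
```

The average across these three VMs sits comfortably under a 10 ms threshold, yet the per-VM reading immediately flags sql-01 — exactly the blind spot that LUN-level averages create.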
VM‐aware storage provides the following VM‐level analytics benefits over conventional storage platforms:
When future storage growth predictions are based not only on your VM's storage growth, but also on real‐time performance and I/O trends, you can eliminate performance bottlenecks without piling on unnecessary resources.
One of the five key requirements of VM‐aware storage (see the spot guide at the end of Chapter 5) is VM‐level analytics.
Managing a few VMs across a couple of storage arrays might seem relatively simple. But as your business grows, so does your storage footprint and the complexity associated with monitoring and managing your storage environment.
Consider the following example: You have a few hundred production VMs on a conventional LUN‐based storage platform with custom protection and QoS policies. You need to upgrade your storage hardware and migrate your VMs to the new storage array, requiring you to re‐create every LUN with the exact same settings, move the right VMs to the right LUNs, and individually verify that each VM is working correctly on the upgraded platform.
With a VM‐aware storage solution, all you need to do is migrate the VMs to the new array and you're finished. This is possible because data protection and QoS policies move with the VMs. VM‐aware storage identifies all the VMs in their new home and reapplies the policies.
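The key is that policies are attributes of the VM, not of the LUN it happens to sit on. Here's a hypothetical sketch of that idea; the class, policy names, and arrays are invented for illustration:

```python
class VM:
    """A VM whose data-protection and QoS policies belong to the VM
    itself, not to the storage container it currently lives on."""

    def __init__(self, name: str, policies: dict):
        self.name = name
        self.policies = policies


def migrate(vm: VM, source: list, destination: list) -> None:
    """Move a VM between arrays; its policies travel with it, so nothing
    has to be re-created on the destination."""
    source.remove(vm)
    destination.append(vm)


old_array, new_array = [], []
vm1 = VM("vm1", {"replication": "remote-site", "snapshots": "hourly"})
old_array.append(vm1)

migrate(vm1, old_array, new_array)
# vm1 now lives on new_array and still carries its original policies.
```

Contrast this with the conventional model, where the equivalent of `policies` hangs off the LUN, forcing you to rebuild every setting on the new array before the VMs can move.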
One of the five key requirements of VM‐aware storage (see the spot guide at the end of Chapter 5) is a VM‐level filesystem.
So what can you achieve with the ability to move VMs with their policies intact? You can optimize placement of hundreds or thousands of VMs across storage platforms.
Getting the LUN settings or VM placement wrong in conventional storage can negatively affect critical business operations. Storage administrators tend to over-provision or carve out a LUN for the VMs to share, regardless of the storage requirements, and then hope they guessed right and won't have to move the VMs again — which usually isn't the case.
Regardless of how your business is growing, application requirements are constantly changing (often beyond your control, such as in the case of hosted workloads). Thus, VMs need the flexibility to move around within your storage array. But you shouldn't have to worry about what underlying storage fits each VM workload best. With VM‐aware storage, you can treat all your storage resources as a single datastore pool (see Figure 3‐4) and simply move some workloads off an overloaded storage array to another. Or, you can confidently predict that in order to support another 100 virtual desktop infrastructure (VDI) sessions in the next week on the existing array, you need to move the SQL server VMs to another array or add a new array to the pool.
Figure 3-4: Manage workload optimization across datastore pools from a single pane of glass.
In conventional storage, VMs are randomly distributed across different storage platforms, creating performance hotspots and bottlenecks in some storage resources, while other storage resources are under-utilized (see the top illustration in Figure 3‐5). This situation is partly caused by "guessing wrong" during the initial planning cycle when assigning VMs and virtualized applications to LUNs or volumes, as well as a lack of visibility into which workloads are growing faster than anticipated once they are deployed.
In contrast, VM‐aware storage provides a datastore pool with a clear global view of capacity, performance, and flash usage across all your storage resources in a single management tool. You (or the storage software) can intelligently distribute workloads across the storage infrastructure. Rather than setting an arbitrarily low threshold for moving VMs before a move is really needed, you can make such decisions based on actual VM‐level data about growth and usage.
The recommended solution, based on VM storage usage analysis, is to distribute VMs evenly among datastores to avoid hotspots (see the bottom illustration in Figure 3‐5). However, if the numerous VMs on the "busy" datastore are small and do not require many storage resources such as performance and flash, VM‐aware storage does not blindly move these VMs simply for the sake of even distribution. It makes a smart choice to leave the VMs where they are. This approach ensures that the optimization incurs the least cost in terms of the time and space required to migrate VMs.
Figure 3-5: Comparing management complexity and scalability in conventional and VM-aware storage.
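The placement logic described above — move a VM only when the rebalancing benefit outweighs the migration cost — can be sketched as a simple decision function. The threshold and load figures here are illustrative assumptions, not a real product's algorithm:

```python
def should_move(vm_iops: int, busy_load: int, target_load: int,
                min_benefit_iops: int = 1000) -> bool:
    """Move a VM off a busy datastore only when rebalancing is worth the
    migration cost; small, quiet VMs stay put even if the VM count per
    datastore looks uneven."""
    benefit = busy_load - (target_load + vm_iops)  # load gap closed by the move
    return vm_iops >= min_benefit_iops and benefit > 0


# A busy datastore serving 20,000 IOPS vs. a quiet one serving 4,000:
move_big = should_move(vm_iops=6000, busy_load=20000, target_load=4000)
move_small = should_move(vm_iops=50, busy_load=20000, target_load=4000)
```

The 6,000-IOPS VM is worth moving; the 50-IOPS VM is left alone, because migrating it would cost time and space while barely changing either datastore's load.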
Conventional storage technologies create complexity, cost, and performance issues when used with virtualization. This chapter examines some real-world use cases to help you understand how VM-aware storage addresses these issues.
Before virtualization, business-critical applications, such as customer relationship management (CRM), enterprise resource planning (ERP), and online transaction processing (OLTP)/SAP/Oracle databases were deployed on dedicated hardware, thus creating many isolated islands of hardware that were dedicated to running these applications.
Today, these same applications are being virtualized, and in some cases are being delivered as cloud-based software-as-a-service (SaaS) offerings. With new multi-core hosts and powerful virtualization technology, IT no longer has to choose between performance and the benefits of virtualization. Now they can be achieved together. By virtualizing business-critical applications, you can achieve greater flexibility and reliability with scale up/down compute capacity, distributed resource scheduling, and automated migration and geo-distribution for business continuity and disaster recovery scenarios.
Unfortunately, many of these newly virtualized business-critical applications are attached to conventional storage. Conventional storage doesn't effectively isolate the I/O performance of one VM from another, resulting in blended performance as multiple VMs contend for shared storage resources that were once dedicated resources, before the application was virtualized. Conventional storage is not designed to deal with the unique performance demands of hosting hundreds, or thousands, of VMs.
What these business-critical applications require is performance isolation and quality of service (QoS). VM-aware storage is designed to observe the performance profile of the application running in the VM. Using this technology, the VM-aware storage can separate the I/O from each VM, create separate performance queues, and ensure that each VM gets access to the performance and response times that it requires.
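The effect of separate per-VM queues can be sketched with a simple round-robin scheduler. This is a toy model under assumed names and a fixed service budget, not a description of any vendor's scheduler:

```python
from collections import deque


def schedule(queues: dict, budget: int) -> list:
    """Round-robin over per-VM I/O queues so every VM gets a share of the
    array's service budget, instead of one shared FIFO queue where a noisy
    VM's burst blocks everyone queued behind it."""
    order = []
    pending = {vm: deque(ios) for vm, ios in queues.items()}
    while budget > 0 and any(pending.values()):
        for vm, q in pending.items():
            if q and budget > 0:
                order.append((vm, q.popleft()))
                budget -= 1
    return order


# The backup VM has queued a burst of four I/Os, but the database is
# still served every round instead of waiting behind the whole burst.
served = schedule({"backup": ["b1", "b2", "b3", "b4"],
                   "oltp": ["o1", "o2"]}, budget=4)
```

With a single FIFO queue, all four backup I/Os would be serviced before the database saw any service at all; with per-VM queues, the two workloads interleave and the database's response times stay predictable.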
One of the five key requirements of VM-aware storage (see the spot guide at the end of Chapter 5) is VM-level QoS.
A semiconductor design innovator designs and builds graphics and computing technologies powering a variety of solutions, including PCs, game consoles, and servers.
The conventional storage deployed to support its virtualized environments could not meet performance requirements and made scaling for remote sites difficult. The manufacturer's wish list included: a pod-based infrastructure that could be replicated at remote sites; a flash-based storage system that would satisfy the performance needs of the integrated-circuit testing databases and reduce the storage footprint in its virtualized environments; and visibility into VM performance metrics at the storage layer.
During a storage refresh for the virtualized environments, the manufacturer discovered Tintri. Impressed by the performance the Tintri VMstore appliance delivered in a small form factor and by its VM-aware functionality, they ran a proof of concept (POC). Upon successfully concluding the POC, they deployed the Tintri VMstore storage appliance in a pod-based architecture.
VM-aware storage constantly monitors individual VMs. It calculates and automatically assigns the appropriate capacity and performance levels from a pool of underlying storage hardware. VM-aware storage effectively abstracts the complexity of disk management away from the application and simply presents a pool of capacity that system administrators can define, based on required performance metrics set in software.
Many businesses have transformed parts of their IT operations to the "as a service" model in an on-premises private or hosted cloud environment. This transformation enables organizations to delay or avoid costly investments in specialized infrastructure. In this way, organizations realize the benefits of agility, responsiveness, and scale that virtualized applications can deliver for their business.
A good analogy for comparing traditional IT infrastructure to cloud deployment models is buying and running your own power generator to supply your home's electricity. Because you can't store electricity, you constantly run your generator at full capacity so that you're always ready for demand spikes (like when a Doctor Who marathon event unexpectedly airs on TV). Or you could simply buy as much electricity (and enough raw Artron Energy to power your TARDIS) as you need from a local utility, when you need it.
Conventional storage technologies aren't designed for fast-changing cloud environments, with numerous internal (on-premises private cloud) or external (hosted cloud) customers constantly creating and tearing down applications, and with different resource requirements on a shared storage pool. (Someone inevitably does a cannonball in a crowded pool of small children playing Marco Polo!)
For example, if one customer needs lots of high-performance reads for a database VM while another customer is running a VM with a lot of write operations, such as data backup, conventional storage technologies struggle to cope with these disparate use cases. Performance suffers for all customers.
The management and maintenance overhead of conventional storage can limit the extreme scale required in a cloud environment. Conventional storage offers some automation capability for storage operations and administration, but because it occurs at the LUN or volume level, there is always a risk that automated processes can cause unintended performance consequences in other VMs or applications on the same LUN or volume.
With VM-aware storage, on-premises private and hosted cloud operators can deliver important services to their customers. Quality of service (QoS) can vary between the various VMs to ensure appropriate service-level agreements (SLAs), such as performance and data protection. VM-aware storage allows each VM to be assigned a performance target, which is then automatically delivered through the dynamic use of flash and storage tiering.
Automated QoS policies in VM-aware storage provide each VM with its own I/O "lane." As conditions change, the storage adapts, automatically giving each VM the appropriate level of performance (or width of "lanes"). VM-level backup and replication technologies are also built directly into the VM-aware storage architecture, which simplifies design and allows for self-service recovery tools.
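One common way to implement a per-VM I/O "lane" is a token bucket: each VM accrues IOPS tokens at its assigned rate, and an I/O is admitted only when a token is available. The sketch below is a generic token-bucket rate limiter, not Tintri's actual QoS implementation, and the VM names and rates are made up for illustration:

```python
class TokenBucket:
    """Per-VM rate limiter: tokens accrue at `rate` per second up to
    `burst`; each admitted I/O spends one token (illustrative only)."""

    def __init__(self, rate, burst):
        self.rate = rate      # steady-state IOPS target for this VM
        self.burst = burst    # maximum short-term burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A database VM gets a 1,000-IOPS lane; a backup VM gets 100 IOPS.
lanes = {"db-vm": TokenBucket(rate=1000, burst=50),
         "backup-vm": TokenBucket(rate=100, burst=10)}

# Simulate one second of requests arriving every millisecond.
admitted = {"db-vm": 0, "backup-vm": 0}
for ms in range(1000):
    now = ms / 1000.0
    for vm, bucket in lanes.items():
        if bucket.allow(now):
            admitted[vm] += 1
```

After one simulated second, the database VM has been admitted roughly ten times as many I/Os as the backup VM, matching the ratio of their assigned rates: the write-heavy backup workload can't flood the lane reserved for the database.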
Compared to conventional storage, VM-aware storage is designed to handle a much larger number of objects (such as VMs, vDisks, and snapshots), and VM-level visibility makes those objects simpler to manage, allowing you to meet cloud-scale requirements far more easily.
Finally, in a VM-aware storage system, automation of administrative tasks, such as provisioning and tearing down virtualized workloads, can be set up at the individual VM level. This capability enables operational efficiencies and massive scalability throughout the organization and for the cloud.
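As a sketch of what VM-level automation looks like, the toy model below clones test VMs from a golden snapshot and tears one down, with no LUN or volume bookkeeping in sight. The class, method names, and QoS field are hypothetical, not any vendor's real API:

```python
class VMAwareStore:
    """Toy model of VM-level automation: provision test VMs as
    first-class objects cloned from a snapshot, each carrying its own
    QoS policy, then tear them down individually (illustrative only)."""

    def __init__(self):
        self.vms = {}  # VM name -> per-VM attributes

    def clone_from_snapshot(self, snapshot, vm_name, qos_iops):
        # Each clone is an independent object with its own QoS target.
        self.vms[vm_name] = {"source": snapshot, "qos_iops": qos_iops}
        return vm_name

    def teardown(self, vm_name):
        # Removing one VM has no side effects on its siblings.
        self.vms.pop(vm_name, None)

store = VMAwareStore()
for i in range(3):
    store.clone_from_snapshot("golden-db-snap", f"test-db-{i}",
                              qos_iops=500)

store.teardown("test-db-1")
remaining = sorted(store.vms)  # the other two clones are untouched
```

The point of the sketch is the granularity: provisioning and teardown operate on individual VMs, so a script can spin environments up and down without touching a shared LUN that other workloads depend on.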
One of the five key requirements of VM-aware storage (see the spot guide at the end of Chapter 5) is VM-level automation.
A cloud services provider had been leveraging conventional storage arrays with SAS drives for its cloud services, but realized that its existing storage environment would not support the expected growth and required performance for the provider's new desktop-as-a-service (DaaS) offering.
The company identified several hard requirements, including:
The IT team identified five storage solutions to test, and ultimately selected a Tintri VMstore to replace an all-flash array from the incumbent storage vendor. The Tintri solution provided the best combination of storage capacity, performance, and cost to meet the service provider's requirements.
A virtual desktop infrastructure (VDI) replaces traditional desktop PCs with a virtual desktop interface that is accessed across a network (or Internet) connection, and actually runs as a VM in the data center.
The advantages of VDI are greatly amplified when a large number of end-users require desktop access, such as in colleges, hospitals, libraries, and call centers, or in situations where users are geographically dispersed (for example, home offices or remote branch offices with little or no local IT support).
The maintenance of hardware, software, security, and data management in a VDI environment can all be centralized for more efficient and secure administration. For example, security updates can be installed on a single VDI image that is used by everyone, instead of being installed individually on hundreds or thousands of desktop PCs.
With VDI, users can securely access their virtual desktop and applications from a variety of endpoints, such as laptop or desktop PCs, tablets, and smartphones. By enabling greater mobility and enterprise "bring your own device" (BYOD) policies, VDI helps to promote flexible workplace practices and drive greater productivity.
VDI places extreme demands on conventional storage, including:
A VM-aware storage layer is well suited to VDI deployments in several ways:
The combination of VM-aware flash storage and data optimization techniques provides a huge performance boost that negates the issues caused by highly dynamic VDI workloads while achieving high storage efficiency.
An independent, private university with a very small technology budget and IT staff supports 350 faculty and staff, 1,000 undergraduate students, and up to 3,000 graduate students at the main campus and two satellite campus locations. Although the first phase of the virtual desktop project was successful, performance was an issue for the persistent desktop deployment.
The team decided to test six different storage platforms to find the best solution for the VMware desktops. In addition to conventional SAN storage, they looked at pure flash offerings, but none of those offerings were feasible with the university's limited IT budget. The Tintri solution outperformed all other vendors' products in the POC, leading the university to purchase a single Tintri VMstore for its VMware VDI environment.
Companies seeking to gain a competitive edge or to solve challenging problems with technology often devote significant engineering resources to developing and testing application software. The speed with which these applications can be developed and tested greatly determines the productivity of the engineering team and the value of the application software. An application that is late to market has no chance of creating a competitive advantage.
Development teams increase their productivity by rapidly prototyping and testing new application software against a wide range of deployment scenarios, production workloads, and potential failure modes. Having large numbers of test and development environments available to development teams can significantly reduce the time required to deliver new applications, but it traditionally requires significant capital expenditure, setup work, and resource-contention management.
Virtualization enables rapid prototyping and large-scale simultaneous testing in support of multiple concurrent projects. Unfortunately, conventional storage platforms don't have the flexibility needed to build virtual test and development environments. Although fine for a single static application, conventional storage suffers from scalability issues, performance bottlenecks, and management complexity when used to support the dynamic creation and teardown of thousands of VMs in large-scale test and development environments. Conventional storage requires extensive setup, configuration, and maintenance work by software, virtualization, and storage teams, effectively limiting the organization's ability to conduct rapid prototyping and testing.
One of the five key requirements of VM-aware storage (see the spot guide at the end of Chapter 5) is VM-level storage operations.
A complete DevOps solution combines server virtualization and VM-aware storage. A VM-aware storage solution is designed to support VMs as objects, with features that are particularly well-suited to test and development environments, including:
A European Union (EU) bank launched its enterprise-wide virtualization project beginning with test and development systems. The physical environment was running on conventional storage that was not capable of meeting the increased performance demands of a virtualized environment.
After looking at different storage options, the bank purchased six Tintri VMstores and Tintri Global Center for its virtual test and development environment.
There's a disconnect in your data center. Your physical-era storage isn't built to effectively support the virtualized applications that matter to your business. Don't despair — a new generation of VM-aware storage is built specifically for virtualization and cloud.
Tintri VM-aware storage is the simplest for virtualized applications and cloud. Organizations including Toyota, United Healthcare, NASA, and seven of the Fortune 15 have said "No to LUNs." With Tintri, they manage only virtual machines, in a fraction of the footprint and at far lower cost than conventional storage. Tintri offers them the choice of all-flash or hybrid-flash platforms, converged or stand-alone structure, and any hypervisor. Rather than obsess over storage, leaders focus on the business applications that drive value, and that requires keeping storage simple.