SSD storage and virtual desktop infrastructure (VDI) initiatives are two of the most prominent trends in the storage industry right now. Somehow, the two have been lumped together in the collective psyche so that there’s a growing perception that you can’t do one without the other.
VMs and VDI strain storage performance
Before focusing on VDI specifically, we should note that storage performance is a widely recognized challenge stemming from deployments of server virtualization technology in general. As such, it’s not surprising that 2011 Enterprise Strategy Group research showed that more than one-third (38%) of current solid-state storage users identified the alleviation of I/O bottlenecks caused by server virtualization as the primary reason for their organization’s initial deployment of solid-state storage. Similarly, more than half of potential adopters (59%) said it was at least one of the reasons they were deploying or considering solid-state storage. Whether it is a virtual desktop or a virtual server, the same basic storage challenges apply: Virtualization stresses storage from both a capacity and a performance perspective.
SSD plus VDI: Five key questions
How do you determine how much solid-state storage you’ll need to support a VDI implementation?
Actually, there’s nothing particularly awkward about this from a capacity perspective. It’s straightforward math: How many desktops do you have, and what’s the size of each one? Don’t forget to consider how many different desktop image types you’ll need and -- because everything goes at the speed of the slowest element -- how many copies of the master desktop(s) you’ll need to boot from so that, within your available bandwidth, everything completes in time. No magic, just physics.
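The “straightforward math” can be sketched in a few lines. All figures here (desktop count, image size, boot read volume, boot window) are hypothetical assumptions for illustration, not vendor sizing guidance:

```python
# Back-of-the-envelope VDI capacity and boot-bandwidth sizing.
# Every number below is an illustrative assumption.

def vdi_capacity_gb(num_desktops, image_size_gb, master_images=1):
    """Raw capacity if every desktop is a full clone, plus the master images."""
    return num_desktops * image_size_gb + master_images * image_size_gb

def boot_bandwidth_gbps(num_desktops, boot_read_gb, boot_window_s):
    """Aggregate read bandwidth (GB/s) needed to boot everyone in the window."""
    return num_desktops * boot_read_gb / boot_window_s

# Example: 500 desktops, 30 GB images, three master images, each desktop
# reading ~2 GB at boot, all booting within a 15-minute (900 s) window.
capacity = vdi_capacity_gb(500, 30, master_images=3)
bandwidth = boot_bandwidth_gbps(500, 2.0, 900)
print(f"{capacity} GB raw, {bandwidth:.2f} GB/s during the boot storm")
# → 15090 GB raw, 1.11 GB/s during the boot storm
```

If the resulting bandwidth figure exceeds what your infrastructure can deliver, that’s exactly the “physics” constraint the article warns about: no amount of solid-state capacity fixes a pipe that’s too small.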
There is one potential hitch to check: the functionality of your hypervisor or management software. Does it allow you to use a single gold/master image? Not all are as flexible as you might expect, so it’s worth asking:
- Can you replace the OS from under a user’s config files?
- Can you decouple the desktop OS that’s going to be booted from that user’s files?
If you’re unlucky, or if the math and physics don’t work out, then your available bandwidth might be a constraint that no amount of solid-state storage can fix.
Aside from the capacity aspect, should you implement your solid-state storage as cache or as persistent storage?
Frankly, a lot of this depends on your budget. If you live in a “performance at any cost” world, then there isn’t an issue -- buy oodles of solid-state. For the rest of us, the use of an expensive resource such as solid-state storage needs to be optimized.
The jobs that are typically associated with solid-state in VDI environments are boot storms and virus scans. Both are read-heavy operations, so they’re well-suited to solid-state implemented as cache, where it can help smooth out the impacts of peak demands.
But if you need to address latency issues with regular workloads, then you’ll usually have more success implementing solid-state storage as a tier. (Mixed or write-intensive workloads need the immediate persistence of a tier, since every write must invariably make its way through to a protected storage layer.) Of course, the downside of a tier is that you’ll likely need some form of redundancy. Whichever route you go, the question is cost vs. benefit. Achieving a steady state of good boots plus low latency is not a sport for wimps and, as with all things in storage, it pretty quickly gets back to money.
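The cache-vs-tier distinction can be made concrete with a minimal sketch. The class names and in-memory dictionaries are purely illustrative stand-ins for flash and HDD layers: a read cache holds a disposable copy (the backing store stays authoritative), while a tier holds the only copy and therefore needs protection of its own:

```python
# Minimal model of the cache-vs-tier distinction (illustrative only).

class ReadCache:
    """Flash as a read cache: safe to lose, writes pass straight through."""
    def __init__(self, backing):
        self.backing = backing        # authoritative HDD layer
        self.flash = {}               # volatile copy; losing it costs nothing

    def read(self, key):
        if key not in self.flash:
            self.flash[key] = self.backing[key]   # populate on a miss
        return self.flash[key]

    def write(self, key, value):
        self.backing[key] = value     # write-through to protected storage
        self.flash.pop(key, None)     # invalidate any stale cached copy

class Tier:
    """Flash as a tier: the only copy lives here, so it needs redundancy."""
    def __init__(self):
        self.flash = {}

    def write(self, key, value):
        self.flash[key] = value       # persisted here and nowhere else

hdd = {"os.img": b"boot-blocks"}
cache = ReadCache(hdd)
print(cache.read("os.img"))           # served from HDD, then cached
# → b'boot-blocks'
```

This is why read-heavy boot storms suit a cache, while write-intensive workloads push you toward a (redundant) tier.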
Should the solid-state you use be a mixed hard disk drive (HDD)/flash solution or flash-only?
The glib answer is to know what you need and what you can afford. While some all-flash arrays use compression and deduplication to get close to HDD costs for performance apps, the overall economic attraction of a mixed storage environment is hard to argue with. That said, you can run a fair number of golden images and VDI boots off just a few terabytes of server-based (and non-controller-constrained) solid-state PCIe cards. Just remember that you may need a way to ensure availability and data access across servers, either via a software tool or via a physical copy. For the rest of this discussion, let’s assume that you’re constrained to a limited amount of solid-state storage and naturally want to get the biggest operational bang for your buck.
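The economics here are easy to rough out. The per-gigabyte prices and dedupe ratio below are illustrative assumptions only (ballpark figures of the era), not quotes; the point is the shape of the comparison, not the exact numbers:

```python
# Rough cost comparison: all-flash vs. mixed HDD/flash for a VDI store.
# All prices and ratios are illustrative assumptions.

FLASH_PER_GB = 5.00   # assumed $/GB for enterprise flash
HDD_PER_GB   = 0.50   # assumed $/GB for enterprise HDD
DEDUPE_RATIO = 5.0    # assumed dedupe/compression gain on near-identical images

def all_flash_cost(total_gb):
    # VDI images dedupe well, shrinking the effective flash footprint.
    return total_gb / DEDUPE_RATIO * FLASH_PER_GB

def mixed_cost(total_gb, hot_fraction=0.1):
    # Keep only the "hot" slice (golden images, boot data) on flash.
    hot = total_gb * hot_fraction
    return hot * FLASH_PER_GB + (total_gb - hot) * HDD_PER_GB

total = 15_000  # GB, an illustrative total image capacity
print(f"all-flash: ${all_flash_cost(total):,.0f}  mixed: ${mixed_cost(total):,.0f}")
```

With these assumed numbers the two approaches land surprisingly close, which is exactly why dedupe-heavy all-flash arrays can compete for VDI; shift the dedupe ratio or hot fraction and the answer flips.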
Can you configure VDI and SSD so that VDI gets to use it for particular occurrences (like boot storms) and then it’s available to other applications when it’s not needed by the VDI?
The quick answer is yes, but this is a key area to discuss with both your VDI and storage vendors. Don’t assume anything -- you need to ask. There are some very sophisticated tiering options available now. While the marketing materials may all sound the same, look for something that allows time-based “pinning” of data (e.g., all boot data is kept on solid-state between certain hours), offers some level of quality of service (QoS) or has near-real-time tiering to ensure optimum responsiveness. Once past the boot storm, you really don’t want that data on solid-state storage, but remember that a) you can’t split OS files so that some get hit on boot and some don’t; and b) the granularity of a hypervisor is a LUN. In terms of fine-tuning, the optimum approach is to find an automated tool that delivers your prescribed QoS as closely as possible.
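Conceptually, time-based pinning is just a schedule-driven placement policy. The sketch below is a hypothetical illustration -- the tier names, LUN tags and boot-window hours are invented, and real tiering products expose this logic through their own policy engines, not code like this:

```python
# Conceptual sketch of time-based "pinning": boot LUNs live on the
# solid-state tier only during the morning boot window, then the flash
# is released for other applications. All names and hours are hypothetical.

from datetime import time

BOOT_WINDOW = (time(7, 0), time(9, 30))   # assumed boot-storm hours

def tier_for(lun_tag, now):
    """Decide which tier a LUN belongs on at a given time of day."""
    start, end = BOOT_WINDOW
    if lun_tag == "boot" and start <= now <= end:
        return "ssd"          # pin boot LUNs to flash during the storm
    return "hdd"              # otherwise leave the flash free for other apps

print(tier_for("boot", time(8, 0)))    # during the window → ssd
print(tier_for("boot", time(14, 0)))   # after the window → hdd
```

Note that the policy operates on whole LUNs, reflecting the granularity constraint mentioned above: you can’t pin half an OS image.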
What other benefits are there for VDI from using solid-state storage?
Beyond the boot storm issue, solid-state storage can help with virus scans and general latency. It can also help pack storage into a limited floor-space, power and cooling footprint, delivering a physically and economically consolidated storage back end to match the consolidation effort of the VDI itself.
There’s a lot of “specsmanship” in the solid-state world, so be very clear on what you need and want to achieve; then look for a proof-of-concept and/or guarantees.
Mark Peters is a senior analyst at the Enterprise Strategy Group focused on storage systems.
This was first published in March 2012