Are there specific concerns with using server-side caching in virtual server environments?
Yes, there are several. First, how does server-side caching impact virtual machine migration? If a read-only server-side cache is used with a shared storage array, there is little risk of data loss. The challenge is that when the virtual machine is migrated, its data must be re-qualified on the target server; until the target server's cache re-warms with that virtual machine's data, users will see unaccelerated hard drive performance. If the cache is a write cache, it must be flushed to the shared storage system before the virtual machine migrates.
An increasing number of caching software suppliers are integrating with hypervisors to manage these issues. Many now detect the migration event and take appropriate action, such as flushing the cache.
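The flush-on-migration behavior described above can be sketched as follows. This is a minimal illustration, not any vendor's actual integration; the class and callback names (`WriteBackCache`, `on_migration_event`) are invented for the example, and real hypervisor APIs differ.

```python
# Hypothetical sketch: destaging a server-side write cache when the
# hypervisor signals that a VM is about to migrate. All names here are
# illustrative, not a real hypervisor or caching-software API.

class WriteBackCache:
    def __init__(self):
        self.blocks = {}   # block_id -> data held in the local cache
        self.dirty = set() # block_ids not yet written to shared storage

    def write(self, block_id, data):
        # Writes land in the fast local cache first.
        self.blocks[block_id] = data
        self.dirty.add(block_id)

    def flush(self, shared_storage):
        """Destage every dirty block to the shared storage array."""
        for block_id in sorted(self.dirty):
            shared_storage[block_id] = self.blocks[block_id]
        self.dirty.clear()


def on_migration_event(cache, shared_storage):
    # Called by the (hypothetical) hypervisor integration just before the
    # VM's state moves to the target host: once no dirty blocks remain,
    # shared storage is current and the migration is safe.
    cache.flush(shared_storage)
```

The key point the sketch captures is ordering: the flush completes before the migration proceeds, so the shared array always holds the authoritative copy when the target host takes over.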
In addition, some caching vendors can mirror the cache to a shared SSD on the network. During normal operation, reads are served from the cache in the server, but if there is a server-side cache failure or if the virtual machine is migrated, all reads can come from the shared SSD on the network. This is also an ideal configuration for caching writes, thanks to the redundancy the mirror provides.
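The mirrored-cache read path can be illustrated with a short sketch: writes go to both the local server-side cache and the shared SSD mirror, and reads fail over to the mirror if the local cache is lost. The names are hypothetical, not drawn from any vendor's product.

```python
# Illustrative sketch of a mirrored server-side cache, assuming invented
# names: "local" stands for the in-server cache, "shared_ssd" for the
# mirror on a shared networked SSD.

class MirroredCache:
    def __init__(self):
        self.local = {}      # fast server-side cache (e.g., PCIe flash)
        self.shared_ssd = {} # mirror on the shared SSD

    def write(self, block_id, data):
        # Mirror every write, so no block ever exists only in the server.
        self.local[block_id] = data
        self.shared_ssd[block_id] = data

    def read(self, block_id):
        # Normal operation: serve reads from the local cache.
        if self.local is not None and block_id in self.local:
            return self.local[block_id]
        # Local cache failed or the VM migrated: fall back to the mirror.
        return self.shared_ssd.get(block_id)

    def fail_local(self):
        # Simulate a server-side cache failure.
        self.local = None
```

The redundancy is what makes this layout safe for write caching: even if the server-side device dies mid-workload, every acknowledged write still exists on the shared SSD.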
Some vendors also provide the ability to move the cache metadata between systems. When a migration event for a VM is triggered, its cache metadata is transferred to the target server. While the data itself still has to be reloaded from the shared storage system, the target cache knows exactly which blocks to fetch; no re-analysis of the workload is needed.
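A metadata handoff of this kind can be sketched in a few lines: only the identifiers of the hot blocks travel between hosts, and the target repopulates those blocks from shared storage. The function names are assumptions made for the example.

```python
# Hypothetical sketch of cache-metadata transfer on VM migration.
# Caches are modeled as plain dicts mapping block_id -> data.

def export_metadata(source_cache):
    # Only block identifiers cross the network, not the cached data.
    return list(source_cache.keys())

def prewarm(target_cache, metadata, shared_storage):
    # The target knows exactly which blocks to fetch, so the cache is
    # warm without any period of workload analysis.
    for block_id in metadata:
        target_cache[block_id] = shared_storage[block_id]
```

Compared with starting cold, the target skips the qualification phase entirely; the cost is one bulk read of the hot set from the shared array.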
This was first published in February 2014