Flash options in the array vs. server
What is distributed server-side caching and why is it important?
Distributed server-side caching aggregates the SSD resources of multiple servers, typically hosts running a hypervisor such as VMware, Xen or Hyper-V, and presents them as a single shared cache to the entire hypervisor cluster. As with any cache, the most active data is stored in this flash pool.
It is important for a number of reasons. First, it lets servers in the cluster that have no SSDs of their own access the shared pool of solid-state storage, which saves money because not every server has to be populated with flash. Second, it makes write caching safer: the shared pool can be made redundant through mirroring or RAID across multiple nodes, so a flash drive, or even an entire host, can fail and all data remains accessible. Third, a distributed server-side cache does not break virtual machine migration; because the cache is globally available, virtual machines can move between hosts as they normally do.
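The redundancy point above can be illustrated with a minimal sketch. This is not the API of any real caching product; the class and node names are hypothetical, and in-memory dictionaries stand in for each host's SSD. The idea is simply that every cached write lands on two different nodes, so losing any one host still leaves the data readable:

```python
# Minimal sketch of mirrored write caching across hosts (hypothetical names;
# dicts simulate each host's SSD).

class Node:
    """One cache node, backed by a single host's flash device."""
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.alive = True

class DistributedWriteCache:
    """Writes each block to a primary node and a mirror on a different host."""
    def __init__(self, nodes):
        self.nodes = nodes

    def _placement(self, key):
        # Hash-based placement; the mirror always lands on a different node.
        i = hash(key) % len(self.nodes)
        return self.nodes[i], self.nodes[(i + 1) % len(self.nodes)]

    def put(self, key, block):
        primary, mirror = self._placement(key)
        primary.store[key] = block
        mirror.store[key] = block   # mirrored copy is what makes write caching safe

    def get(self, key):
        # Any surviving replica can serve the read after a node failure.
        for node in self._placement(key):
            if node.alive and key in node.store:
                return node.store[key]
        return None   # cache miss: fall back to the backing array

cache = DistributedWriteCache([Node("host-a"), Node("host-b"), Node("host-c")])
cache.put("vm1:block42", b"dirty data")

primary, _ = cache._placement("vm1:block42")
primary.alive = False   # simulate losing an entire host
assert cache.get("vm1:block42") == b"dirty data"   # mirror still serves the block
```

Because placement depends only on the key, a virtual machine that migrates to another host computes the same node pair and keeps hitting its cached blocks, which is the property that keeps VM migration intact.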
The technology does have some downsides, however. Most notably, it adds latency, because the cache is aggregated by networking all of the server nodes together. That also means extra attention, and potentially extra investment, must go into the server network to ensure performance.
Related Q&A from George Crump
George Crump of Storage Switzerland offers insight on finding the best choice for backing up Active Directory in this Expert Answer.
George Crump of Storage Switzerland discusses whether specialized tools are necessary to back up Linux environments.
George Crump of Storage Switzerland discusses how data loss issues can be mitigated in write caching products.