Have concerns about data loss with write caching products been addressed? Or is data loss still possible if the cache fails?
Write caching improves performance by acknowledging to the application that a write has been successfully stored as soon as the data lands in the cache storage area, rather than waiting for it to reach the hard disk storage area. This means there is a window of time during which, if the cache fails for some reason, data loss will occur. Failure can happen in two situations: the flash module being used for cache storage fails, or the server itself fails.
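The acknowledgement-before-destage behavior described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the class and method names are hypothetical.

```python
# Minimal write-back cache sketch. The application gets an "ack" as soon
# as data lands in the fast cache tier; the destage to the backing disk
# happens later. Between write() and flush() is the window in which a
# cache failure loses the acknowledged data.

class WriteBackCache:
    def __init__(self):
        self.cache = {}   # fast tier (e.g., flash) holding dirty data
        self.disk = {}    # slow backing store (hard disk)

    def write(self, key, value):
        self.cache[key] = value   # stored in cache only
        return "ack"              # acknowledged before reaching disk

    def flush(self):
        # Destage dirty data to the backing store, then clear the cache.
        self.disk.update(self.cache)
        self.cache.clear()

c = WriteBackCache()
c.write("block7", b"payload")
# The application now believes the write is safe, but the data exists
# only in cache; a cache failure at this point loses "block7".
c.flush()   # now durable on disk
```

The risk the article describes is exactly the gap between the `return "ack"` and the later `flush()`.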
Failure of the flash module storing cache data is a real concern if the cache turnover rate is too high -- that is, if the system constantly has to update and replace the data held in cache, wearing the flash out faster. To protect against this, users should implement a larger cache and mirror the cache drive in the server. Ideally, that mirroring should be done by the cache software itself, which provides seamless operation in the event of a cache failure.
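Mirroring done by the cache software, as suggested above, can be pictured as every cached write going to two flash modules, with reads falling back transparently if one module fails. The sketch below is illustrative only; the names are hypothetical and do not reflect any specific product.

```python
# Hypothetical sketch of cache-software mirroring: each cached write is
# stored on two flash modules, so the failure of one module does not
# lose the dirty data, and reads continue seamlessly.

class MirroredCache:
    def __init__(self):
        self.primary = {}   # first flash module
        self.mirror = {}    # second flash module

    def write(self, key, value):
        self.primary[key] = value
        self.mirror[key] = value   # mirrored copy on the second module

    def read(self, key):
        # Seamless operation: fall back to the mirror if the primary
        # module has failed or lost the entry.
        try:
            return self.primary[key]
        except KeyError:
            return self.mirror[key]

    def fail_primary(self):
        self.primary.clear()       # simulate a flash module failure

m = MirroredCache()
m.write("block7", b"payload")
m.fail_primary()
m.read("block7")   # still served from the mirror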
None of the above steps can protect against a server failure. To guard against that, the caching software should be able to extend the cache to a shared flash device on the storage network. Inbound data can then be written to both devices for reliability, but read from the server-side instance for performance. If the server fails, the cache software needs the capability to recognize that a failure event occurred and check the shared copy first on restart.
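The dual-write and restart-recovery flow described above can be sketched as follows. This is a simplified model under assumed names (`ServerCache`, `recover`); it is not a real caching product's API.

```python
# Sketch of extending the cache to a shared flash device: writes go to
# both the server-side cache and the shared device; reads prefer the
# local copy for performance. After a server failure, the restarted
# cache software repopulates from the shared copy before serving reads.

shared_flash = {}   # shared flash device on the storage network

class ServerCache:
    def __init__(self, shared):
        self.local = {}      # fast server-side cache
        self.shared = shared # reliable shared copy

    def write(self, key, value):
        self.local[key] = value    # performance path
        self.shared[key] = value   # reliability path

    def read(self, key):
        # Prefer the server-side instance; fall back to the shared copy.
        return self.local.get(key, self.shared.get(key))

    def recover(self):
        # On restart after a server failure the local cache is empty;
        # check the shared copy first and repopulate from it.
        self.local.update(self.shared)

node = ServerCache(shared_flash)
node.write("block7", b"payload")
# Server crash: the local, server-side cache is lost...
node = ServerCache(shared_flash)
node.recover()
node.read("block7")   # ...but the data survives on the shared device
```

The design trade-off mirrors the article's point: the shared write costs some latency on every write, but it is what makes the cache contents survive the loss of the whole server.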
This was first published in February 2014