What is the difference between SSD caching and tiering?
Tiering for SSDs treats the SSD as what I call "primary storage": the user decides what data to place on the SSD and when to place it there. Tiering can be performed manually or by automated tiering software running on the host or in the storage controller. Tiering is all about moving specific hot data to the SSD tier at the right time and then moving it back to a slower tier, again at the right time. If tiering is performed manually, the administrator must observe I/O activity over time and decide when to move particular files or data sets. If tiering is performed by automated tiering software, data movement occurs at scheduled times, based on policies set by the administrator. Tiering benefits only the applications whose data is moved to the faster tier, but the performance boost is immediate.
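As a rough illustration of the automated case, the sketch below shows a policy-driven tiering loop: I/O activity is counted over an observation window, and at a scheduled time data above a "hot" threshold is moved to the fast tier while cold data is moved back. All names and thresholds here are hypothetical, not taken from any particular product.

```python
from collections import Counter

class TieringPolicy:
    """Minimal sketch of policy-based tiering: hot data is *moved*
    (not copied) between a slow tier and a fast SSD tier."""

    def __init__(self, promote_threshold=100, demote_threshold=10):
        self.promote_threshold = promote_threshold  # accesses/window to promote
        self.demote_threshold = demote_threshold    # accesses/window to demote
        self.access_counts = Counter()              # per-file I/O counts this window
        self.fast_tier = set()                      # files currently on the SSD tier

    def record_io(self, path):
        """Observe one I/O against a file (manual tiering would be an
        administrator eyeballing these numbers instead)."""
        self.access_counts[path] += 1

    def run_scheduled_migration(self):
        """Run at a scheduled time (e.g. nightly). Moves data between
        tiers per policy, then resets counters for the next window."""
        moves = []
        for path, count in self.access_counts.items():
            if count >= self.promote_threshold and path not in self.fast_tier:
                self.fast_tier.add(path)
                moves.append(("promote", path))
        for path in list(self.fast_tier):
            if self.access_counts[path] <= self.demote_threshold:
                self.fast_tier.discard(path)
                moves.append(("demote", path))
        self.access_counts.clear()
        return moves
```

Note that promoted files get the full SSD speed immediately after the migration runs, which matches the "immediate boost, but only for the moved data" trade-off described above.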
SSD caching is managed by host software or by the storage controller. It places a copy of the data into the SSD cache without moving the original from the location known to users and applications. SSD caching is relatively simple to administer, because nearly all decisions are made by the caching software or controller. It benefits any application whose data is considered "hot," but the performance improvement is more gradual, increasing as more data is placed into the cache. This gradual improvement is called "warm-up" or "ramp-up" and can take minutes or hours, depending on the implementation and the volume of I/O. Caching can be read-only or cover both reads and writes, depending on the implementation.
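The copy-on-miss behavior and the warm-up effect can be sketched as a simple LRU read cache in front of slow storage. This is a toy model under assumed names (`SSDReadCache`, a dict standing in for the HDD backend), not any vendor's implementation:

```python
from collections import OrderedDict

class SSDReadCache:
    """Minimal sketch of an SSD read cache: data is *copied* into the
    cache on a miss; the original stays put on the slow backend."""

    def __init__(self, backend, capacity=4):
        self.backend = backend        # dict-like slow storage (stands in for HDD)
        self.capacity = capacity      # number of blocks the SSD cache can hold
        self.cache = OrderedDict()    # block -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # refresh LRU position on a hit
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backend[block]          # slow read from original location
        self.cache[block] = data            # copy into the SSD cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used copy
        return data

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Early reads all miss and go to the slow backend; as copies accumulate, the hit ratio climbs. That rising hit ratio is the "warm-up" or "ramp-up" period described above, and the backend is never modified, since the cache only holds copies.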
This was first published in January 2012