Automated tiering has become a popular offering among most major storage vendors because, typically, 20% of the data on a system accounts for 80% of the activity. The ability to automatically and intelligently move part of a LUN or volume between tiers lets a system exploit that skew without manual intervention.
In these systems, data is first written to the highest-performance tier; as it becomes less active, it migrates to lower-cost, lower-performance tiers. If activity increases, pages are automatically promoted back to a higher-performance tier. Sub-LUN auto-tiering architectures make highly efficient use of storage resources and improve performance. They can also reduce cost: fewer high-performance HDDs are required, SSD effectiveness is maximized, manual data classification is eliminated, and power, space and cooling requirements are reduced.
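The promote/demote cycle described above can be sketched as a small simulation. This is a hypothetical illustration, not any vendor's implementation: the tier names, access counters and thresholds are all assumptions.

```python
from collections import defaultdict

# Illustrative thresholds (assumed, not from any product): accesses per
# rebalance interval that earn a promotion or trigger a demotion.
PROMOTE_THRESHOLD = 100
DEMOTE_THRESHOLD = 10

class TieringEngine:
    """Toy sub-LUN auto-tiering sketch: hot pages live on 'ssd', cold on 'hdd'."""

    def __init__(self):
        self.tier = {}                    # page_id -> "ssd" or "hdd"
        self.hits = defaultdict(int)      # access counts for this interval

    def write(self, page_id):
        # New data lands on the highest-performance tier first.
        if page_id not in self.tier:
            self.tier[page_id] = "ssd"
        self.hits[page_id] += 1

    def read(self, page_id):
        self.hits[page_id] += 1

    def rebalance(self):
        # Runs periodically: demote pages that went cold, promote hot ones.
        for page_id, tier in self.tier.items():
            count = self.hits[page_id]
            if tier == "ssd" and count < DEMOTE_THRESHOLD:
                self.tier[page_id] = "hdd"
            elif tier == "hdd" and count >= PROMOTE_THRESHOLD:
                self.tier[page_id] = "ssd"
        self.hits.clear()
```

A page written once and then ignored is demoted on the next rebalance; if reads later exceed the promotion threshold within an interval, it moves back to the fast tier.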
Where and how should automated tiering be applied in an environment?
The nice thing about sub-LUN automated tiering storage products is that they can benefit most random-I/O-intensive environments. They work well in OLTP (online transaction processing) database environments, where a small part (<10%) of the data set is typically accessed frequently. If that frequently accessed portion fits within the capacity of the high-performance media, performance can increase dramatically: the database can handle more transactions at much lower response times. This is not to be confused with placing redo logs, temp tables and indexes on solid-state storage, which also increases database performance.
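The "fits within the high-performance media" test above is simple sizing arithmetic. A minimal sketch, with entirely assumed capacities:

```python
# Back-of-envelope check (hypothetical numbers): does the frequently
# accessed slice of an OLTP database fit in the high-performance tier?
db_size_gb = 2000      # total database capacity (assumed)
hot_fraction = 0.10    # the article's rough figure: <10% of the data is hot
ssd_tier_gb = 400      # installed SSD tier capacity (assumed)

hot_set_gb = db_size_gb * hot_fraction
fits = hot_set_gb <= ssd_tier_gb
print(f"hot set: {hot_set_gb:.0f} GB, fits in SSD tier: {fits}")
```

With these numbers the 200 GB hot set fits comfortably in a 400 GB SSD tier, so most transactional I/O would be served from solid state.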
Virtual desktop infrastructures also benefit from sub-LUN auto-tiering storage products, especially for cloning, when the gold master is kept and accessed on high-performance storage. Server virtualization produces a highly random, I/O-intensive workload, and consolidating virtual servers on auto-tiering storage has been shown to improve performance for the overall virtual infrastructure.
Optimal amount of data per page
Does sub-LUN page size really matter, and is there an optimal chunk or block size? The benefit of sub-LUN auto tiering comes from more granular control of data movement from tier to tier. Whether the page is 15 MB or 1 GB, the performance, overhead and cost of that data movement depend on the workload presented to the storage.
One of the main effects of this level of granularity is how much overhead capacity is consumed on the more expensive, high-performance media. The optimal page size depends on the workload. A small-block random workload (<65 KB) moves less overhead between tiers with a small page size than with a large one, such as 1 GB. A large-block random workload, by contrast, may minimize its overhead with a larger sub-LUN page size, which also keeps metadata management overhead down.
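The capacity overhead is easy to quantify. In the worst case for a small-block random workload, each hot block sits in a different page, and promoting a block drags its whole page onto the fast tier. A sketch with assumed numbers:

```python
# Illustrative math (assumed workload): 10,000 scattered hot 64 KB blocks,
# worst case one hot block per page, so each promotion carries a full page
# of mostly cold neighbors into the expensive tier.
KB, MB, GB = 1024, 1024**2, 1024**3

def promoted_gb(hot_blocks, page_size):
    # Worst-case capacity consumed on the fast tier, in GB.
    return hot_blocks * page_size / GB

hot_blocks = 10_000
useful_gb = hot_blocks * 64 * KB / GB    # ~0.61 GB of truly hot data
print(f"useful hot data: {useful_gb:.2f} GB")
print(f"promoted @ 15 MB pages: {promoted_gb(hot_blocks, 15 * MB):.0f} GB")
print(f"promoted @ 1 GB pages:  {promoted_gb(hot_blocks, 1 * GB):.0f} GB")
```

Under these assumptions, roughly 0.6 GB of hot data drags about 146 GB onto the fast tier with 15 MB pages, but about 10,000 GB with 1 GB pages, which is why small-block random workloads favor small pages.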
Why is managing metadata overhead important? Each page has associated metadata, so the smaller the page size, the more metadata there is to manage. In addition, how automated tiered storage software stores and manages this metadata can matter more for performance and efficiency than the page size itself. Especially if the metadata is held in memory, performance can suffer as its volume grows.
About the author:
Leah Schoeb is a senior partner at Boulder, Colo.-based Evaluator Group.
This was first published in February 2013