Automated tiering software elevates “hot” data to the highest I/O tier, which is often solid-state storage. That software is crucial to the optimization of an SSD tier and makes the difference between a true Tier 0 and a merely passive extended cache. But the details of how tiering software actually moves data will rarely be the deciding factor in purchasing an SSD tier. Instead, most organizations will choose Tier 0 devices either because they fit into their vendor strategy or because the device is best-in-class for the intended purpose.
Nevertheless, knowing how a product's tiering software functions will help storage managers match it up with the data access characteristics of specific applications. Two primary factors determine the efficacy of tiering software: the size of the data blocks moved and how frequently data is moved.
As with most technology alternatives, there is no single best answer for the balance between frequency and size of movement. More frequent data movement reacts faster to suddenly “hot” data, elevating it in near real-time. However, the more frequently data is moved, the more I/O it consumes that could otherwise serve application requests. Constant data movement could result in unproductive thrashing. So, while tiering software may monitor data access patterns continuously, most products move data only at set intervals. For example, XIOtech’s Continuous Adaptive Data Movement moves data between tiers every 15 seconds, whereas IBM’s Easy Tier makes adjustments no more than every five minutes.
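The monitor-then-move cycle described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual algorithm: access counts accumulate during the interval, and promotion happens only when the interval elapses, which caps the I/O spent on data movement. The class and parameter names are invented for this example.

```python
from collections import Counter

class TieringMonitor:
    """Hypothetical sketch of interval-based tiering (not vendor code)."""

    def __init__(self, interval_s=15, promote_top_n=2):
        self.interval_s = interval_s        # e.g., 15 s (XIOtech) vs. 300 s (IBM)
        self.promote_top_n = promote_top_n  # cap on blocks moved per cycle
        self.access_counts = Counter()
        self.elapsed = 0.0

    def record_access(self, block_id):
        """Count an I/O against a block; monitoring is continuous."""
        self.access_counts[block_id] += 1

    def tick(self, seconds):
        """Advance time; return the blocks to promote once the interval elapses."""
        self.elapsed += seconds
        if self.elapsed < self.interval_s:
            return []                       # keep monitoring, move nothing yet
        self.elapsed = 0.0
        hot = [b for b, _ in self.access_counts.most_common(self.promote_top_n)]
        self.access_counts.clear()          # start a fresh measurement window
        return hot

monitor = TieringMonitor(interval_s=15, promote_top_n=1)
for _ in range(100):
    monitor.record_access("block-A")
monitor.record_access("block-B")
print(monitor.tick(15))   # ['block-A'] -- only the hottest block is promoted
```

A shorter interval makes the loop more reactive but also means `tick` returns candidates for movement more often, which is exactly the thrashing tradeoff described above.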
The size of the data block determines how finely Tier 0 data can be tuned. Smaller data blocks, which may be as small as 4 KB in the case of NetApp Flash Cache, are perhaps best adapted to numerous small files, as in file-serving environments. Frequent small-block movement can continuously tweak Tier 0 for full optimization. Large data blocks are better suited to database tables, where related data is likely to be in demand at the same time. The same applies to large files such as music or video files. Products that move up to 1 GB at a time, such as IBM Easy Tier, may be better suited to these workloads. EMC FAST can range from 768 KB blocks up to 1 GB, as determined by the system. Although FAST can be “set and forget,” users wishing to manually tune the tier movement may do so. HP’s 3PAR arrays can move “chunks” of data ranging from 32 MB up to 1 GB.
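A quick calculation shows why chunk size matters: when a single hot 4 KB page is promoted, everything else in its chunk comes along for the ride. The sketch below is purely illustrative (the function and offsets are invented, not taken from any product) and compares the three granularities mentioned above.

```python
# Illustrative sketch (not vendor code): chunk size determines how much
# data migrates to SSD along with one hot 4 KB page.

KB, MB, GB = 1024, 1024**2, 1024**3

def chunk_for(offset_bytes, chunk_size):
    """Return the (start, end) byte range of the chunk containing an offset."""
    start = (offset_bytes // chunk_size) * chunk_size
    return start, start + chunk_size

hot_offset = 5 * GB + 123 * KB   # one hot 4 KB page deep inside a volume

for chunk_size in (4 * KB, 32 * MB, 1 * GB):
    start, end = chunk_for(hot_offset, chunk_size)
    print(f"chunk size {chunk_size // KB:>8} KB -> "
          f"{(end - start) // KB} KB of SSD consumed per promotion")
```

Fine granularity wastes no SSD capacity on cold neighbors, which suits many small independent files; coarse granularity effectively prefetches related data (database tables, media files) at the cost of Tier 0 capacity.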
If administrators had to make the decisions needed to optimize Tier 0 data movement size and frequency themselves, the task would be daunting indeed. Fortunately, vendors include monitoring software that can track data access by block and make the appropriate determination. Nevertheless, administrators should be aware of how their chosen solution works in order to predict performance and deploy it accordingly. Vendors strive to make their SSD optimization software as broadly applicable as possible.
Nevertheless, no product can be all things to all people. Understanding how these products work may not lead administrators to change their functionality, but it will help them understand how specific results are achieved and whether to adapt the deployment accordingly.