SSD arrays are often called Tier 0, the SSD tier or the cache tier. An SSD array solves the problem of aggregate I/O throughput without resorting to massive numbers of spindles and inefficiently populated HDDs.
All-SSD storage arrays are often overlooked as a solution in their own right. These standalone devices provide additional benefits, such as optimized data location and the flexibility of device-independent deployment. They do add a group of devices to be managed, so the benefits must outweigh the added administration and maintenance. Here are four use cases where an SSD array might be just the right solution.
Use case #1: Front end for virtualized internal cloud storage. Organizations reduce costs by increasingly using lower-cost Tier 3 (SATA) storage, especially for internal clouds. Unfortunately, these high-capacity, low-performance drives may not deliver sufficient I/O throughput. In these situations, IT managers should consider solid-state arrays paired with automated-tiering software to manage data movement between tiers.
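To make the automated-tiering idea concrete, here is a minimal sketch of a promote/demote policy that elevates frequently accessed blocks to the SSD tier. All names, thresholds and the sampling-window approach are illustrative assumptions, not any vendor's actual algorithm.

```python
from collections import Counter

PROMOTE_THRESHOLD = 3  # accesses per sampling window (assumed value)

class TieringPolicy:
    """Toy automated-tiering policy: hot blocks go to SSD, cold to SATA."""

    def __init__(self):
        self.access_counts = Counter()
        self.tier = {}  # block_id -> "ssd" or "sata"

    def record_access(self, block_id):
        self.access_counts[block_id] += 1
        self.tier.setdefault(block_id, "sata")  # new blocks land on SATA

    def rebalance(self):
        # Promote hot blocks to the SSD tier; demote cold ones to SATA.
        for block_id, count in self.access_counts.items():
            self.tier[block_id] = "ssd" if count >= PROMOTE_THRESHOLD else "sata"
        self.access_counts.clear()  # start a new sampling window

policy = TieringPolicy()
for block in ["a", "a", "a", "b"]:
    policy.record_access(block)
policy.rebalance()
print(policy.tier)  # {'a': 'ssd', 'b': 'sata'}
```

Real tiering software works at the sub-LUN or block-range level and weighs recency as well as frequency, but the promote/demote cycle above captures the basic mechanism.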
Use case #2: Data location for file services. Global organizations that collaborate on common files can use SSD arrays to position the data geographically or simply elevate it to the fastest tier for maximum throughput. The information could include engineering documents or video development files. For Internet content providers, this may include video files, music files or other downloads.
Use case #3: VDI boot storms. Many organizations boot servers and virtual desktops from network storage to ensure a common image, as well as to simplify updates and maintenance. With standard HDD arrays, the resulting traffic jam during "rush hours" caused performance poor enough that some organizations shied away from the practice. SSD's exceptional random read performance makes it a perfect way to beat the boot storm.
Use case #4: Hybrid cloud. Hosting infrequently used information on public cloud storage can be an excellent way to reduce the cost of maintaining that data. However, the latency of this configuration may be unacceptable. By positioning an SSD array in the home data center to cache "hot" data from the remote cloud, IT organizations can have the best of both worlds: on-premises high-performance storage and massive storage capacity offsite. Of course, the network interconnect requires redundancy to ensure full availability.
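The hot-data caching pattern described above can be sketched as a simple LRU read cache, where the local SSD array fronts the slower cloud back end. The backend dictionary, capacity and object names here are hypothetical stand-ins for a real object store and array.

```python
from collections import OrderedDict

class SsdCache:
    """Illustrative LRU read cache: an on-premises SSD array fronting
    slower public-cloud object storage (simulated by a dict)."""

    def __init__(self, backend, capacity=2):
        self.backend = backend      # stands in for the remote cloud store
        self.capacity = capacity    # SSD array capacity, in objects
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)      # mark as recently used
            return self.cache[key]
        self.misses += 1                     # slow path: fetch over the WAN
        value = self.backend[key]
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the coldest object
        return value

cloud = {"report.pdf": b"...", "video.mp4": b"...", "log.txt": b"..."}
cache = SsdCache(cloud)
cache.read("report.pdf")   # first read goes to the cloud
cache.read("report.pdf")   # second read is served from the local SSD tier
print(cache.hits, cache.misses)  # 1 1
```

Production caching appliances add write handling, consistency checks and prefetching, but the hit/miss/evict cycle is the core of how a local SSD tier hides cloud latency for hot data.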
In all implementations, SSD solves the I/O data delivery problem. The trick to getting the desired results is determining exactly where that problem exists. Moreover, simply increasing aggregate data throughput without considering possible countervailing latency introduced by network connections may result in a lot of money spent without a lot of gain.
If the potential uses of Tier 0 and SSD arrays sound similar, they are. The difference between them is data location. When devices are within the same data center, Tier 0 is likely to fulfill the requirements without adding more devices to manage. While SSD arrays can also meet the performance requirements within data centers, they are most applicable for facilitating rapid data access when devices are geographically diverse. So, it boils down to two considerations: time and distance. Address those two considerations and you’re on the way to the right decision.
Phil Goodwin is a storage consultant and freelance writer.