I've heard mixed messages about deploying SSDs. On one hand, some say that you will see a performance increase if you put flash into an existing array as a high-performance Tier 0. Others say that doesn't really work and that the array needs to be designed specifically for SSDs. What do you think?
There are really two questions here. The first is, will a user always see a performance increase when SSDs are added to an existing array as Tier 0 storage or as a cache in the array? The answer is no, not always. Applications that are primarily sequential in their reads and writes will most likely see limited to no performance improvement over HDDs, because HDDs already stream sequential data efficiently; the SSD advantage shows up mainly in random I/O, where HDDs pay a mechanical seek penalty on every operation.
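To see why the workload pattern matters so much, consider a rough back-of-the-envelope model. All the device numbers below are hypothetical, round figures chosen only for illustration, not specifications of any real drive; the model simply charges each random I/O an average positioning cost that sequential I/O avoids.

```python
def effective_mbps(seq_mbps, avg_access_ms, io_size_kb, random_fraction):
    """Rough throughput model: every random I/O pays a positioning penalty.

    seq_mbps        -- raw sequential transfer rate of the device (MB/s)
    avg_access_ms   -- average seek/access latency per random I/O (ms)
    io_size_kb      -- size of each I/O request (KB)
    random_fraction -- 0.0 for purely sequential, 1.0 for purely random
    """
    io_size_mb = io_size_kb / 1024
    transfer_s = io_size_mb / seq_mbps                 # time to move the data
    access_s = (avg_access_ms / 1000) * random_fraction  # positioning overhead
    return io_size_mb / (transfer_s + access_s)

# Hypothetical devices: a 200 MB/s HDD with an 8 ms seek,
# and a 500 MB/s SSD with a 0.05 ms access time.
hdd_seq  = effective_mbps(200, 8.0,  64, random_fraction=0.0)
hdd_rand = effective_mbps(200, 8.0,  64, random_fraction=1.0)
ssd_seq  = effective_mbps(500, 0.05, 64, random_fraction=0.0)
ssd_rand = effective_mbps(500, 0.05, 64, random_fraction=1.0)
```

Under these assumed numbers, the sequential speedup from swapping HDD for SSD is modest (bounded by the raw transfer-rate ratio), while the random-I/O speedup is more than an order of magnitude, because the HDD's seek time dominates every random operation.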
The second question is whether an array must be architected for SSD performance to experience significant performance improvement. Putting SSDs into an array that was not architected to maximize their performance will still most likely yield substantial gains. However, the rest of the array's architecture will likely limit those gains by shifting the performance bottleneck. Instead of the HDDs being the bottleneck, it will shift to the back end of the array (the I/O path between the array's CPU and the SSDs), to the CPU's storage bus, or even to the CPU itself. There is no free lunch.
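The bottleneck-shifting argument can be sketched as a simple pipeline model: end-to-end throughput is bounded by the slowest stage in the data path. The stage names and throughput figures below are hypothetical, chosen only to illustrate the shift, not to describe any particular array.

```python
def array_throughput_mbps(stages):
    """End-to-end throughput is limited by the slowest stage in the path."""
    return min(stages.values())

# Hypothetical legacy array: the HDD tier is the slowest stage.
hdd_array = {
    "media (HDD tier)": 800,    # MB/s, illustrative only
    "back-end bus":     2400,
    "controller CPU":   3200,
}

# Retrofit the same array with an SSD Tier 0: only the media stage changes.
ssd_retrofit = dict(hdd_array)
del ssd_retrofit["media (HDD tier)"]
ssd_retrofit["media (SSD tier 0)"] = 6000

before = array_throughput_mbps(hdd_array)     # limited by the HDDs
after = array_throughput_mbps(ssd_retrofit)   # limited by the back-end bus
```

With these assumed numbers the retrofit triples throughput, which is a real gain, but the array now tops out at its back-end bus speed and the SSDs' headroom beyond that is wasted, which is exactly why ground-up SSD architectures redesign the whole path rather than one stage.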
An array architected from the ground up specifically for SSD performance gains looks holistically at the entire array's performance architecture to optimize end-to-end performance. The downside is that these arrays lack the significant production maturity that only comes with time.
There is no single best choice. It will come down to application requirements for performance, cost, data protection, disaster recovery and more. In many cases, implementing SSDs within a current-generation array is good enough. In others, it is not.