Tip

Solid-state drives vs. hard disk drives: How to justify the cost of an SSD

What you'll learn: We'll tell you the best use cases for solid-state drives and why applications that directly impact revenue are a perfect fit for the technology.

There is much debate among data storage managers and vendors over when and where to use solid-state drives (SSDs).


While solid-state drives offer a significant performance improvement over traditional mechanical hard disk drives (HDDs), they do so at a higher cost. So in times of increasingly tight budgets, how do you know when and where to use solid-state drive technology in your enterprise data storage infrastructure?

The reality is that when storage I/O performance is more important than cost, solid-state drives are actually less expensive than hard disk drives. In that situation the comparison shifts from cost per terabyte to cost per IOPS (I/O operations per second). The key is identifying the application scenarios that justify the added cost of SSDs.
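As a rough illustration of how the math changes, the short Python sketch below compares the two metrics for a hypothetical hard disk drive and a hypothetical SSD. All of the prices, capacities and performance figures are assumptions made up for the example, not quoted specifications.

# Illustrative cost-per-TB vs. cost-per-IOPS comparison.
# Prices, capacities and IOPS figures are assumed for the example only.

def cost_metrics(name, price_usd, capacity_tb, iops):
    """Return the name, cost per TB and cost per IOPS for a drive."""
    return name, price_usd / capacity_tb, price_usd / iops

drives = [
    # name,            price,  TB,  sustained IOPS (assumed)
    cost_metrics("15K RPM HDD", 400, 0.3, 180),
    cost_metrics("Flash SSD", 3000, 0.2, 20000),
]

for name, per_tb, per_iops in drives:
    print(f"{name:12s} ${per_tb:9.2f}/TB ${per_iops:8.4f}/IOPS")

On a cost-per-TB basis the hard disk drive wins easily, but on a cost-per-IOPS basis the SSD is more than an order of magnitude cheaper, which is exactly the shift in scale described above.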

To justify the cost of an SSD, the application's performance has to either significantly impact company revenue or negatively affect customer experience. A third consideration is whether the application affects internal user productivity, but ironically, those cases are sometimes harder to justify.

What applications are good SSD candidates?

To determine if an application is a good candidate for SSD technology, you need to do some storage diagnostics to locate I/O bottlenecks. Prior to running diagnostics, it's important to look at the CPU utilization of the server that the application is running on. If CPU utilization is high, the bottleneck is more than likely somewhere other than storage. If CPU utilization is relatively low, less than 40% as a rule of thumb, then there's a high probability that you have a storage I/O problem. It's possible for a bottleneck to exist in the network or host bus adapter (HBA), but network bandwidth is usually addressed before solid-state drives are even considered.
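That rule of thumb can be expressed as a small triage function. The following is a minimal sketch in Python: the 40% CPU figure comes from the paragraph above, while the queue-depth and read-latency thresholds are assumed values chosen purely for illustration and should be tuned to your environment.

def likely_storage_bound(cpu_util_pct, avg_queue_depth, avg_read_latency_ms,
                         cpu_threshold=40.0, queue_threshold=2.0,
                         latency_threshold_ms=20.0):
    """Rough triage: does this server look like an SSD candidate?"""
    if cpu_util_pct >= cpu_threshold:
        # The server is busy on CPU, so storage is probably not the bottleneck.
        return False
    # Low CPU plus a backed-up disk queue or slow reads points at storage I/O.
    return (avg_queue_depth > queue_threshold or
            avg_read_latency_ms > latency_threshold_ms)

# Example: 25% CPU, average queue depth of 12, 35 ms average reads
print(likely_storage_bound(25, 12, 35))  # True -- worth investigating SSDs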

Most OSs have utilities that allow you to measure queue depth and response time to determine the potential ROI of an SSD. Perfmon for Windows, for example, can be used to capture this data. There are also more elaborate third-party tools and similar utilities available on other platforms.
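On Windows, for example, the standard processor and physical disk counters can be sampled from the command line with typeperf, which ships with the operating system. The sketch below assumes a Windows host and uses the _Total counter instances; swap in the instance names of the volumes your application actually lives on.

# Sample CPU utilization, disk queue depth and disk response time
# every 5 seconds, 12 times; typeperf emits the samples as CSV on stdout.
import subprocess

counters = [
    r"\Processor(_Total)\% Processor Time",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Read",
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Write",
]

result = subprocess.run(
    ["typeperf"] + counters + ["-si", "5", "-sc", "12"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)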

An array's queue depth is the number of pending I/O requests for a volume. This is where the practice of increasing performance by adding more drives to a RAID group comes from: each drive adds another channel for I/O and increases the array's ability to service I/O requests.

The challenge with this strategy is that performance-demanding applications can often generate enough storage I/O to create queue depths in the hundreds. Adding hundreds of drives is clearly not an option because of the cost and the physical space wasted by the resulting drive sprawl. But solid-state drives can help. In most cases, a single SSD can service a queue depth that would otherwise take hundreds of hard disk drives.
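To put rough numbers on that claim, the sketch below estimates how many spindles a heavy random-I/O workload would need versus how many SSDs. The workload and per-device IOPS figures are assumptions for illustration only.

# The workload and per-device IOPS figures below are assumed values.
workload_iops = 30000   # sustained random I/O the application generates
hdd_iops = 180          # roughly a 15K RPM drive on a random workload
ssd_iops = 20000        # a single flash SSD

hdds_needed = -(-workload_iops // hdd_iops)   # ceiling division
ssds_needed = -(-workload_iops // ssd_iops)

print(f"Hard disk drives needed: {hdds_needed}")    # 167
print(f"Solid-state drives needed: {ssds_needed}")  # 2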

Solid-state drives have extremely low latency compared with mechanical drives and respond to I/O requests almost instantly. Flash-based SSDs are especially strong at servicing read requests and are actually more cost-effective than RAM-based alternatives.

Most storage I/O is read-based, and read-based data typically needs higher capacity. An exception is high-write scenarios, such as database transaction logs. In those cases, small RAM-based solid-state drives may be the best fit.

Solid-state drive reliability concerns

While there are a lot of misconceptions about the reliability of solid-state drives, today's enterprise flash-based SSDs are actually extremely reliable, with most having a 10-year life expectancy. Reliability concerns with solid-state drives are mostly around write endurance, which is essentially how many times you can write to a cell on a flash chip.

Enterprise flash-based SSDs use single-level cell (SLC) technology, which stores only one bit of data in each cell. Conversely, multi-level cell (MLC) flash technology stores multiple bits of data in each cell. The result is that SLC-based systems offer faster write speeds, lower power consumption and higher cell endurance than MLC-based systems. Enterprise solid-state drives also use advanced controllers to make sure the cells are written to evenly, and they include extra cells to compensate for those that wear out.
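A back-of-the-envelope estimate shows why write endurance is rarely the limiting factor for enterprise SLC drives. The program/erase cycle count, write-amplification factor and daily write volume below are assumed figures for illustration; use the vendor's published endurance rating for real planning.

# All figures below are assumptions for the sake of the estimate.
capacity_gb = 200            # usable drive capacity
pe_cycles = 100000           # assumed SLC program/erase cycles per cell
write_amplification = 1.5    # assumed controller overhead
daily_writes_gb = 2000       # assumed application write volume per day

total_writable_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_writable_gb / daily_writes_gb / 365
print(f"Estimated write endurance: {lifetime_years:.0f} years")  # ~18 years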

As solid-state drives continue to come down in price, adoption of the technology will become increasingly widespread. For now, storage managers should focus on applying solid-state drives to those applications whose performance costs the company revenue or hurts the user experience.

BIO: George Crump is the lead analyst at Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.
 

This was first published in March 2010
