
When using SSD is a bad idea

As with most technologies, it’s as important to know when to use SSD as it is when not to use it. Here are some cases when using SSD is a bad idea.

SSD has established its position in the data center. Nearly all major vendors specify a Tier 0 in their best-practice architectures. Server-side SSD is being used to enhance server performance, and storage-side SSD eliminates the boot-storm bottleneck.

Don’t use SSD when applications are not read-intensive. SSD is brilliant for read-access times; it can outperform HDD by 10X or more. There is no free lunch, however: SSD loses much of its advantage in the write category. Writes not only lag, but they also wear out the SSD’s memory cells. Cells have a finite write endurance, after which they begin to fail (check with your vendor for the specifics of its system). As cells fail, overall performance degrades. Eventually, the SSD must be replaced to restore full performance, and we all know SSD is not cheap. Some vendors do offer extensive warranties.
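As a back-of-the-envelope illustration (not vendor guidance), endurance can be framed as a write budget: capacity times rated program/erase cycles, divided by how fast the host writes. All figures below are assumptions chosen for the example, not measurements of any product:

```python
def drive_life_years(capacity_gb, pe_cycles, host_writes_gb_per_day, waf=1.0):
    """Rough drive lifetime estimate from write endurance.

    capacity_gb            -- usable capacity in GB
    pe_cycles              -- rated program/erase cycles per cell
    host_writes_gb_per_day -- average host write volume
    waf                    -- write amplification factor (internal writes per host write)
    """
    total_writable_gb = capacity_gb * pe_cycles / waf
    return total_writable_gb / host_writes_gb_per_day / 365

# Assumed example: 200 GB drive, 10,000-cycle cells, 100 GB/day of host writes.
print(round(drive_life_years(200, 10_000, 100), 1))           # ~54.8 years at WAF 1
print(round(drive_life_years(200, 10_000, 100, waf=5.0), 1))  # ~11.0 years at WAF 5
```

The write amplification factor matters as much as the raw cycle rating: internal garbage collection can multiply every host write several times over.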

So what is the magic line for a read/write ratio? There probably isn’t one, but start with 90/10 as the ideal. Application requirements may dictate a compromise, but knowing the ratio permits IT managers to make a conscious decision. If the ratio falls below 50/50, an HDD is likely the better choice: from an application performance perspective, the SSD’s read advantage is being offset by its inferior write performance.
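One way to reason about the ratio is the time-weighted (harmonic-mean) throughput of a mixed workload: even a modest write fraction drags down effective IOPS when writes are much slower than reads. The device figures below are illustrative assumptions, not benchmarks:

```python
def effective_iops(read_fraction, read_iops, write_iops):
    """Throughput of a read/write mix (time-weighted harmonic mean)."""
    write_fraction = 1.0 - read_fraction
    return 1.0 / (read_fraction / read_iops + write_fraction / write_iops)

# Assumed SSD where reads are 10x faster than writes (50K vs. 5K IOPS).
print(round(effective_iops(0.9, 50_000, 5_000)))  # 90/10 mix -> ~26316 IOPS
print(round(effective_iops(0.5, 50_000, 5_000)))  # 50/50 mix -> ~9091 IOPS
```

Under these assumed numbers, moving from a 90/10 to a 50/50 mix cuts effective throughput by roughly two-thirds, which is the kind of arithmetic behind the rule of thumb above.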

Finally, if SSD is needed for read performance but writes are an issue, consider vendors that employ wear-leveling mechanisms and minimize write amplification to reduce the impact. SSD size will also be a factor: going cheap on capacity increases thrashing, since it reduces the chance that data is still resident for a repeat read.

Don’t use SSD when data access is highly random. SSD is sometimes referred to as the “cache tier,” and the name is apropos. Fundamentally, it is a cache that eliminates the need to perform a “fetch” to a hard drive when the data is cache-resident. Applications with highly random access requirements simply won’t benefit from SSD in this role: the read will be directed by the array controller to the HDD, and the SSD will be an expense with little benefit.
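The cache argument can be sketched with a toy LRU simulation: under uniformly random access the hit rate collapses to roughly the cache-to-working-set ratio, while a skewed (“hot set”) workload caches well. The block counts and 90/10 skew here are arbitrary assumptions for illustration:

```python
import random
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache that counts hits and misses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def access(self, block):
        if block in self.store:
            self.store.move_to_end(block)   # refresh recency on a hit
            self.hits += 1
        else:
            self.misses += 1
            self.store[block] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

def hit_rate(n_blocks, cache_blocks, n_accesses, hot_set=False):
    random.seed(0)
    cache = LRUCache(cache_blocks)
    for _ in range(n_accesses):
        if hot_set and random.random() < 0.9:
            block = random.randrange(n_blocks // 10)  # 90% of I/O hits 10% of blocks
        else:
            block = random.randrange(n_blocks)        # uniformly random I/O
        cache.access(block)
    return cache.hits / n_accesses

# Cache holds 10% of the working set.
print(hit_rate(100_000, 10_000, 200_000))                 # uniform: hit rate near 10%
print(hit_rate(100_000, 10_000, 200_000, hot_set=True))   # hot set: far higher
```

In other words, a cache tier sized at 10% of the data only pays off when access is skewed enough that a small hot set absorbs most of the I/O.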

Don’t use general-purpose SSD in highly virtualized environments. OK, this one will generate some controversy, because there are some really good use cases for SSD with virtual machines, such as boot storms. However, many VMs accessing the same SSD results in highly random data patterns, at least from a storage perspective. When hundreds of VMs are reading and writing from the same storage, one machine’s data is constantly overwriting another’s in the cache. However, there are SSD solutions designed specifically for virtual environments, which is why there’s a “general-purpose” caveat above.

Don’t use server-side SSD to solve storage I/O bottlenecks. Server-side SSD is fundamentally server cache; it addresses a processing problem and even a network bandwidth problem. Equipping hundreds of physical servers with their own SSD may indeed help with I/O bottlenecks, but not nearly as effectively as the same aggregate capacity deployed as a storage tier.

Don’t use Tier 0 for solving network bottlenecks. If data delivery is inhibited by the network, it’s obvious that optimizing the storage system behind the network will do little good. Server-side SSD may reduce the need to access the storage system and thereby reduce the network demand.

Don’t deploy consumer-grade SSD for enterprise applications. SSD is manufactured in three grades: single-level cell (SLC), multi-level cell (MLC) and enterprise multi-level cell (eMLC). MLC is considered consumer-grade and is found in most off-the-shelf products. It has a life of 3,000 to 10,000 write operations per cell. SLC, the enterprise grade, has a life of up to 100,000 write operations per cell. eMLC attempts to strike a balance between price and endurance, offering around 30,000 writes per cell at a lower price point than SLC. Caveat emptor: you get what you pay for.
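Using the per-cell cycle figures above, the total write budget scales linearly with endurance. For a hypothetical 256 GB drive (the capacity and the assumption of no write amplification are mine, not the article’s):

```python
def write_budget_tb(capacity_gb, pe_cycles, waf=1.0):
    """Approximate total host writes (TB) before cell endurance is exhausted."""
    return capacity_gb * pe_cycles / waf / 1000

# Per-cell endurance figures from the article; 256 GB capacity is an assumption.
for grade, cycles in [("MLC", 3_000), ("eMLC", 30_000), ("SLC", 100_000)]:
    print(f"{grade}: {write_budget_tb(256, cycles):,.0f} TB")
# MLC:  768 TB
# eMLC: 7,680 TB
# SLC:  25,600 TB
```

The roughly 30x spread between MLC and SLC write budgets is the quantitative reason grade selection matters for write-heavy enterprise workloads.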

Phil Goodwin is a storage consultant and freelance writer.

This was last published in June 2012


Join the conversation



Hi Phil, I think you missed out a word in your article.

You say, "Don’t use SSD when data access is highly random", I think you mean "Don’t use SSD cache when data access is highly random".

High random read is exactly when you get the greatest benefit from normal, non-cache SSD!

Otherwise, nice article!


I am a bit confused after reading this article. I always read that random reads are faster with SSD.
Do you have any benchmarks to prove your statement?
Thank you for the article.
It is news to me that there is a limited lifetime for write operations.
Well, in order to read the data it must be put (written) onto the drive first.
Do you have any recommendations about batch-loading (writing) data onto SSD?
What is the maximum size I can write into one cell?
I believe this to be a flawed article on many levels. SSD is very effective with write operations. Also, let's not confuse SSD with flash caches and native flash arrays. MLC can be deployed to great effect in the enterprise. Enterprise vendors have developed sophisticated methods to address many 'limitations' of flash storage.
There is much wrong with this, because a lot of the use cases depend on systems design. SSD can be very effective as a write cache. A Starboard Storage Array, for instance, is explicitly designed to enable multiple applications with a variety of read and write profiles to operate well on a consolidated storage system.
Sorry, Phil, but you are wrong in almost every case; perhaps I'm just missing your context. What purpose are you talking about: file server, archive, database?

For a start, the write problems with flash have been resolved many times over in both the commodity and enterprise space through a mixture of over-provisioning and software (or in-controller) wear levelling.

The biggest benefit SSDs give is consistently low-latency I/O, be it random or sequential. Perhaps you are just talking about them in a cache deployment, in which case I totally agree; in a database context I can't see the point of tiering in that manner, but it depends on the type of querying of the database.

They are ideal for resolving I/O bottlenecks. Disk-based (mechanical) solutions require short-stroking to get any sort of performance out of the drive under a realistic semi-random workload.

Disk, in reality, should be relegated to archive/backup to replace tape. In terms of cost, SSDs in a real solution (we size real solutions on IOPS) are a lot cheaper than the infrastructure and kit required for a mechanical disk solution.
You really don't understand SSDs. If this article had been written five years ago I would concur, but not now. Please take a look at Fusion-IO and Violin storage. Even a 200GB SAS SLC drive can deliver >22K IOPS in random writes. And since its lifespan is measured in PB, an SLC drive can write about 120PB worth of data.
Hey Phil, Thank you for sharing your thoughts; however, I have to respectfully disagree with you. I think that recent advancements in SSD technology enable SSDs to be a suitable choice in the scenarios you mention. I recently wrote a blog post for SMART Storage Systems reacting to points in your article, which can be found here: If you have any questions or feedback on the points I made, please don't hesitate to leave a comment or reach out to us!

-Bernie Rub, Vice President and Chief Technology Officer, SMART Storage Systems