Flash options for the array or server
Flash has rapidly moved from a niche storage product that was used to differentiate storage vendors to a ubiquitous technology. The majority of success around flash storage has come from the need -- or perceived need -- for speed by mission-critical applications. Here is a list of the five most pervasive myths about flash-based storage with an added dose of reality.
Myth 1: Hybrid or all-flash arrays solve all storage-based performance problems
Hybrid and all-flash arrays are all the rage in the data storage industry. Flash-based storage adds hundreds of thousands to millions of potential IOPS plus huge gains in throughput. But do these arrays solve all storage-based performance problems? No.
The issue is latency. Putting all those high-performance flash solid-state drives (SSDs) inside the storage array reduces the latency between the storage controller and the drive media, typically by multiple orders of magnitude. But it does nothing for the latency between the application server (physical or virtual) and the storage array.
This is the latency of the data packets going from server memory and over the storage bridge to the PCI Express (PCIe) controller, Fibre Channel/Ethernet/InfiniBand adapter, transceiver, cable, transceiver, switch/multiple switches, cable, storage adapter, PCIe bus, PCIe controller, bridge, memory and CPU, and then back over the bridge to the PCIe controller, PCIe bus, SAS controller, transceiver, cable, transceiver, SSD controller and so on. That's a lot of latency that flash SSD performance is unable to improve. Additionally, flash SSDs can be constrained by the performance limitations of the storage controller.
When the lowest possible latency is essential to application performance (high-frequency trading, simulations, media and entertainment, flow analysis, modeling, 3-D CAD/CAM), flash SSD performance is best served directly from the application server's DIMM slots or PCIe slots. This eliminates the vast majority of the latency.
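The fabric-path argument above can be sketched as a rough latency budget. All the numbers below are illustrative assumptions chosen only to show the shape of the comparison, not measured figures for any product:

```python
# Rough latency budget for one read I/O, in microseconds.
# Every value here is an assumption for the sketch, not a measurement.

# Flash media access itself is fast...
flash_media_read = 100.0

# ...but an I/O to an external array also traverses the path described
# above: PCIe bus, adapter, transceivers, cables, switches, the array
# controller, and back again.
fabric_path = {
    "server PCIe + adapter": 10.0,
    "switches + cabling (round trip)": 20.0,
    "array controller processing": 100.0,
}

array_total = flash_media_read + sum(fabric_path.values())

# In-server flash (a PCIe or DIMM slot) skips the fabric entirely,
# leaving only the local bus hop.
in_server_total = flash_media_read + 10.0

print(f"external array:  {array_total:.0f} us per I/O")
print(f"in-server flash: {in_server_total:.0f} us per I/O")
```

Whatever the exact numbers, the point stands: once media latency drops to flash levels, the fabric and controller overhead becomes the dominant term, and only moving the flash into the server removes it.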
Myth 2: Flash drives in arrays are more enterprise-ready than those in servers
Like most myths, this one contains a grain of truth and a bit of exaggeration. Array vendors test, burn in and certify all the drives in their arrays, which reduces drive failures, maintenance costs, downtime and lost data -- a very good thing. But flash SSDs and hard disk drives (HDDs) are quite different. Burning in HDDs allows vendors to weed out bad drives before they go out the door. Burning in SSDs, however, typically provides only the limited benefit of pushing SSD performance to its steady state sooner.
The other grain of truth comes from array-based error-correcting code, which goes beyond the protection the SSD itself provides.
Technically, there is a slight advantage to users when SSDs are embedded in an external array. But are these two items enough to claim that SSDs in a hybrid or all-flash array are more enterprise-ready than those embedded in servers? That's a subjective decision not necessarily related to the facts.
Myth 3: SSDs cost a lot more than HDDs
This perception has been around since the first SSDs hit the market, and it depends on several factors. First, it is based primarily on acquisition costs. Flash-based storage costs are dropping quickly, while the price of HDDs is falling much more slowly. As NAND flash manufacturers shrank NAND chips along the Moore's law curve, acquisition price per gigabyte (GB) plummeted. As flash SSD manufacturers harden enterprise multi-level cell (eMLC) and MLC NAND chips with better ECC and wear leveling, price per GB has decreased even further. And as 3-D NAND makes its way into flash SSDs, acquisition cost per GB will again drop dramatically.
What does this mean when comparing SSDs to HDDs? High-performance 15,000 rpm and even 10,000 rpm HDDs are going away. Acquisition costs have become comparable while operating costs are definitively lower for SSDs. IBM recently determined that SSDs have an 11% lower acquisition cost than equivalent high-performance HDDs and a 28% lower total cost of ownership (TCO).
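The IBM percentages translate into a simple comparison. A minimal sketch, assuming a placeholder HDD baseline (only the 11% and 28% figures come from the article; the dollar amounts are invented for illustration):

```python
# Sketch of the IBM comparison above. The $1,000 acquisition cost and
# $3,000 TCO for the HDD baseline are placeholder assumptions; only
# the percentage reductions are taken from the article.
hdd_acquisition = 1000.0
hdd_tco = 3000.0

ssd_acquisition = hdd_acquisition * (1 - 0.11)  # 11% lower acquisition cost
ssd_tco = hdd_tco * (1 - 0.28)                  # 28% lower total cost of ownership

print(f"SSD acquisition: ${ssd_acquisition:.0f} vs HDD ${hdd_acquisition:.0f}")
print(f"SSD TCO:         ${ssd_tco:.0f} vs HDD ${hdd_tco:.0f}")
```

Note that the TCO gap is wider than the acquisition gap: the operating-cost advantages (power, cooling, serviceability) compound the up-front saving.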
Even as SSDs have relegated higher-performance HDDs to an ever-smaller market share, large-capacity, lower-performance 7,200 rpm HDDs have become increasingly popular for secondary storage applications such as active archive, backup, online analytical processing, big data processing and compliance. How do flash-based SSDs compare with HDDs on cost in these applications? Most IT pros will infer "not as well," and they would be correct.
The largest-capacity HDD shipping today is 8 TB in a 3.5-inch form factor. The largest-capacity read-optimized SSD shipping this summer is 8 TB in a 2.5-inch form factor. Acquisition costs favor the HDD by a factor of 10. But power, cooling, rack space, floor space, weight and serviceability (among other factors) favor the SSD. And when deduplication and compression are applied to those SSDs, the TCO curve shifts markedly toward the SSD. (Note: Deduplication/compression is currently an external array feature; it will become part of the software-defined data center in the near future.) So, do SSDs cost a lot more than HDDs? No. Do they cost more than HDDs? Sometimes, and sometimes they cost less.
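The data-reduction effect is easy to quantify. In the sketch below, the raw drive prices and the 4:1 reduction ratio are assumptions; the article states only that acquisition cost favors the HDD by roughly 10x at equal capacity:

```python
# How deduplication/compression shift effective $/GB toward the SSD.
# Raw prices are assumptions; the article says only that acquisition
# cost favors the HDD by about 10x at equal (8 TB) capacity.
hdd_price_per_gb = 0.03   # assumed raw $/GB for an 8 TB HDD
ssd_price_per_gb = 0.30   # ~10x the HDD, per the article

data_reduction_ratio = 4.0  # assumed 4:1 combined dedup + compression
effective_ssd = ssd_price_per_gb / data_reduction_ratio

print(f"HDD raw:       ${hdd_price_per_gb:.3f}/GB")
print(f"SSD raw:       ${ssd_price_per_gb:.3f}/GB")
print(f"SSD effective: ${effective_ssd:.3f}/GB at {data_reduction_ratio:.0f}:1 reduction")
```

At a 4:1 reduction ratio the 10x raw-price gap shrinks to 2.5x before the operating-cost factors are even counted, which is why the TCO comparison can flip either way depending on the workload's reducibility.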
Myth 4: Hyper-converged systems cannot be all-flash
This belief stems from when initial hyper-converged systems did not support an all-flash SSD configuration. But that changed with the release of VMware vSphere 6.0 and VSAN 2.0. Hyper-convergence based on the latest VMware software does support all-flash SSD configurations. In fact, Dell is now selling all-flash hyper-converged systems.
Myth 5: Flash SSDs are faster than HDDs
Ask an IT pro if SSDs are faster than HDDs and few, if any, will disagree. In the vast majority of use cases, SSDs are faster than HDDs. However, there are specific circumstances where that is not true. SSDs will almost always deliver more random IOPS at lower latencies than HDDs, except when the flash is nearing the end of its write wear-life. The other exception is sequential reads and writes, where HDD performance shines: the SSD-to-HDD performance differential for sequential workloads is nominal.
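The pattern-dependence of the SSD advantage can be made concrete. The IOPS and throughput figures below are assumed ballpark numbers for a single enterprise HDD and SSD, chosen only to illustrate the relative magnitudes:

```python
# Why the SSD advantage depends on access pattern. All figures are
# assumed, ballpark per-drive numbers for illustration only.
hdd = {"random_iops": 200, "sequential_mbps": 200}
ssd = {"random_iops": 100_000, "sequential_mbps": 500}

random_advantage = ssd["random_iops"] / hdd["random_iops"]
sequential_advantage = ssd["sequential_mbps"] / hdd["sequential_mbps"]

print(f"random I/O advantage:  {random_advantage:.0f}x")
print(f"sequential advantage:  {sequential_advantage:.1f}x")
```

A random-I/O gap measured in the hundreds versus a sequential gap of a few times is why HDDs remain credible for streaming workloads such as backup and archive, while SSDs dominate random, latency-sensitive ones.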