Flash options in the array vs. server
There are six different flash SSD storage implementations in use today. Each aims primarily to reduce latency and improve IOPS and throughput performance, and secondarily to reduce storage total cost of ownership (TCO). This first tip provides a brief description and the pros and cons of:
- PCIe flash SSD storage card(s) as cache or storage in the server.
- PCIe flash SSD storage card(s) as cache in a storage system (SAN storage or NAS).
- HDD form factor flash SSD(s) as NAS system or storage array cache.
PCIe flash SSD storage card(s) as cache or storage in the server.
Putting the flash SSD PCIe card locally in the server on the PCIe bus puts the cache closer to the application. There is no adapter, transceiver, network cable, switch, storage controller, etc., in the path. The short distance reduces the latency, speeding up all IO operations such as reads and writes. This is why these cards are typically called application accelerators vs. storage accelerators. This type of flash SSD is primarily block. When used as cache, it requires additional software that relies upon policies to move data into and out of the cache, such as first-in, first-out (FIFO).
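The caching software's policy engine can be illustrated with a minimal FIFO sketch (a hypothetical block cache keyed by address, for illustration only; it does not represent any vendor's actual implementation):

```python
from collections import OrderedDict

class FIFOCache:
    """Minimal first-in, first-out block cache: evicts the oldest-inserted
    block first, regardless of how recently it was read."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block address -> data, in insertion order

    def read(self, addr, backing_store):
        if addr in self.blocks:               # cache hit: fast flash path
            return self.blocks[addr]
        data = backing_store[addr]            # cache miss: fetch from HDD
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict the first-in block
        self.blocks[addr] = data
        return data
```

Other policies (LRU, LFU) differ only in which block the eviction step chooses; FIFO is the simplest to reason about.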
Pros: Lowest latencies between applications and storage or storage caching. Makes a significant, noticeable and quantifiable difference for high-transactional and/or high-performance applications (OLTP, OLAP, rendering, genome processing, protein analysis, etc.).
Cons: High CPU resource utilization, ranging from 5% to 25%. Relatively low capacities (although FusionIO has a 10TB double PCIe slot card). Cards are not shareable among multiple physical servers; each physical server requires one or more cards. Not useful for virtual servers except as cache with caching software, because VM portability and resilience require shared storage. Caching software licensing is on a per-physical-server basis. Most caching software is block-based, making it of little use for file-based storage or applications. (Nevex is the exception.) Card management is on a per-card basis, increasing administrator workload and resulting in a high TCO.
Best fits: Well-suited for high-performance computing (HPC) clusters, where even nanosecond-to-microsecond performance improvements are significant. Other solid fits include OLTP, OLAP, BI, social media, genome processing, protein processing, rendering, security, facial recognition, and seismic processing.
PCIe flash SSD storage card(s) as cache in a storage system (SAN storage or NAS).
PCIe flash SSD storage cards give storage systems (as a storage vendor option) a lower-cost, higher-capacity, slightly slower extension of the system’s DRAM. It’s a storage accelerator. Algorithms identify less frequently accessed data, which is quickly moved from the system’s DRAM to the flash PCIe SSD cache, effectively an extension of memory. Administrators set policies for these caches, determining what type of data should be retained or “pinned” in flash cache (data not evicted from the cache). Using PCIe flash SSDs as cache reduces latency to and from the storage system by reducing disk IO when satisfying read requests and, in the case of NAS, metadata requests as well.
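The “pinning” behavior described above can be sketched as a least-recently-used cache that never evicts administrator-pinned keys (a hypothetical illustration; real storage systems use proprietary algorithms):

```python
from collections import OrderedDict

class PinnedLRUCache:
    """LRU cache with administrator-pinned entries that are never evicted.
    Hypothetical sketch of a 'pinning' policy, not any vendor's API."""
    def __init__(self, capacity, pinned=()):
        self.capacity = capacity
        self.pinned = set(pinned)     # keys the administrator pins in cache
        self.entries = OrderedDict()  # key -> value, in recency order

    def read(self, key, backing_store):
        if key in self.entries:
            self.entries.move_to_end(key)      # hit: mark as recently used
            return self.entries[key]
        value = backing_store[key]             # miss: go to the HDDs
        if len(self.entries) >= self.capacity:
            for candidate in self.entries:     # evict LRU non-pinned entry
                if candidate not in self.pinned:
                    del self.entries[candidate]
                    break
        self.entries[key] = value
        return value
```

If every resident entry is pinned, this sketch simply over-fills rather than evicting, which is why administrators must size pinned data well below cache capacity.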
Pros: Reduces latencies from applications to shared storage. It works well with virtual servers, VDI, VM portability, and VM resilience. It’s shareable among physical and virtual servers. It requires no server resources.
Cons: Flash cache size is limited by available storage system PCIe slots. Users experience increased latencies and excessive response times when more frequent cache misses force requests to be satisfied from the HDDs. Any given storage system’s flash cache cannot be shared with any other storage system. The most severe performance bottleneck is most often the storage system’s CPU; as CPU utilization rises, so do latency and user response times. TCO tends to be very high.
Best fits: Well-suited for virtual servers and VDI. Good at providing a boost to heavy-traffic applications such as email. Does well at accelerating databases when indexes and hot files can be “pinned” to the cache.
HDD form factor flash SSD(s) as NAS system or storage array cache.
HDD form factor flash SSD storage cache is functionally similar to PCIe flash SSD storage as cache. It’s a storage accelerator with similar algorithms. Instead of going into the controller as PCIe SSD cards do, HDD form factor SSDs go behind the storage controller in HDD slots. Sitting behind the controller means higher capacities but higher latencies.
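The capacity-versus-latency trade-off behind any of these cache placements can be quantified with a simple average-access-time calculation. The figures below are illustrative assumptions, not measured values for any product:

```python
def effective_latency(hit_ratio, cache_latency_us, hdd_latency_us):
    """Average access time (microseconds) for a cache in front of HDDs."""
    return hit_ratio * cache_latency_us + (1 - hit_ratio) * hdd_latency_us

# Assumed figures: ~100 us for a flash SSD cache read vs. ~8,000 us
# for an HDD seek plus read. A higher-capacity cache that raises the
# hit ratio matters far more than a modest per-hit latency difference.
print(effective_latency(0.90, 100, 8000))  # 90% hit rate
print(effective_latency(0.70, 100, 8000))  # 70% hit rate
```

Under these assumptions, dropping from a 90% to a 70% hit ratio nearly triples the average access time, which is why the larger (if slightly slower) HDD form factor cache can win overall.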
Pros: Reduces latency from applications to shared storage. Works well with virtual servers, VM portability, and VM resilience. It’s shareable among multiple physical and virtual servers while consuming no server resources. Lower TCO per GB than the PCIe form factor.
Cons: Capacities are larger than PCIe flash SSDs’ but still limited by both flash SSD capacities and disk controller performance. Users experience increased latencies and excessive response times when cache misses occur more frequently, redirecting requests to the HDDs. A storage system’s flash cache cannot be shared with any other storage system. The most severe performance bottleneck is commonly the storage controller itself, which increases latency and user response times.
Best fits: Well-suited for virtual servers and VDI. Good at providing a boost to virtual environments and heavy traffic applications such as email. Does a good job at accelerating databases when indexes and hot files can be “pinned” to the cache.
The next tip will provide a brief description and reveal the pros and cons of:
- HDD form factor flash SSD(s) as Tier 0 storage in a multi-tier NAS or storage array.
- HDD form factor flash SSD(s) as all SSD NAS or storage array.
- PCIe flash SSD storage card(s) or HDD form factor in a caching appliance on the storage network (TCP/IP, SAN or PCIe).
About the author: Marc Staimer is the founder, senior analyst and CDS of Dragon Slayer Consulting in Beaverton, Ore. Marc can be reached at email@example.com.