Flash options for the array or server
One of the most important factors in improving physical or virtual server application I/O performance is to reduce latency between the initiator of the I/O and the target storage. Latency is the diabolical enemy of application performance. It's simple math: As latency increases, performance decreases.
Latency is directly correlated to several factors:
- Distance latency, which is tied to the speed of light
- Storage protocol latency -- TCP/IP, FCP, iSCSI, FCoE, AoE, SCSI, SATA, SAS
- I/O processing step latency traversing the path between application and target storage, including controllers for PCIe, SAS, SATA, FC, etc.
- Switching latency
- Contention latency
- Storage access/write latency (which is relatively high in rotating media)
Latency is additive. The conventional wisdom about reducing latency and improving I/O performance is to eliminate or mitigate as many of these factors as possible.
Five of these six types of latency can be reduced as follows:
- Mitigate distance latency by placing target storage as close as possible to the I/O initiator.
- Alleviate storage protocol latency by using the protocol requiring the least amount of processing or conversion.
- Moderate I/O processing latency by eliminating some of the steps/controllers the I/O process must traverse.
- Jettison switching latency by not going through any switches.
- Lessen contention latency by curtailing the number of shared fabrics the I/O must pass through.
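Because latency is additive, the end-to-end figure for a single I/O is simply the sum of each hop's contribution. A minimal sketch makes this concrete; the component values below are illustrative assumptions, not measurements:

```python
# Illustrative latency budget for one I/O, in microseconds.
# All values are assumptions for demonstration only.
latency_us = {
    "protocol_processing": 20,
    "controller_path": 15,
    "switching": 10,
    "contention": 25,
    "media_access": 100,  # rotating media dominates the budget
}

print(f"end-to-end: {sum(latency_us.values())} us")  # 170 us

# Mitigation: remove the switch hop, replace rotating media with flash.
latency_us["switching"] = 0
latency_us["media_access"] = 5
print(f"after mitigation: {sum(latency_us.values())} us")  # 65 us
```

Note that even after the media is swapped for flash, the remaining path and protocol components dominate, which is exactly why PCIe flash drives (below) go further than drive-form-factor SSDs.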
PCIe flash drives
It is storage access/write latency that storage admins are most focused on solving, even though it is just one part of the latency picture.
Mitigating that portion of the latency is relatively easy today simply by utilizing flash-based storage drives. A SATA, SAS, FC, or FCoE flash storage drive (aka solid-state drive or SSD) installed in a server does little to mitigate the other latencies discussed above.
PCIe flash drives, on the other hand, do mitigate those latencies. They sit directly on the PCIe channel rather than behind a SAS, SATA or FC storage controller on that channel. They require very few path steps from the application to the target storage. They have no switching latency, and their only contention is with other devices on the PCIe channel.
The ability of PCIe flash drives to mitigate or eliminate all of these latency factors makes it easy to see how they improve application I/O performance. PCIe flash drive scalability has been steadily increasing as NAND dies have continued to shrink. Today, there are high-capacity PCIe flash drives that range from 3.2 TB to 4.8 TB. A server with four available PCIe slots can have as much as 19.2 TB of high-performance, low-latency flash storage. Expect that to continue to climb as NAND moves to 19 nm and 20 nm processes, as well as 3D NAND.
It's also easy to understand how PCIe flash drives have dominated server-side flash over the past several years. However, no technology is perfect, and PCIe flash drives are no exception; they have substantial limitations. They are local storage only, limited to the physical server in which they are installed. Sharing them among multiple servers requires software that virtualizes and converts them into shared storage, which adds latency and reduces I/O performance; how much latency depends on the server network interconnect and TCP/IP. Another way to reduce latency when virtualizing/sharing PCIe flash drives is to keep a data copy on separate cards in different physical servers, but that copy at minimum doubles the PCIe flash capacity required (and the cost).
Then there are the limited PCIe slots available in any server. PCIe server slots commonly range from two to eight (there can be more depending on the vendor, but not many more). Those PCIe slots are also needed for many other adapters, such as Ethernet network interface cards, FC host bus adapters and so on, limiting the number available for PCIe flash drives. Blade and twin servers are far more severely PCIe slot- and form-factor-constrained.
Perhaps the biggest issue with PCIe flash drives over the past several years has been the wide array of proprietary software drivers. Each vendor had its own drivers, APIs and software. This made it difficult to utilize multiple vendors or third-party caching software, or even to switch vendors.
However, the industry has come together to fix this problem. Major market leaders (Intel, Dell, SanDisk, EMC, NetApp, Samsung, Micron, Avago/LSI, Seagate, PMC, Oracle, Cisco, Western Digital/HGST, and others) have organized to standardize PCIe flash drive/SSD drivers and APIs with Non-Volatile Memory Express (NVMe). All of these vendors and most other PCIe flash drive vendors have committed to NVMe, which should eliminate the driver problem over time.
Server virtualization adds another wrinkle: many of the features and functions users require, such as increased availability and uptime, depend on shared storage. Utilizing PCIe flash drives alongside shared storage requires additional software, either write-through caching software (aka read caching) or write-back caching. Both add latency and increase licensing and operational costs.
Write-through caching is the most common. It passes all writes through to the shared storage and pulls hot reads into the PCIe flash cache. Write-through caching does nothing to accelerate write I/Os; it does, however, accelerate read I/Os. Write-back caching commits writes to the PCIe flash drive, where they are acknowledged, and copies them to shared storage after that initial acknowledgement. Write-back caching does accelerate writes, but it also accelerates flash wear.
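The behavioral difference between the two caching modes can be sketched in a toy model. All class and method names here are hypothetical, purely to show where the write is acknowledged; real caching software is far more involved:

```python
class FlashCache:
    """Toy model of server-side flash caching in front of shared storage.
    Hypothetical sketch; real caching software differs."""

    def __init__(self, write_back=False):
        self.write_back = write_back
        self.cache = {}           # models the PCIe flash drive
        self.shared_storage = {}  # models the backing shared array
        self.pending = []         # writes not yet destaged (write-back only)

    def write(self, key, value):
        if self.write_back:
            # Write-back: commit to flash and acknowledge immediately;
            # the copy to shared storage happens later. Fast acks,
            # but every write lands on flash (more wear).
            self.cache[key] = value
            self.pending.append(key)
            return "ack from flash"
        # Write-through: the write goes straight to shared storage;
        # only subsequent hot reads get pulled into flash.
        self.shared_storage[key] = value
        return "ack from shared storage"

    def read(self, key):
        if key not in self.cache:                       # cache miss
            self.cache[key] = self.shared_storage[key]  # pull hot read in
        return self.cache[key]

    def flush(self):
        # Write-back only: destage pending writes to shared storage.
        for key in self.pending:
            self.shared_storage[key] = self.cache[key]
        self.pending.clear()
```

In the write-through case, a read is served from flash only after its first access, while every write still sees the full shared-storage latency; in the write-back case, the write is acknowledged from flash before shared storage ever sees the data.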
Flash in-memory storage
Flash in-memory storage places flash storage even closer to application I/O by putting it into DRAM dual inline memory module (DIMM) slots, making it look like DDR3 memory. There are fewer speed bumps in the path, such as I/O hubs (recently incorporated into the latest Intel Xeon chipsets), PCIe controllers or storage controllers to go through. There is no I/O contention to manage with other devices, as occurs on the PCIe channel. The distance between the application and in-memory storage cannot be any shorter, because it resides in the memory channel. There is a noticeable improvement in latencies with writes averaging less than 5 µs versus two to three times that number for PCIe flash drives. Reads are approximately the same.
But, the real flash in-memory storage performance edge is latency consistency. Flash in-memory storage has very low latency variability, whereas PCIe flash drive variability is demonstrably much greater. Testing results by one financial services company showed a difference of more than three orders of magnitude (more than 1,000 times) greater variability in the PCIe flash drive than flash in-memory storage.
That's not trivial.
Applications that are memory-constrained with high paging rates will appreciate that consistent low latency. Among these applications are in-memory databases, high-frequency trading, derivatives trading, Black Scholes modeling, BGM/LIBOR modeling, oil and gas reservoir modeling, seismic data interpretation, fluid/flow simulation, turbulence modeling, 3-D modeling, animation rendering, CGI, protein-to-genome matching, and more.
Real estate on a DIMM is somewhat limited, but 19 nm NAND dies allow for 200 GB or 400 GB flash DIMMs. And just as PCIe flash drives will benefit from smaller NAND dies, as well as 3D NAND, so will flash in-memory storage.
Server DIMM real estate is a better story. Whereas PCIe slots in a server are limited, DIMM slots are far more numerous. Take, for example, Intel's latest quad-socket (four Xeon E7-4890) motherboards: 96 total DIMM slots, or 24 per socket. Reserving a minimum of two DIMM slots per socket for standard DRAM leaves 88 DIMM slots for in-memory flash storage, or approximately 35.2 TB of very fast flash storage using 400 GB flash DIMMs.
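The slot arithmetic above works out as follows (the slot counts and the 400 GB flash DIMM capacity are the figures cited in this article):

```python
# Capacity math for flash in-memory storage on a quad-socket server.
sockets = 4
dimm_slots_per_socket = 24
dram_slots_reserved_per_socket = 2   # minimum standard DRAM per socket
flash_dimm_capacity_gb = 400         # 19 nm NAND flash DIMM

total_slots = sockets * dimm_slots_per_socket                         # 96
flash_slots = total_slots - sockets * dram_slots_reserved_per_socket  # 88
capacity_tb = flash_slots * flash_dimm_capacity_gb / 1000             # 35.2
print(f"{flash_slots} flash DIMM slots -> {capacity_tb} TB")
```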
Just like PCIe flash drives, flash in-memory storage has its downsides. There is presently only one vendor (Diablo Technologies), and it's currently selling its flash in-memory storage through SanDisk. SanDisk Smart Storage has paired Diablo's technology with its proprietary Guardian software to boost flash performance, increase NAND endurance, reduce write amplification, and deliver exceptional error detection and correction. The combined products are marketed as ULLtraDIMM.
Expect other vendors to enter the flash in-memory storage market quickly. But until that time, economic rules of scarcity mean flash in-memory storage will have a higher price per gigabyte than PCIe flash drives. IBM is an early OEM of the technology (eXFlash) in its System X. IBM MSRP demonstrates there is approximately a three-fold to four-fold price premium per gigabyte for flash in-memory storage over PCIe flash drives.
So, who wins the battle?
Like most technology comparisons, it depends. When highly deterministic, low-variability application latency is important, flash in-memory storage has the edge. When applications are memory-constrained or server real estate is tight (few PCIe slots, blade servers, etc.), flash in-memory storage again has the edge. On the other hand, when technology maturity, industry standardization, vendor choice and cost are important, PCIe flash drives have the edge.
About the author:
Marc Staimer is founder and senior analyst at Dragon Slayer Consulting in Beaverton, Ore. Marc can be reached at firstname.lastname@example.org.