I've been interested in the ultra-high performance end of the storage business since I used head-per-track disk drives to speed up a digital special effects system in the 1980s. There has always been demand for a technology that can help deep-pocketed customers solve high-value problems faster.
Ever since dynamic RAM SSDs replaced head-per-track disks, Tier 0 storage has been fast and proprietary, offering few features beyond storing and delivering bits as quickly as possible.
This is all changing with the nonvolatile memory express (NVMe) protocol, the technology behind the new generation of Tier 0 storage.
The flash translation layer (FTL) was the secret sauce in the last generation of Tier 0 storage. Violin Memory, Fusion-io and Texas Memory Systems (TMS) had to build field-programmable gate arrays and application-specific integrated circuits to take raw flash and make it emulate a hard drive.
After the FTL became commoditized into SSDs, vendors such as Pure Storage Inc. and SolidFire concentrated their efforts on software. Software's faster development cadence let them deliver enterprise feature sets with enough performance to satisfy 90% of the potential customer base, moving the performance expectation for Tier 1 storage systems to several hundred thousand IOPS, with latencies under 1 millisecond.
The remaining performance gap between Tier 0 and Tier 1 wasn't big enough to prompt users to give up snapshots and data deduplication. That development forced Violin into bankruptcy and TMS into IBM's embrace, and IBM has done a nice job of combining its own FTL with array software.
Today, the performance world is coalescing around NVMe, a standard programming interface for PCI Express SSDs. The major players on the component front are pushing NVMe as the next big thing, from laptops with M.2 slots, to add-in cards and U.2 hot-swappable drives in the next generation of storage systems.
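One practical consequence of NVMe being a standard interface is that operating systems expose the devices in a predictable way. On Linux, for example, a controller appears as nvme0, a namespace on it as nvme0n1, and a partition as nvme0n1p1. The helper below is an illustrative sketch (not from the article) that parses that naming convention:

```python
import re

# Linux NVMe device naming: nvme<controller>[n<namespace>[p<partition>]]
_NVME_RE = re.compile(
    r"^nvme(?P<ctrl>\d+)"
    r"(?:n(?P<ns>\d+)"
    r"(?:p(?P<part>\d+))?)?$"
)

def parse_nvme_name(name):
    """Return (controller, namespace, partition) for a Linux NVMe
    device node name, with None for absent parts, or None if the
    name is not an NVMe node at all."""
    m = _NVME_RE.match(name)
    if not m:
        return None
    to_int = lambda s: None if s is None else int(s)
    return (int(m.group("ctrl")),
            to_int(m.group("ns")),
            to_int(m.group("part")))
```

For instance, `parse_nvme_name("nvme0n1p2")` yields `(0, 1, 2)`, while a SCSI-style name such as `"sda"` yields `None`.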
Next step: Networking
The next phase will be finding a way to share those low-latency NVMe SSDs. Over the past year or two, a new crop of startups, including Apeiron Data Systems, Mangstor, E8 Storage and Excelero, has sprung up, building products based on forms of NVMe over networks. These systems deliver hundreds of thousands of IOPS at latencies in the 1 to 200 microsecond range. They deliver shared storage, but, like Violin and TMS, not much in the way of data services.
Just as these products came to market, so did the NVMe over Fabrics (NVMe-oF) standard, which extends the NVMe command set across a low-latency Remote Direct Memory Access (RDMA) network, such as InfiniBand or 100 Gbps Ethernet using RDMA over Converged Ethernet or the Internet Wide Area RDMA Protocol (iWARP). Intel has even built low-overhead NVMe-oF drivers into its Storage Performance Development Kit.
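Under NVMe-oF, hosts and subsystems identify each other by NVMe Qualified Names (NQNs), which follow the form nqn.&lt;yyyy-mm&gt;.&lt;reverse-domain&gt; with an optional colon-separated suffix, and are capped at 223 bytes by the spec. As a rough illustration (a hypothetical helper, not part of any vendor's tooling), a loose syntactic check might look like this:

```python
import re

# NVMe Qualified Name (NQN) forms defined by the NVMe-oF spec:
#   nqn.<yyyy-mm>.<reverse-domain>[:<user-assigned string>]
#   e.g. nqn.2014-08.org.nvmexpress:uuid:<UUID>
_NQN_RE = re.compile(
    r"^nqn\.\d{4}-\d{2}"          # "nqn." plus year-month of domain ownership
    r"\.[a-z0-9]+(?:\.[a-z0-9-]+)+"  # reverse-domain name
    r"(?::.+)?$",                  # optional subsystem-specific suffix
    re.IGNORECASE,
)

def is_valid_nqn(nqn):
    """Loose syntactic check of an NQN; the spec caps them at 223 bytes."""
    return len(nqn.encode()) <= 223 and bool(_NQN_RE.match(nqn))
```

This is only a surface check of the naming format; real initiators and targets exchange NQNs during the discovery and connect phases of NVMe-oF.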
I think all of this commoditization is going to mean the new Tier 0 storage vendors have an even shorter window of opportunity than Violin and TMS had. It has already killed off Dell EMC's DSSD, which turned out to be an excellent example of how great custom hardware projects can get their legs cut out from under them by a commodity alternative.
Pure Storage's new FlashArray//X uses 20 NVMe flash modules, while still delivering iSCSI and Fibre Channel LUNs. Tegile's current systems have four U.2 slots that will serve as a performance tier when Tegile qualifies U.2 dual-port SSDs.
The race is on for the new Tier 0 storage players to deliver snapshots and other basic data services before Pure Storage, Tegile and others can deliver latencies in the 200 to 400 microsecond range. If the new Tier 0 players lose that race, they will be limited to a few potential customers.
If nothing else, it's fun to watch.