Essential Guide

Flash options in the array vs. server

A comprehensive collection of articles, videos and more, hand-picked by our editors

Crump: DIMM flash has brighter future than PCIe flash for storage

PCIe drives require a custom board and specific OS drivers, while DIMM slots push flash storage closer to the processor. But DIMM is costly due to low production volumes.



George Crump sees a meteoric rise -- and an equally precipitous decline -- for PCI Express (PCIe) flash drives for storage. The president and founder of analyst firm Storage Switzerland said he understands why PCIe NAND flash technology is a hot topic of discussion among storage architects -- particularly because of its ability to reduce application latency for enterprises that need help with bandwidth limitations. But the future of server-side flash storage, Crump said, rests with the evolution of dual in-line memory module (DIMM) technology, which thus far has one manufacturer: Diablo Technologies Inc., whose branded ULLtraDIMM solid-state architecture is distributed by channel partner SanDisk Corp.

In this Q&A, Crump outlines the similarities and some key differences between the two flash technologies and spells out why he thinks DIMM could soon overtake PCIe for server-side flash.

What's the biggest benefit that flash provides for storage?

Crump: Flash is all about removing levels of latency. For example, one reason you wouldn't want to do a shared flash array, and instead [would] put PCIe flash inside your server, is to remove the latency of the storage network. One reason you would want native PCIe flash -- as opposed to RAID-based PCIe flash -- is to avoid the latency of the storage I/O stack. One of the reasons to go with DIMM flash is that it removes the latency of the PCIe bus itself and puts the flash module into a memory slot. It's essentially a bank of eight slots designed for memory.
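Crump's layer-by-layer framing can be sketched as a simple latency budget. The per-layer figures below are illustrative assumptions made up for the sketch, not measurements, and the layer names are hypothetical:

```python
# Illustrative (not measured) latency budget for a single small read,
# showing which layers each flash placement removes. All figures are
# assumptions chosen for the sketch.
LAYERS_US = {
    "nand_read": 75,         # raw NAND page read
    "pcie_bus": 10,          # PCIe transfer and bus arbitration
    "scsi_stack": 25,        # OS block/SCSI I/O stack
    "storage_network": 100,  # network hop to a shared flash array
}

def read_latency_us(layers):
    """Sum the latency of every layer a read must traverse."""
    return sum(LAYERS_US[layer] for layer in layers)

# Shared array: the read crosses every layer, including the network.
shared_array = read_latency_us(
    ["nand_read", "pcie_bus", "scsi_stack", "storage_network"])

# Native PCIe flash: its own driver bypasses the SCSI stack.
native_pcie = read_latency_us(["nand_read", "pcie_bus"])

# DIMM flash: sits on the memory channel, skipping the PCIe bus too.
dimm_flash = read_latency_us(["nand_read"])

print(shared_array, native_pcie, dimm_flash)  # 210 85 75
```

Each step down the list removes a layer, which is exactly the progression Crump describes.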

What role do you see PCIe flash playing in storage near term?

Crump: I think PCIe will be a medium-term use case, meaning it's not going to be gone next year. But potentially in five years, it could be gone. If you've got a performance problem now, and your server vendor doesn't support DIMM flash or DIMM pricing isn't attractive yet, you probably would go with PCIe flash. In my opinion, all flash purchases should be temporal in nature. You're not going to buy flash and expect to get 10 years out of it anyway.

What are the biggest drawbacks to using PCIe flash?

Crump: There are several. The first is that without adding special software, PCIe flash is dedicated to one server. If I buy a 1 TB PCIe card, but only need 300 GB of flash, I just wasted 700 GB of pretty expensive storage. Think of all the criticisms we've heard about the limitations of direct-attached storage; they also apply to PCIe flash.

The second drawback is that native PCIe flash is a custom board that requires a driver to operate. So essentially you have to boot from somewhere else to load the driver before you can access the PCIe flash card. That also means the PCIe flash vendor has to write specific drivers for specific operating systems and get qualified on those platforms as well.

On the other side of the equation are RAID controller-based PCIe flash vendors like LSI Corp. (acquired in April 2014 by Avago Technologies Ltd.), which take RAID controllers and embed flash modules directly on that PCIe card. That has the advantage of not requiring another driver, but it has to go through the traditional storage protocol stack of SCSI.

DIMM flash modules and PCIe drives both rely on NAND. Is the NAND configured differently between the two?

Crump: The only difference at that level is really just a matter of a different interface. It's how the NAND will respond to I/O requests and the type of channel available to it. NAND memory is [manufactured] basically as a square with a bunch of cells that hold a charge. There's a big argument over whether you should use voltage or [another form of] energy, but essentially, each cell's charge represents either a 1 or a 0. That charge determines which data goes on a particular cell. Where the flash controller vendors add value is in being able to adapt the charge to get maximum life, maximum capacity and maximum performance out of the NAND itself.
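As a rough illustration of that cell-charge idea, here is a toy model -- not any vendor's controller logic; the function and thresholds are invented for the sketch. A single-level cell (SLC) distinguishes two charge states (1 bit), while a multi-level cell (MLC) distinguishes four (2 bits):

```python
# Toy model: map a NAND cell's sensed charge to the bits it stores.
# This is a sketch of the concept, not a real flash controller algorithm.

def read_cell(charge, bits_per_cell):
    """Map a normalized charge (0.0-1.0) to its stored bit pattern.

    An SLC cell (1 bit) divides the charge range into 2 levels;
    an MLC cell (2 bits) divides it into 4 levels.
    """
    levels = 2 ** bits_per_cell
    # Compare the charge against evenly spaced reference thresholds,
    # clamping the top of the range to the highest level.
    level = min(int(charge * levels), levels - 1)
    return format(level, f"0{bits_per_cell}b")

print(read_cell(0.9, 1))  # SLC: '1'
print(read_cell(0.6, 2))  # MLC: '10'
```

Packing more levels into the same charge range is how controllers trade endurance for capacity, which is where Crump says the vendors add their value.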

Is there a way to approximate differences in latency between PCIe and DIMM flash?

Crump: We're talking about a reduction of probably dozens of microseconds of latency when using DIMM flash. The biggest difference is that DIMM slots eliminate the unpredictability that comes with a shared PCIe bus, where you have a couple of network cards, a RAID controller and other components competing for the same I/O path. With the right application, that difference in latency can be very noticeable; with DIMM, the only things on the memory channel are memory-based modules: DRAM and flash.

What is an ideal use case for DIMM flash?

Crump: It would be good for any applications that benefit from extreme low latency. Big data analytics and high-frequency trading are two areas where a move to DIMM might make sense. Quite frankly, it could be simply a matter of architecture: You might not have any PCIe card slots available in your box, but you probably will have DIMM slots available.

DIMM gets presented in one of two ways. It can be accessible as storage -- meaning it shows up as another drive -- or it can be accessible as memory, in that it shows up as RAM. As we start to create these very large virtual environments, where we go from having a dozen virtual machines per host to four or five dozen VMs [virtual machines] per host, the cost of memory becomes a big issue.
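The two presentation modes Crump describes can be sketched in a few lines. An ordinary temp file stands in for the flash DIMM's capacity here; this illustrates block-style versus memory-style access in general, not Diablo's actual interface:

```python
import mmap
import os
import tempfile

# An ordinary temp file stands in for a flash DIMM's capacity.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

# Mode 1: presented as storage -- the module shows up as another drive,
# so access goes through read()/write() system calls and the I/O stack.
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, b"written via the storage path")

# Mode 2: presented as memory -- the capacity is mapped into the
# process's address space and touched directly, like RAM.
mm = mmap.mmap(fd, 4096)
first_word = bytes(mm[0:7])   # read bytes in place, no per-access syscall
mm[0:7] = b"mapped "          # write in place, like a memory store

mm.close()
os.close(fd)
os.remove(path)
```

The memory-mapped path avoids a system call per access, which is why presenting flash as RAM matters once dense virtual machine hosts start running out of affordable memory.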

What's holding DIMM flash back from greater adoption?

Crump: The reality of DIMM is that it requires the server [manufacturer] to make changes to the read-only memory basic input/output system [ROM BIOS] to support it. In servers, the ROM BIOS expects to find dynamic random access memory [DRAM], not flash, in the DIMM slot. So [manufacturers] have to go in and change the BIOS. The only ones I know that have publicly announced plans to update the BIOS for flash are IBM and SuperMicro. To my knowledge, HP and Dell have not.

How soon before DIMM overtakes PCIe as flash-storage memory?

Crump: The whole industry is going to move very, very rapidly. There are going to be things PCIe vendors will do to address some of the challenges that make [switching to DIMM] a harder decision. And we're just beginning to hear what the flash DIMM vendors can do. And then there's a whole host of end customers that don't need either PCIe or DIMM flash. At best, they need the performance of a solid-state drive.

What does the potential growth of DIMM flash adoption mean for PCIe flash vendors? Will they diversify into DIMM?

Crump: Oh absolutely. I don't think there's any question that they'll be into DIMM pretty quickly. Remember, especially in the flash world, the term 'manufacturer' is a little gray. At the end of the day, NAND is made by four companies: Toshiba, Micron, SanDisk and Samsung. And all those companies make controllers. Other flash vendors, such as Fusion-io and Virident (part of Western Digital subsidiary HGST), add value at the flash controller layer. And then you have a whole bunch of [vendors] that buy those two components [to build their own storage products].

How would you handicap future pricing for flash?

Crump: I do think there's a good chance DIMM flash could eventually be less expensive on a price-per-gigabyte basis than PCIe flash. We haven't seen enough real pricing yet, but as DIMM becomes more widely available, I believe it will become a cheaper alternative to PCIe.

Diablo Technologies dominates the DIMM space now. How soon before competitors emerge?

Crump: Clearly there is some intellectual property that has to be perfected for DIMM to take off. The major NAND guys may not be saying it publicly, but they're all working toward that as an option. I haven't seen a lot of information about why you wouldn't want DIMM flash. The only real drawbacks are its initial cost, because it's produced in limited volumes, and the fact that it essentially doesn't work unless your server vendor supports it.

It's sort of like the move to high-definition TVs. That created a tug of war between TV broadcasters who said, 'We're not going to start transmitting HD signals until people buy up HDTVs.' And people said, 'We're not going to buy up HDTVs until you start transmitting an HDTV signal.' It's sort of the same thing with flash DIMMs: No one is going to buy it until the server manufacturers support it, and the server manufacturers aren't going to support it until people start buying it. We'll be there at some point with DIMM flash; it's a matter of when, not if.

Next Steps

Learn about NVDIMM vs. memory channel flash storage

Which is better for your organization: Memory channel flash storage vs. PCIe flash storage?

Solid-state memory channel storage takes on latency issues

This was last published in July 2014


