Flash technologies are cool and crazy fast, and they're going to remake our data centers and how we implement storage. But deploying flash can also be pretty confusing.
Recently, I was on a panel at the Flash Memory Summit and Exhibition held in Silicon Valley. The topic of the session was why flash storage should be considered an investment rather than just a cost. The panel consisted of three storage vendors and me. I'm not sure if I was there as an antidote to the vendors or for comic relief. If the latter, I still took my job on the panel seriously.
The three amigos were smart guys who really knew storage and had some strong opinions about implementing solid-state storage. So, it didn't take long for them to start going at each other (in a very gentlemanly manner) about whose approach to "flashing up" storage was better. It was a minor turf skirmish based on a completely normal impulse to stand one's ground: Defend motherhood, the flag and the product portfolio -- but not necessarily in that order.
Basically, the tiff centered on how much flash technology you need and where you should put it. It's a debate reminiscent of the vendor squabbles from a few years back over where storage virtualization should be implemented. Ultimately, that debate was resolved with the familiar conclusion of "it depends." Like all things in IT, it depended on the problem being solved, the current environment, and available budget and expertise. It's the same with flash today.
The whole flash-in-the-enterprise scene has developed a lot faster than most observers expected, and the technology's potential to be truly transformative can't be overstated. One of the coolest things about the flash revolution is that it's battling traditional storage on all fronts with an array (pun intended) of deployment options and technical implementations. But that's also one of the most confusing things about flash technologies.
Ignoring the end user or consumer-targeted flash stuff such as USB thumb drives, CompactFlash cards, Secure Digital cards and those tiny fingernail-sized MicroSD slivers that add gigs of storage to your phone, there are still a bevy of enterprise choices.
You can get a flash drive that looks like a typical 3.5- or 2.5-inch SATA or SAS hard disk and plugs into one of those bays. Or get your flash on a PCI Express (PCIe) card that slots into a server. You can also get a traditional array that blends some flash storage with hard disk drives. Or throw tradition out the window and opt for an all-flash storage array. But wait, there's more. There are flash appliances that operate as caches in front of hard disk arrays, and now there's even flash storage that plugs into the DIMM slots that are also used for DRAM.
But before you make the form-factor/implementation decision, you need to decide how you're going to use flash for the greatest impact on performance. As persistent storage, it's the fastest stuff around and can easily eliminate bottlenecks caused by latency and deliver mind-boggling IOPS. But it can also be used as a cache, essentially augmenting and super-sizing server memory and releasing I/O-bound apps from the constraints of limited amounts of expensive DRAM.
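To put rough numbers on that latency gap, here's a back-of-the-envelope IOPS calculation. The latencies below are illustrative ballpark assumptions, not measurements from any particular device:

```python
# Back-of-the-envelope IOPS from average access latency.
# The latency figures are illustrative assumptions, not vendor specs.

def iops_from_latency(latency_seconds):
    """With one outstanding I/O at a time, IOPS is simply 1 / latency."""
    return 1.0 / latency_seconds

hdd_latency = 0.005      # ~5 ms of seek plus rotation for a typical hard disk
flash_latency = 0.0001   # ~100 microseconds for a flash read

print(f"HDD:   ~{iops_from_latency(hdd_latency):,.0f} IOPS")
print(f"Flash: ~{iops_from_latency(flash_latency):,.0f} IOPS")
```

Even with these crude single-queue numbers, flash comes out roughly 50 times faster, which is why it can erase latency bottlenecks that no amount of disk striping ever could.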
If you need fast flash storage, any of the previous implementations will do the trick -- depending, of course, on your environment. If your flash is going to be cache, the implementation choices aren't reduced by much, as the caching can happen in the server (PCIe and SAS/SATA plug-ins), in an appliance or even at the array.
And as with any newish technology, it gets even more complicated when you look at specific products and how they leverage flash. For example, some caching apps are hands-off when it comes to server DRAM, while others will integrate it and effectively create a tiered cache setup.
There's also a fair amount of confusion about the difference between caching and automated tiering, which solid-state has helped bring back into vogue.
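The distinction is easy to blur, so here's a minimal sketch of the essential difference (the class and method names are hypothetical, not any vendor's implementation): a cache keeps a copy of hot data on flash while the disk copy remains authoritative, whereas a tiering engine moves data so each piece lives on exactly one tier at a time:

```python
# Minimal sketch of caching vs. automated tiering.
# Class and method names are hypothetical, for illustration only.

class FlashCache:
    """Cache: flash holds a COPY; the disk copy stays authoritative."""
    def __init__(self):
        self.disk = {}    # backing store (always complete)
        self.flash = {}   # hot copies only

    def write(self, key, value):
        self.disk[key] = value    # write-through to disk
        self.flash[key] = value   # keep a hot copy on flash

    def read(self, key):
        if key in self.flash:     # cache hit: fast path
            return self.flash[key]
        value = self.disk[key]    # cache miss: slow path
        self.flash[key] = value   # promote a copy for next time
        return value

class TieringEngine:
    """Tiering: each piece of data lives on exactly ONE tier at a time."""
    def __init__(self):
        self.flash = {}   # hot tier
        self.disk = {}    # cold tier

    def write(self, key, value):
        self.disk[key] = value    # new data lands on the cold tier

    def promote(self, key):
        self.flash[key] = self.disk.pop(key)   # MOVE, don't copy

    def read(self, key):
        return self.flash.get(key, self.disk.get(key))
```

The practical consequence: losing the flash in a cache costs you only performance, because the disk still has everything; in a tiered setup, the flash tier holds the sole copy of hot data, so it's persistent storage in its own right rather than just an accelerator.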
Flash technologies are ushering in a new age of storage, and some of the old paradigms just don't cut it anymore. We didn't have these discussions, debates and contentions when we lived in a hard disk-only world. I don't remember ever hearing a storage vendor extolling the caching cachet of a collection of hard disks. And while you might grapple with the question of where to put flash, that's a pretty basic discussion with hard disks: You can put it here (server) or there (array).
What's a storage manager to do? First, learn as much as you can about the technology and products (shameless plug: Try SearchSolidStateStorage.com first). But you also need to lean on your storage vendors to get accurate, useful and comparable information about their solid-state products. One problem we're having is that vendors are very selective about the metrics used to describe a product's capabilities. Too often, a storage manager gets stuck with an apples-to-oranges comparison when trying to evaluate products. Greater standardization in this area would not only help the IT professionals struggling to evaluate these products, but also help the vendors selling them.
About the author:
Rich Castagna is editorial director of TechTarget's Storage Media Group.