There's one thing vendors like to push about solid-state storage: speed. While traditionally that meant augmenting disk-based storage arrays with SSDs, vendors are increasingly looking to deploy solid-state on PCI Express cards within servers. What are the pros and cons of using solid-state on a PCIe storage card?
In this interview, Mark Peters, senior analyst at Enterprise Strategy Group in Milford, Mass., looks beyond just speed and discusses some of the price and performance benefits of PCIe-based solid-state. Listen to our podcast interview or read the edited transcript below.
With the obvious speed assets of having solid-state storage directly tied to the PCIe bus, is it worth switching entirely to PCIe, at least for your hot I/O needs?
I think the fun, glib answer is, "So, hey, if you have buckets of money and no constraints, then of course you'd want to do that." Let me give you a more intelligent answer. Solid-state is not one thing, and I think all too often we view it as such. We already have solid-state drives, and now people are talking about putting PCIe solid-state into the server. We've gotten used to a hierarchy of spinning drives, [and] we will have a hierarchy of solid-state too, which only makes sense.
So if everything were the same price, then we'd put everything as high up the hierarchy as we possibly could, maybe higher than PCIe. Maybe we'd go straight to DRAM. But there are a range of prices with solid-state, and a range of performance. So we'll end up with a range of solid-state.
So I don't think it's worth switching entirely to PCIe right now. It has its place, and people will find that place based on price/performance ratio, as they've always done.
For existing server environments in the short-term, do you expect the adoption of PCIe cards added to existing servers, or do you see folks waiting for more server-side flash appliance options before adopting the technology?
I think that, again, the dollars play into the equation [and] people will decide what to put where. But the challenge … has been with some of the current implementations -- the lack of shareability.
I think they'll wait for more shareable options; there are also a number of companies out there talking about software that provides this without buying a complete appliance.
So you get that more flexible, malleable, shareable use of what is, after all, still an expensive resource. That, I think, will determine the rate that people will move into these various areas.
You mentioned shareability -- what are people looking for specifically in that regard?
Well, if you have [in particular] a virtualized environment, which is more the norm these days, the last thing you want is to buy an expensive resource like solid-state and have it trapped in a physical server. Whether a workload is physical or virtual, if the solid-state is linked to one server and can only be used by that one, that's a problem when you need to migrate data or move [it] off to a different physical server. It's that sort of flexibility that people are looking for.
Obviously, the shortcomings of all solid-state storage apply to PCIe. It's great for reads, but every write operation slowly wears out the storage device itself. With SSDs in the hard disk form factor, OSes and vendor software are available to lengthen the useful lives of the drives. What's out there for PCIe? Is it substantially the same as for solid-state drives?
It's essentially the same thing. There's firmware to increase the longevity of the media. You need to manage that. You see various terms -- some people talk about write amplification, some people talk about read and write amplification -- but life amplification is what you're really looking for, [and] getting more life out of the media as a whole.
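The write amplification Peters mentions has a simple definition: the ratio of data physically written to the flash media to the data the host asked to write. A minimal sketch (the figures below are illustrative assumptions, not vendor numbers):

```python
def write_amplification_factor(nand_bytes_written: float, host_bytes_written: float) -> float:
    """Write amplification factor (WAF): physical NAND writes per host write.

    A WAF of 1.0 is ideal; higher values mean garbage collection and
    wear leveling are consuming extra program/erase cycles, shortening
    the media's life.
    """
    return nand_bytes_written / host_bytes_written

# Illustrative numbers only: the controller wrote 30 TB to NAND
# while servicing 20 TB of host writes.
waf = write_amplification_factor(30e12, 20e12)
print(f"WAF = {waf:.1f}")  # WAF = 1.5
```

Firmware techniques like over-provisioning and TRIM support exist largely to push this ratio down -- which is the "life amplification" Peters is describing.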
By the way, there's one other thing [that] is also interesting: One of the advantages of solid-state is you typically know when it's going to die. It dies a sort of "elegant death" when it's reached a certain number of write cycles.
You can plan around that. System manufacturers -- whether it's a drive or a card [manufacturer] -- also deal with that. It's not just a matter of worrying about its longevity; the fact that it has a particular number attached to it … ought to make its management easier. I don't think it's something that people need to be too concerned about.
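The planning Peters describes is straightforward arithmetic: divide the media's total rated write capacity by the daily physical write load. A rough sketch, with all input figures being illustrative assumptions rather than any specific product's rating:

```python
def endurance_years(capacity_gb: float, rated_pe_cycles: float,
                    waf: float, host_gb_per_day: float) -> float:
    """Rough media lifetime estimate.

    Total rated NAND writes (capacity x rated program/erase cycles)
    divided by the daily physical write load (host writes x write
    amplification), converted to years.
    """
    total_rated_writes_gb = capacity_gb * rated_pe_cycles
    daily_nand_writes_gb = host_gb_per_day * waf
    return total_rated_writes_gb / daily_nand_writes_gb / 365

# Illustrative: a 400 GB card rated for 10,000 P/E cycles, a write
# amplification factor of 1.5, and 500 GB of host writes per day.
years = endurance_years(400, 10_000, 1.5, 500)
print(f"Estimated media life: ~{years:.0f} years")
```

Because the rated cycle count is a known number, this estimate can be made in advance -- which is exactly why the "elegant death" is easier to manage than an unpredictable mechanical failure.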
What is the biggest shortcoming of PCIe as of now? Is it the limitations on connecting with existing servers? (One issue with SSDs replacing HDDs is that SATA connections aren't necessarily able to take full advantage of solid-state speed.) Do similar problems exist for PCIe?
It's pretty much a moot point: Much the same pros and cons apply to other forms of solid-state, but, ironically, let's remember all the things we've done with spinning drives to get over all the problems they have. That's why we got to RAID, that's why we have short stroking, [and] that's why we do many other things with hard drives to work around their shortcomings.
It's just that we're used to them and think that's the norm. So we tend to look at the downsides of all these categories. I don't think PCIe -- to get to the point of this discussion -- has anything different to worry about.
One other thing I think [is] very important [and] I'd like to get across [is that] with all these -- whether it's PCIe or wrapped in the storage -- speed is great … but speed isn't an end unto itself. It can be a means to an end.
Whether you have it in the server or in the storage, because it takes some strain off the rest of the system, and because it makes such good use of its performance, [PCIe SSDs] can actually allow you to have an improved overall price/performance ratio for your entire infrastructure.