With fast servers and high-performance flash storage, will performance bottlenecks just shift out to the network?
They certainly can. We have been doing a lot of testing with SSDs in our lab, and we have begun to see interesting things happen when you put SSDs, especially in large quantities, in a system. We have seen performance bottlenecks move away from storage now, which is the first time I have ever seen this. In some cases, the bottleneck moves to the network, and in others, CPU [utilization] goes way up.
People need to ask, 'How is my network architected? Is 1 Gigabit Ethernet going to be enough?' Not really. I have been saying for a while now that SSDs and 10 Gigabit Ethernet, SSDs and faster speeds of Fibre Channel, go very well together.
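The bandwidth mismatch is easy to see with back-of-envelope math. The sketch below uses assumed, illustrative numbers (roughly 500 MB/s sequential throughput per SSD, and the raw line rates of 1 GbE and 10 GbE); it is not from any specific benchmark, but it shows why 1 Gigabit Ethernet saturates almost immediately once SSDs are in the picture:

```python
# Back-of-envelope check: can the network keep up with a handful of SSDs?
# All figures are illustrative assumptions, not measurements.

GBE_1_MBPS = 125    # 1 GbE raw capacity in MB/s (1000 Mb/s / 8)
GBE_10_MBPS = 1250  # 10 GbE raw capacity in MB/s
SSD_MBPS = 500      # assumed per-SSD sequential read throughput, MB/s

for n_ssds in (1, 4, 8):
    aggregate = n_ssds * SSD_MBPS
    print(f"{n_ssds} SSD(s): {aggregate} MB/s aggregate -> "
          f"{aggregate / GBE_1_MBPS:.1f}x a 1GbE link, "
          f"{aggregate / GBE_10_MBPS:.1f}x a 10GbE link")
```

Even a single SSD under this assumption can fill a 1 GbE link four times over, while a 10 GbE link has room for several drives, which is why faster Ethernet and faster Fibre Channel pair naturally with SSDs.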
Some of our testing has shown that when you deploy SSDs, CPU utilization can go way up, like 50% for one VM. How many of those can you put in a box? You can get two. People are going to have to rethink both their networking and their CPU physical-to-virtual ratios because of SSDs.
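The consolidation math above can be sketched the same way. The figures here are the illustrative ones from the answer (about 50% of a host's CPU for one SSD-backed VM); the function name and the lighter 10% comparison case are hypothetical, added only to show the ratio calculation:

```python
# Illustrative consolidation math: if SSD-backed I/O pushes one VM's CPU
# cost to ~50% of a host, how many VMs fit? (assumed numbers, not benchmarks)
import math

def vms_per_host(cpu_per_vm_pct: float, headroom_pct: float = 0.0) -> int:
    """VMs that fit, given per-VM CPU share and reserved headroom (percent)."""
    usable = 100.0 - headroom_pct
    return math.floor(usable / cpu_per_vm_pct)

print(vms_per_host(50))  # heavy SSD-backed I/O: 2 VMs per host
print(vms_per_host(10))  # a lighter, disk-era assumption: 10 VMs per host
```

Going from a 10-to-1 ratio to a 2-to-1 ratio is the kind of planning change the answer is pointing at.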
Related Q&A from Dennis Martin
Remote Direct Memory Access is a good way to reduce latency in flash environments and works with InfiniBand and some Ethernet connections.
Dennis Martin of Demartek discusses creating DIY hybrid SSD arrays by adding flash drives to an existing array.
Dennis Martin of Demartek discusses whether NAND flash wear-out is still a concern in this Expert Answer.