We have typically thought of tiered storage as something that happens within an array. Now that SSDs are showing up as server-side cache, how is that managed? It seems that if anything went wrong in the server, you could lose important data. Also, is there a way to calculate how much power you would save using SSDs instead of HDDs in an array? Does this help defray the cost of SSDs at all? Or will there not be enough power savings for it to matter?
PCIe flash is definitely appearing in servers. It can appear as internal storage or as secondary cache (or, in the case of Fusion-io, as primary cache that extends NVRAM). For an SSD to act as cache, caching software must run in the server. The reason most server-side SSDs ship with some type of caching software is that PCIe flash used as internal storage cannot be used effectively with virtualized servers unless it appears as cache, with the data moved or copied out to external storage. Functions such as vMotion or Live Migration do not work without shared external storage.
Caching software has a secondary purpose as well. It ensures that SSD data is protected by placing a copy on external storage, where the array's data protection functions can preserve it in the event of a data or hardware failure in the server.
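The protection scheme described above is essentially write-through caching: every write lands on the server-side flash and on the external array, so losing the server never loses the only copy. A minimal sketch (the class and dictionaries here are hypothetical stand-ins for real caching software and storage, purely for illustration):

```python
class WriteThroughCache:
    """Toy model of server-side SSD caching with write-through to an array."""

    def __init__(self):
        self.ssd = {}    # server-side PCIe flash: fast, but lost if the server fails
        self.array = {}  # external array: covered by RAID, snapshots, replication, etc.

    def write(self, key, value):
        self.ssd[key] = value    # serve future reads from local flash
        self.array[key] = value  # copy out so array-side data protection applies

    def read(self, key):
        if key in self.ssd:      # cache hit: fast local read
            return self.ssd[key]
        value = self.array[key]  # cache miss: fetch from the array, warm the cache
        self.ssd[key] = value
        return value

    def recover_from_server_failure(self):
        self.ssd.clear()  # local flash contents are gone with the server,
                          # but the authoritative copy survives on the array
```

After `recover_from_server_failure()`, every `read()` still succeeds by refilling the cache from the array, which is exactly why virtualized servers need the data copied out rather than held only on internal flash.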
As to the second part of the question, there is a way to calculate the power cost savings, and the difference is non-trivial. Calculating it requires knowing the power draw and heat generation of both the HDDs and the SSDs. Convert the heat generated into cooling power consumption, add it to the device power consumption, and multiply the total by the cost per kilowatt-hour.
Or, use the general rule of thumb that flash SSDs require roughly one-third to at most one-half of the power and cooling needed for equivalent-capacity HDDs. The key is equivalent capacity. At 14 cents per kWh, the U.S. average cost of power, the savings add up very quickly.
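The arithmetic above can be sketched as a short script. Every input here (per-drive wattage, drive count, 1:1 cooling overhead) is a hypothetical assumption for illustration; substitute your own measured values:

```python
# Annual power-plus-cooling cost: device draw plus cooling draw,
# converted to kWh and multiplied by the price per kWh.

COOLING_OVERHEAD = 1.0   # assume ~1 W of cooling per 1 W of device draw (hypothetical)
PRICE_PER_KWH = 0.14     # U.S. average cost of power, dollars per kWh
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(device_watts, drive_count):
    """Dollars per year to power and cool drive_count drives."""
    total_watts = device_watts * drive_count * (1 + COOLING_OVERHEAD)
    return total_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

# Example with hypothetical figures: 100 HDDs at 8 W each versus
# 100 SSDs at 3 W each providing equivalent capacity.
hdd_cost = annual_power_cost(8.0, 100)
ssd_cost = annual_power_cost(3.0, 100)
savings = hdd_cost - ssd_cost
```

With these sample numbers the HDDs cost about $1,962 per year to power and cool, the SSDs about $736, for a savings of roughly $1,226 per year; the SSD figure lands within the one-third-to-one-half rule of thumb.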