We have typically thought tiered storage happens within an array. Now that SSDs are showing up as server cache, how is that managed? It seems that if anything went wrong in the server, you could lose important data. Also, is there a way to calculate how much power you would save using SSD -- instead of HDD -- in an array? Does this help defray the cost of SSDs at all? Or will there not be enough power savings for it to matter?
PCIe flash is definitely appearing in servers. It can appear as internal storage or as a secondary cache (or, in the case of Fusion-io, as a primary cache that extends NVRAM). For an SSD to act as a cache, caching software must run in the server. Most server-side SSDs ship with some type of caching software because PCIe flash used as internal storage cannot be used effectively with virtualized servers unless it appears as a cache, with the data moved or copied out to external storage. Functions such as vMotion or Live Migration do not work without external storage.
Caching software serves a secondary purpose as well: it protects the data on the SSD by placing a copy on external storage, where storage data protection functions can maintain it in the event of a data or hardware failure in the server.
As to the second part of the question, yes, there is a way to calculate the power cost savings, and the difference is non-trivial. The calculation requires knowing the power draw and heat output of both the HDDs and the SSDs. Convert the heat output into cooling power consumption, add it to the device power consumption, and multiply the total by the cost per kilowatt-hour.
Or use the general rule of thumb that flash SSDs require roughly one-third to at most one-half of the power and cooling of equivalent-capacity HDDs. The key is equivalent capacity. At 14 cents per kWh, the U.S. average cost of power, the savings add up quickly.
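The calculation described above can be sketched in a few lines of Python. The wattage figures here are purely illustrative assumptions for equivalent-capacity devices, not vendor specifications; substitute the real device and cooling power numbers for your hardware.

```python
# Sketch of the power-savings calculation: (device power + cooling power),
# converted to kWh, multiplied by the electricity rate.

def annual_energy_cost(device_watts, cooling_watts,
                       cost_per_kwh=0.14, hours_per_year=24 * 365):
    """Annual cost of running and cooling one device, in dollars."""
    total_kw = (device_watts + cooling_watts) / 1000.0
    return total_kw * hours_per_year * cost_per_kwh

# Hypothetical per-device figures for equivalent capacity:
hdd_cost = annual_energy_cost(device_watts=10.0, cooling_watts=10.0)
ssd_cost = annual_energy_cost(device_watts=4.0, cooling_watts=4.0)
savings = hdd_cost - ssd_cost

print(f"HDD annual cost: ${hdd_cost:.2f}")   # $24.53 per drive
print(f"SSD annual cost: ${ssd_cost:.2f}")   # $9.81 per drive
print(f"Annual savings:  ${savings:.2f}")    # $14.72 per drive
```

Multiplied across hundreds of drives in an array, even a per-device difference of this size becomes a meaningful offset against the higher purchase price of flash.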
This was first published in October 2012