The concept of an all-flash data center is appealing because it would eliminate time-consuming tuning exercises. It would also allow data centers to achieve maximum virtual machine density while keeping application owners happy with storage response times.
Data reduction methods such as deduplication, compression and thin provisioning, along with the general decrease in flash prices per gigabyte (GB), are moving the all-flash data center from concept to reality. Few vendors deliver all three pieces of the data reduction puzzle, so it is important to know which method, if any, is best for your organization.
When considering data reduction to make flash more affordable, you need to weigh the possible performance impact. Adding any layer on top of a near-zero-latency storage medium will affect performance, but the critical question is "Will the applications or users notice the impact of that layer?" You can always lessen the performance impact with additional processing power or memory.
Pick your data reduction method
For the vast majority of data centers, the overhead associated with available data reduction techniques will be virtually unnoticeable. Most flash systems have performance to spare that data centers can't fully exploit, so spending a few of those spare cycles to drive down the cost of a flash storage system is worth it.
Thin provisioning is a sound investment for almost every environment. There is overhead in dynamically adding to a volume's capacity, but it is minimal. The technique matters because the other forms of data reduction can't reclaim the unused capacity in a thickly provisioned volume: that capacity is hard-allocated to a given LUN and can't be shared.
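The allocate-on-first-write behavior behind thin provisioning can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation; the class and names are hypothetical:

```python
# Hypothetical sketch of thin provisioning: physical blocks come out of a
# shared pool only when a volume actually writes, so a volume's advertised
# size costs nothing until data lands on it.

class ThinPool:
    def __init__(self, total_blocks):
        self.free = total_blocks   # physical blocks shared by all volumes
        self.volumes = {}          # name -> {logical block -> allocated flag}

    def create_volume(self, name, logical_blocks):
        # Creating a volume reserves nothing; its size is a promise, not a grab.
        self.volumes[name] = {}

    def write(self, name, logical_block):
        mapping = self.volumes[name]
        if logical_block not in mapping:
            if self.free == 0:
                raise RuntimeError("pool exhausted")
            self.free -= 1                 # allocate on first write only
            mapping[logical_block] = True  # rewrites reuse the same block

pool = ThinPool(total_blocks=100)
pool.create_volume("vm1", logical_blocks=1000)  # 1,000-block volume, 0 used
pool.write("vm1", 0)
pool.write("vm1", 0)   # rewrite of the same block: no new allocation
print(pool.free)       # 99 -- only one physical block consumed
```

The small bookkeeping in `write` is the "minimal overhead" referred to above: a map lookup and, occasionally, an allocation.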
Deduplication eliminates redundant segments of data across files. The deduplication payoff can be significant, especially in virtual environments where there is so much commonality between guest operating systems.
Deduplication can extract a significant performance toll, however. It creates a large amount of metadata to track unique data and pointers from what would be redundant data. Quickly traversing the metadata that deduplication requires is critical for overall system performance. While flash memory certainly helps, tracking redundancy as the system scales requires CPU power, which may raise the price of the storage system.
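The metadata cost described above is easy to see in a toy model. The sketch below, with hypothetical names, fingerprints each chunk and stores repeats as pointers; the fingerprint table is exactly the metadata that must be traversed quickly on every write:

```python
import hashlib

# Hypothetical sketch of block-level deduplication: each chunk is hashed,
# unique chunks are stored once, and repeats become pointers. The
# fingerprint table (store) grows with every unique chunk -- this is the
# metadata the system must search on each write.

store = {}      # fingerprint -> chunk data (unique chunks only)
pointers = []   # logical layout of the volume: a list of fingerprints

def dedup_write(chunk: bytes):
    fp = hashlib.sha256(chunk).hexdigest()
    if fp not in store:
        store[fp] = chunk    # first time seen: store the data
    pointers.append(fp)      # every write records only a pointer

# Four identical guest-OS chunks plus one unique chunk, as in a
# virtual environment with common operating system images.
for chunk in [b"guest-os-block"] * 4 + [b"app-data"]:
    dedup_write(chunk)

print(len(pointers), len(store))  # 5 logical chunks, 2 physically stored
```

Five logical chunks consume only two chunks of physical capacity, which is why commonality between guest operating systems pays off so well.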
Compression reduces storage capacity consumption by eliminating redundancy within files rather than across files. While compression does not provide the impressive 9:1-type reduction deduplication can deliver, it provides a more consistent result because it operates on every file and does not depend on redundancy across files. This in-file efficiency makes compression ideal for databases and other single-file data sets.
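The in-file redundancy that compression exploits is easy to demonstrate with the standard library. The data below is a made-up, database-style record stream; no other file needs to share its content for the savings to appear:

```python
import zlib

# Hypothetical illustration: a repetitive, database-style record file
# compresses sharply because the redundancy lives *within* the stream,
# which is what makes compression effective on single-file data sets.
record = b"id=%06d;status=active;region=us-east;"
table = b"".join(record % i for i in range(1000))

compressed = zlib.compress(table)
print(len(table), len(compressed))  # compressed size is a small fraction
assert zlib.decompress(compressed) == table  # lossless round trip
```

The exact ratio depends on the data, but repeated field names and values like these routinely shrink by well over 5:1.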
The inline requirement
Data reduction brings two distinct benefits to all-flash and hybrid flash storage systems:
- A reduction in the total capacity required. Many all-flash array vendors claim a price point of less than $3 per GB, and some even claim a price point below $1. The actual result will vary with the level of efficiency realized, and each data center will achieve a somewhat different level of efficiency from these technologies.
- Data reduction, if done inline, should extend the life expectancy of the flash modules. The write limitations of flash modules are well documented: each module can endure only a predetermined number of writes before it wears out.
Performing all three data reduction methods before data is written to flash is called inline data efficiency. For example, if you used all three data reduction methods, you might achieve a 5:1 efficiency ratio -- a reasonable result. A 5:1 efficiency ratio means only one-fifth of the logical data reaches the flash, an 80% reduction in write traffic, extending the life of the flash modules significantly.
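The arithmetic behind that write-traffic claim is worth spelling out: an N:1 reduction ratio means only 1/N of the logical data is physically written, so at 5:1 the write traffic falls by 80%, not by a multiple of itself.

```python
# Worked arithmetic for inline data efficiency: an N:1 ratio means
# only 1/N of the logical writes reach the flash modules.
ratio = 5
writes_reaching_flash = 1 / ratio             # 0.2 of the original traffic
write_reduction = 1 - writes_reaching_flash   # 0.8 -> an 80% cut in writes
print(f"{write_reduction:.0%}")               # 80%
```

Since flash wear is driven by the number of writes each module receives, cutting write traffic by 80% extends module life roughly in proportion.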
Which method is best depends on the use case -- most data centers are now looking to deploy flash across a wide variety of workloads. At one time or another, each data reduction method will be best for a given workload. For mixed workloads, the most efficient system is one that has all three capabilities and performs its data reductions inline. But few systems, at this point, provide all three capabilities.
For specific use cases, the answer will vary. For example, in a database environment, a system that just performs compression is adequate. If that database is on the extreme edge of demanding performance, then a system with no data reduction or the ability to turn off data reduction may be necessary. Virtual environments may be able to leverage a system that can only provide deduplication.
Data reduction alternative: Native capacity
An alternative to data reduction is native capacity. In the past, forgoing data reduction made a general-purpose flash array too expensive. But now, thanks to new, high-density flash technologies like triple-level cell (TLC) and 3D NAND, storage systems that use them can break the $1 per GB barrier. While the durability of these technologies is an even greater concern, they could be paired with a more reliable single-level cell tier of flash that acts as a shock absorber for the more write-sensitive TLC tier.
The advantages of this approach are that the data center knows exactly what the cost per GB is, there is no data reduction variable and there is no concern about performance overhead from its use.
Without a doubt, data reduction has made the concept of an all-flash data center more realistic. Each pillar of data reduction -- deduplication, compression and thin provisioning -- has value. However, these methods are most effective when flash arrays can provide all three at the same time and do so inline before data is written to the flash modules.