How to create Tier 0 storage by leveraging solid-state drive technology

With prices of solid-state drive (SSD) technology dropping, companies are starting to use SSD technology in a high-performance tier of storage called Tier 0.

Tier 1 storage, also known as production storage, can be considered the first-class cabin for production data. Tier 2 and lower storage tiers were developed to handle data that is not quite as critical or does not need the performance characteristics of Tier 1 storage.

Now there's a new tier of storage: Tier 0. Tier 0 is solid-state memory-based storage which is used to improve performance beyond what current Tier 1 storage can offer. In the past, Tier 0 storage has been in the form of a RAM disk and was quite expensive. In fact, to justify the high cost of RAM disk, you had to not only know for certain that your performance problems were storage-based, but also be able to show a return on your investment in RAM disk.

Today, however, this is changing. The dropping cost of solid-state devices is making SSD technology more accessible throughout the data center. But while these cost reductions are broadening the technology's appeal, the primary consideration when a company chooses an SSD option is still performance.

Matching the performance of a 4U SSD would take a huge and expensive disk array with a large disk LUN striped across many drives. As always, simplicity wins out. Given the choice between a simple 4U SSD or a large disk array with a complex drive setup, many customers are choosing SSDs.

SSDs come in two forms: RAM-based systems and flash memory-based systems. Flash memory is what is changing the SSD landscape. Although flash does not have the performance of a RAM-based system, flash is significantly faster than traditional disk-based arrays -- even the top-performing arrays -- making it, for some data centers, the perfect solution.

RAM-based systems are more expensive than flash. For instance, a common capacity purchasing point today for flash-based SSDs is 2 TB. That 2 TB of flash memory would typically list for about $190,000. A common RAM-based capacity purchase is 128 GB, which would list for about $70,000. A RAM-based SSD purchased at 2 TB would come to more than $1 million.
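The gap becomes obvious when the list prices above are reduced to a per-gigabyte figure. A quick back-of-envelope calculation (using the 2008 list prices quoted here, purely for illustration):

```python
# 2008-era list prices quoted in this article (illustrative, not quotes).
FLASH_PRICE_2TB = 190_000      # $ for 2 TB of flash-based SSD
RAM_PRICE_128GB = 70_000       # $ for 128 GB of RAM-based SSD

flash_per_gb = FLASH_PRICE_2TB / 2048      # ~ $93 per GB
ram_per_gb = RAM_PRICE_128GB / 128         # ~ $547 per GB

# Scaling the RAM price linearly to 2 TB (a simplifying assumption):
ram_price_2tb = ram_per_gb * 2048          # "more than $1 million"
print(f"RAM SSD at 2 TB: ${ram_price_2tb:,.0f}")
```

Linear scaling ignores enclosure and controller overheads, but it shows why 2 TB of RAM-based SSD lands north of $1 million while the same flash capacity lists under $200,000.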

While sales of flash-based systems are now outpacing RAM-based systems (in total capacity), RAM-based SSD systems' sales are increasing on a per-unit basis as well. When you need RAM-based performance, you can usually justify the extra expenditure.

Unlike flash-based SSDs, RAM-based systems are not sensitive to the amount of data being written to them. There is a limit to the number of writes that a flash-based system can handle. Additionally, flash-based systems do not offer the same level of write performance as RAM-based systems.

As a result, in scenarios where there are very active files with significant write I/O like those that have redo logs or Undo Segments, RAM-based systems are usually the better alternative. Database environments where redo logs or Undo Segments are choking current disk I/O capabilities are where the most significant I/O increase can be measured and the return on investment quickly realized.

How to create a Tier 0
The first step in establishing a Tier 0 is identifying the data that should go on the system. With RAM-based systems, these are applications with high write I/O transactions. In these applications, specific files can be identified as "hot," meaning that the files are so active they need more I/O than the disk subsystem can deliver.
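That identification step can be sketched with a simple filter over per-file I/O statistics. The file names, IOPS figures and threshold below are all hypothetical, stand-ins for whatever your monitoring tools report:

```python
# Hypothetical per-file write-I/O statistics gathered from monitoring.
file_write_iops = {
    "/oradata/redo01.log": 12_000,
    "/oradata/undo01.dbf": 9_500,
    "/oradata/users01.dbf": 400,
    "/oradata/archive.dat": 50,
}

# Assume the current disk subsystem sustains about 2,000 write IOPS per LUN.
DISK_WRITE_IOPS_LIMIT = 2_000

# "Hot" files demand more write I/O than the disk subsystem can deliver;
# they are the candidates for a RAM-based Tier 0.
hot_files = sorted(f for f, iops in file_write_iops.items()
                   if iops > DISK_WRITE_IOPS_LIMIT)
print(hot_files)
```

In this sketch the redo log and undo segment files clear the threshold by a wide margin while ordinary data files do not, which matches the pattern described above.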

Let's return to the situation above, where redo logs or Undo Segments from databases are placed on a RAM disk. The three most likely solutions are to upgrade to a faster (and more expensive) disk array; spread the data across more drives in the array (leaving you more vulnerable to a double drive failure); or buy an SSD. These high write I/O applications are ideal for RAM-based systems as opposed to flash memory. The other driving factor in RAM SSD installations is low latency. For many applications, latency is more important than absolute peak IOPS numbers, though the best combinations offer both low latency and high IOPS.

Data that would do well on flash-based systems is from read-intensive applications or at least those with a more normal level of writes. If the flash system has a large enough RAM cache, it can also support high bursts of writes, meaning it is suited to applications that require significant disk I/O but where individual hot files cannot be identified, such as data warehouses.

Flash-based systems offer higher capacities than RAM-based systems, as well as lower power consumption. Because of the capacities available with flash-based SSD, it is now possible to move entire databases onto a SSD.

Protecting Tier 0
How can you protect this new Tier 0? It is, after all, memory. Flash is typically sold in modules that are grouped in an array set, with one module designated as a parity drive. This effectively builds a RAID 3 protection strategy. Also, like the memory in your USB thumb drive, flash drives do not need power to maintain stored data.
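The parity module works the same way XOR parity does in conventional RAID: the parity is the XOR of the data modules, so any single lost module can be rebuilt from the survivors. A toy sketch (the module contents are made up):

```python
from functools import reduce
from operator import xor

# Toy RAID 3-style parity: the dedicated parity module stores the XOR of
# the data modules, so one failed module can be rebuilt from the rest.
data_modules = [0b1011, 0b0110, 0b1100]        # made-up module contents
parity_module = reduce(xor, data_modules)

# Simulate losing module 1 and rebuilding it from parity plus survivors:
survivors = [m for i, m in enumerate(data_modules) if i != 1]
rebuilt = reduce(xor, survivors, parity_module)
print(rebuilt == data_modules[1])  # True
```

Real arrays add controller logic and rebuild scheduling on top, but the recovery math is exactly this XOR relationship.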

But since RAM drives do require power at all times, protection becomes an overriding concern. Some RAM-based systems use battery backup and have built-in hard drives to store data in the event the system is shut down manually or by a power outage. During a power loss, the system's battery will keep the unit running and the system will copy its contents to the hard disk drive(s), in case power does not return before the battery runs out.

RAM-based SSDs also leverage error-correcting code (ECC) memory and IBM's Chipkill technology. (HP offers an equivalent system, called Chipspare.) These technologies offer a form of advanced error checking and correction that protects computer memory systems from any single memory chip failure, as well as from multi-bit errors within a single memory chip.

For example, Chipkill performs this function by scattering the bits of an ECC word across multiple memory chips, such that the failure of any one memory chip will affect only one ECC bit. This allows the system to reconstruct the memory contents despite the complete failure of one chip.
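A toy model makes the scattering concrete. Here each bit of an 8-bit word lives on a different chip (real Chipkill words and chip counts are larger; this is a simplified illustration, not IBM's implementation):

```python
# Toy model: each bit of an 8-bit ECC word is stored on a different chip,
# so a whole-chip failure costs at most one bit per word -- a single-bit
# error that ordinary ECC can correct.
NUM_CHIPS = 8

def scatter(word_bits):
    """Store bit i of the word on chip i."""
    return {chip: word_bits[chip] for chip in range(NUM_CHIPS)}

def read_back(chips, failed_chip):
    """Read the word; the failed chip yields an unknown bit (None)."""
    return [None if c == failed_chip else chips[c] for c in range(NUM_CHIPS)]

word = [1, 0, 1, 1, 0, 0, 1, 0]
damaged = read_back(scatter(word), failed_chip=3)
print(damaged.count(None))  # 1 -- only one bit of any word is lost
```

Had two bits of the same word lived on the failed chip, plain single-bit ECC could not have recovered it; the one-bit-per-chip placement is the whole point.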

Chipkill is frequently combined with dynamic bit steering, so that if a chip fails (or exceeds a threshold of bit errors), a spare memory chip replaces the failed chip. The concept is similar to that of RAID, which protects against disk failure, except that now the concept is applied to individual memory chips. When Chipkill was developed by IBM in the 1990s, it was focused on mainframes and high-end Unix systems, but it is now being utilized in SSD. A study done by IBM on the effect of Chipkill suggests that it decreases the likelihood of data loss in a memory system by two orders of magnitude.

RAM-based systems: Are they green?
Are RAM-based systems green? On a power-per-TB comparison, the answer is no, but that comparison is not real-world. The traditional method of getting more performance to an application hungry for disk I/O is to create LUNs with a high drive count. The more drive spindles in the array group, the faster the disk I/O performance. These extra drives require more power, and, especially in non-virtualized storage environments, much of the resulting disk capacity is wasted. The user has to sacrifice effective capacity utilization for speed.
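A rough calculation shows the scale of the spindle problem. Every figure below is an illustrative assumption, not a vendor specification:

```python
# Illustrative: how many spindles to approach flash-SSD read IOPS, and
# what that array draws. All numbers are rough assumptions.
TARGET_READ_IOPS = 100_000     # flash SSD read figure quoted in this article
IOPS_PER_SPINDLE = 200         # roughly a 15K RPM enterprise drive
WATTS_PER_SPINDLE = 15         # typical enterprise drive under load

spindles_needed = TARGET_READ_IOPS // IOPS_PER_SPINDLE   # 500 drives
array_watts = spindles_needed * WATTS_PER_SPINDLE        # 7,500 W
print(spindles_needed, array_watts)
```

Five hundred short-stroked spindles drawing several kilowatts, most of their capacity unused, is the real-world alternative an SSD shelf is being compared against.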

SSDs do not need extra spindles; they deliver high speed out of the box. The result is a lower number of devices and therefore a lowering of power consumption rates.

Performance expectations 
A typical hard disk drive performs 4- to 5-msec reads or writes and approximately 150-300 random I/Os per second. A RAM-based SSD performs 0.015-msec reads and writes and about 400,000 I/Os per second. A flash-based SSD performs about 0.2-msec reads and 2-msec writes, with I/O performance of about 100,000 random I/Os per second on reads and 25,000 I/Os per second on writes.
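Plugging in the read latencies above, the speedups work out directly:

```python
# Read latencies quoted above (milliseconds), approximate 2008-era figures.
HDD_READ_MS = 4.5        # midpoint of the 4- to 5-msec range
RAM_SSD_READ_MS = 0.015
FLASH_SSD_READ_MS = 0.2

ram_speedup = HDD_READ_MS / RAM_SSD_READ_MS      # ~300x faster reads
flash_speedup = HDD_READ_MS / FLASH_SSD_READ_MS  # ~22.5x faster reads
print(f"RAM SSD: {ram_speedup:.0f}x, flash SSD: {flash_speedup:.1f}x")
```

Roughly a 300x read-latency advantage for RAM-based SSD and 20-odd-x for flash over a single spindle, which is why even flash's "slower" numbers are transformative next to rotating disk.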

Texas Memory Systems has developed a cached flash SSD. By leveraging a RAM-based cache, it delivers performance similar to a RAM-based SSD on cache hits, offering the best of both worlds.

Companies that pioneered the SSD market, such as Texas Memory Systems, are now being joined by storage array manufacturers like EMC, Sun, NetApp and Hitachi Data Systems in an attempt to address this rapidly expanding market. NetApp and HDS, for example, are expected to deliver SSD solutions this year.

About the author: George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and executive management positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.

This was first published in June 2008
