Enterprise flash storage moves from infancy to maturity

Enterprise flash storage technology is hot for its ability to speed application performance. See what's up with flash now and where it's headed.

Many data centers have deployed flash in some form to support I/O-demanding applications. This high-performance, low-latency technology is enabling greater virtual machine densities and highly scalable databases. As enterprise flash storage technology moves from infancy to maturity, what should IT planners expect now and what can they look forward to in the near future?

Flash storage can bring dramatic improvements to almost any storage infrastructure and, as a result, it's available in a myriad of configurations. There's flash for individual servers, solutions that aggregate flash across servers and flash for shared storage systems.

We'll examine each of the above options and describe its value to an organization depending on the use case and available budget.

Single-server flash

Single-server flash typically combines either a solid-state drive (SSD) or PCI Express (PCIe)-based flash card with caching software to automatically accelerate an application's read traffic. The choice between SSD and PCIe often comes down to price vs. performance: SSDs are more affordable, while PCIe offers higher performance. There's also a variety of caching software available. This software is often native to the SSD or PCIe flash hardware, and is designed to support different operating systems or environments. For example, caching software targeting a VMware environment would provide equal access to the flash resources for each virtual machine (VM) on the server, while a file-level caching solution could specifically accelerate key database application files.
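
The underlying mechanism is similar regardless of vendor. The sketch below is a minimal, hypothetical read-through cache written in Python -- not any caching product's actual code -- where fast_read, fast_write and slow_read are placeholder callables standing in for the flash device and the backing disk:

    from collections import OrderedDict

    class ReadCache:
        """Minimal LRU read-through cache sketch (illustrative only)."""

        def __init__(self, capacity_blocks, fast_read, fast_write, slow_read):
            self.capacity = capacity_blocks
            self.index = OrderedDict()      # block number -> present on flash
            self.fast_read = fast_read      # hypothetical flash-device reader
            self.fast_write = fast_write    # hypothetical flash-device writer
            self.slow_read = slow_read      # hypothetical backing-disk reader

        def read(self, block):
            if block in self.index:         # hit: serve from flash
                self.index.move_to_end(block)
                return self.fast_read(block)
            data = self.slow_read(block)    # miss: go to the slower disk
            self._admit(block, data)        # populate flash for the next read
            return data

        def _admit(self, block, data):
            if len(self.index) >= self.capacity:
                self.index.popitem(last=False)   # evict the least recently used block
            self.fast_write(block, data)
            self.index[block] = True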

Single-server solutions like these are ideal for environments where only a single server or single application needs acceleration, representing a cost-effective "surgical strike" on performance problems.

Aggregated-server flash

The other use case for server-side flash is as part of a combined flash pool. In this use case, software will aggregate the internal flash storage across a set of clustered servers and present a logical pool of storage that can be accessed by any server in that cluster. Also known as software-defined storage and/or converged storage, these solutions attempt to flatten the IT stack by combining compute, network and storage into a single operating layer.
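
To make the pooling idea concrete, here's a minimal placement sketch in Python. It is purely illustrative -- the node names are hypothetical, and real software-defined storage also handles replication, rebalancing and node failure, which this omits:

    import hashlib

    NODES = ["server-a", "server-b", "server-c"]   # hypothetical cluster members

    def place(volume, logical_block, nodes=NODES):
        """Map a logical block of a pooled volume to one server's local flash.

        Deterministic hashing lets every server in the cluster compute the
        same placement without consulting a central metadata service.
        """
        key = f"{volume}:{logical_block}".encode()
        digest = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
        return nodes[digest % len(nodes)]

    # Any server can resolve where block 42 of 'vm-datastore' lives.
    print(place("vm-datastore", 42))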

Aggregated flash solutions, which are becoming popular in virtualized environments, provide a very cost-effective way to present a shared storage pool to a cluster of servers. There are some downsides, though: these systems are DIY in nature and require an IT professional to test, select and maintain the storage software, storage servers and storage media.

To counter this DIY challenge, a few vendors provide turnkey converged stacks that still leverage commodity hardware. Even though the build-it-yourself approach may sound appealing, turnkey solutions will probably see better market acceptance since most organizations simply don't have the time to do all the legwork required to create a complete software-defined storage solution.

A final challenge confronting aggregated flash solutions is a level of unpredictability. It's possible a spike in VM compute demand could impact storage I/O performance or that a surge in storage I/O demand could impact VM performance.

Shared storage flash

After single-server flash, shared storage solutions are the most popular way to deploy flash. These are typically available in two forms: all-flash arrays or flash-assisted arrays, also known as hybrid arrays. The key decision point between the two is predictability vs. cost per gigabyte. While all-flash arrays have added capabilities such as deduplication and compression, they're still priced at a premium vs. hybrid arrays. All-flash arrays bring consistent performance to any application that uses them because there's no risk of a cache or tier miss. However, all-flash arrays treat all data equally, and not all data can benefit from being on flash, especially as it ages.

Hybrid arrays have a small flash tier, typically 5% or less of the system total, in front of a capacity tier of hard disk drives. Data is automatically moved back and forth between the tiers as it's created, modified and accessed. Since there's a flash tier to help with performance, hybrid systems tend to use the highest capacity hard drives available. While this makes for a more attractive cost per GB, it limits the number of drives that are available to respond to a non-cached I/O request and, as a result, can lead to unpredictable application performance.
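
A simplified sketch of that promote/demote cycle might look like the following, with hypothetical move_to_flash and move_to_disk callables standing in for the array's data mover (real tiering policies are far more sophisticated):

    class HybridTierer:
        """Toy tiering policy: keep the hottest blocks on the small flash tier."""

        def __init__(self, flash_blocks, move_to_flash, move_to_disk):
            self.flash_blocks = flash_blocks    # size of the flash tier, in blocks
            self.heat = {}                      # block -> access count
            self.in_flash = set()
            self.move_to_flash = move_to_flash  # hypothetical data-mover callables
            self.move_to_disk = move_to_disk

        def record_access(self, block):
            self.heat[block] = self.heat.get(block, 0) + 1

        def rebalance(self):
            hottest = sorted(self.heat, key=self.heat.get, reverse=True)
            target = set(hottest[: self.flash_blocks])
            for block in target - self.in_flash:
                self.move_to_flash(block)       # promote hot data
            for block in self.in_flash - target:
                self.move_to_disk(block)        # demote data as it cools
            self.in_flash = target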

Both types of shared flash solutions are available in scale-up or scale-out configurations, similar to shared hard drive storage systems. The typical knock against scale-up solutions is that once these systems approach their capacity or performance limits, a new system is needed. The value of a scale-out solution is that its performance and capacity can continue to be expanded through additional storage nodes. In the hard disk drive era, scale-out solutions became the better long-term choice.

The flash era has given scale-up solutions a new lease on life, so to speak. Flash delivers such high performance that many data centers don't need more raw performance than a single system can deliver. Flash also enables data efficiency technologies such as compression and deduplication, which allow a scale-up system to deliver far more effective capacity than a hard drive-based equivalent that lacks deduplication. Hard drive systems could certainly have deduplication added to them, but they don't have the performance to spare that flash-based systems do, so the feature isn't commonly implemented on hard drive-based primary storage systems. Finally, most flash-based scale-up systems don't suffer performance degradation the way hard drive-based systems do as they approach their capacity limits. If designed correctly, they can deliver the same performance at 90% of capacity as they can at 20%.
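
As a rough illustration of why deduplication trades CPU cycles for capacity, here is a toy inline-dedup sketch: every written block is fingerprinted, and identical blocks are stored only once. Real arrays add reference counting, compression and garbage collection, all omitted here:

    import hashlib

    class DedupStore:
        """Toy inline deduplication: identical blocks are stored once."""

        def __init__(self):
            self.blocks = {}    # fingerprint -> block payload
            self.volume = {}    # logical block number -> fingerprint

        def write(self, lbn, data):
            fp = hashlib.sha256(data).hexdigest()   # one hash per written block
            self.blocks.setdefault(fp, data)        # store payload only if unseen
            self.volume[lbn] = fp

        def read(self, lbn):
            return self.blocks[self.volume[lbn]]

    store = DedupStore()
    store.write(0, b"A" * 4096)
    store.write(1, b"A" * 4096)                     # duplicate: no new payload stored
    print(len(store.blocks))                        # 1 unique block backs 2 logical blocks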

Don't forget about the network

Thanks to virtualization and more scalable databases, the compute layer can generate more I/O demand per server than ever. And thanks to flash, the storage infrastructure is able to respond to that I/O demand. But the storage network, as it stands today, is lagging. Networks will need to be upgraded whether the storage solution is a flash aggregation type or a more traditional shared storage version. The interconnect between devices will become more critical and need to deliver more performance.

IT planners should look for storage networks that not only offer the raw performance needed to transfer I/O between servers and storage, but also provide bandwidth provisioning so certain functions or applications get priority. These networks will also need to provide visibility into network utilization -- preferably at the virtual machine level -- though this will require networking vendors to integrate with hypervisors such as VMware and Hyper-V.
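
One common way to implement that kind of bandwidth provisioning is a token bucket per application or function. The sketch below is illustrative only; the workload names and rates are hypothetical, not drawn from any product:

    import time

    class TokenBucket:
        """Simple token-bucket limiter for storage network bandwidth."""

        def __init__(self, rate_mb_s, burst_mb):
            self.rate = rate_mb_s       # sustained rate, in MB per second
            self.capacity = burst_mb    # maximum burst, in MB
            self.tokens = burst_mb
            self.last = time.monotonic()

        def allow(self, size_mb):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size_mb:
                self.tokens -= size_mb
                return True
            return False                # defer or queue the transfer

    # A priority database gets a larger share than backup traffic.
    limits = {"oltp-db": TokenBucket(800, 200), "backup": TokenBucket(100, 50)}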

Let there be DRAM

Before flash, the primary way to improve application performance was to buy additional DRAM for the server the application ran on. There were also shared DRAM devices similar to the flash devices we use today. While not as affordable as flash, DRAM has become more economical over time. Today, some of the above-mentioned caching and flash aggregation solutions are beginning to use DRAM as a storage tier.

The downside to DRAM is that it's volatile. If the server reboots or loses power, all data in DRAM is lost. As a result, most of these solutions use DRAM as a read caching tier for the most frequently accessed data. This is typically more efficient than just allowing the operating environment or operating system to have access to the DRAM resources.

Some of these software solutions use DRAM as a temporary write tier, similar to a write log in a database. In this model, writes are coalesced in DRAM and then written to flash. This allows the coalesced data to be committed to non-volatile storage quickly and makes more efficient use of flash.
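
A minimal sketch of that coalescing step, assuming a hypothetical flash_append callable that persists one large sequential batch, looks like this -- the in-memory buffer is volatile, which is exactly the risk discussed next:

    class WriteCoalescer:
        """Sketch of coalescing writes in memory before committing them to flash."""

        def __init__(self, flash_append, batch_bytes=1 << 20):
            self.flash_append = flash_append   # hypothetical: persists a batch to flash
            self.batch_bytes = batch_bytes
            self.pending = {}                  # block -> latest data (overwrites collapse)
            self.pending_size = 0

        def write(self, block, data):
            if block not in self.pending:
                self.pending_size += len(data)
            self.pending[block] = data         # acknowledged at memory speed
            if self.pending_size >= self.batch_bytes:
                self.flush()

        def flush(self):
            if self.pending:
                batch = sorted(self.pending.items())
                self.flash_append(batch)       # one large, sequential flash write
                self.pending.clear()
                self.pending_size = 0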

Is DRAM write caching safe?

This is a classic example of the kind of risk vs. reward decision IT planners have to make on a regular basis. If the environment is stable and has plenty of backup power, it may be worth the risk. The write performance improvement can be substantial as DRAM still has a tremendous performance advantage over flash.

Non-volatile memory express (NVMe) is on the horizon, promising the performance characteristics of DRAM with the non-volatile nature of flash. It will still be priced at a premium compared to flash storage, but the performance advantage could be worth it in niche use cases. Practical application of NVMe is three to five years away.

Next-generation flash: 3D vision

From a storage perspective, the next generation of flash will likely use a 3D layering technology. 3D NAND stacks flash cells vertically on top of one another. Doing so should increase the capacity per NAND device and improve the performance of the NAND since the cells are closer to each other. Both lower cost and continued performance improvements are attractive to the enterprise. 3D NAND is much closer on the horizon than NVMe, and customers could see this technology within the next year or so. It's reasonable to assume that once 3D NAND flash is available in quantity, another significant price drop in existing flash storage will occur.

FlashDIMM: Slow DRAM rather than fast storage

Even closer on the horizon is a technology called FlashDIMM. These flash devices look like DRAM dual-inline memory modules (DIMMs) but have flash on them instead of DRAM and install in the same memory sockets as DRAM. This means FlashDIMM can be accessed via the memory bus instead of the PCIe or SCSI bus. The memory bus is essentially a private network designed specifically for memory and has very low latency.

FlashDIMM could be used in two ways:

  • As traditional flash storage, but with significantly lower latencies.
  • With the right software, as a slower but high-capacity alternative to DRAM -- ideal for in-memory compute or high-density virtual servers.

FlashDIMM technology is available now, but server vendors need to modify their BIOS to support it. As that occurs, FlashDIMM technology could provide serious competition to DRAM.

Cache aggregation: Server and shared

The next few years will bring the continued maturing of software-defined flash aggregation solutions. Concerns about performance predictability may subside as CPU performance continues to accelerate and network speeds increase. We may also see an aggregation of server and shared storage, led by shared storage system manufacturers. These products will be able to leverage PCIe flash or SSDs in the server as an extension of their own caching or tiering software. Some storage vendors have this available today, but its implementation is fairly limited. In the future, expect all shared storage system vendors to have this capability.

The single biggest challenge to flash adoption is understanding which of the many available options is right for your particular data center. Most data centers adopt flash first to address a specific pain point and then look for other ways to use the technology. Given that, IT planners should seek out flash solutions that can cost-effectively solve the data center's current needs and provide for expansion into future use cases.

About the author:
George Crump is president of Storage Switzerland, an IT analyst firm focused on storage and virtualization.
