This article can also be found in the Premium Editorial Download "Storage magazine: How to improve your virtual server storage setups."
NAND flash-based storage could be replaced by newer forms of non-volatile memory like MRAM technology. Find out why MRAM's density, cost and form factor could make flash obsolete.
Flash storage is everywhere these days. It's hard to have a discussion about IT infrastructure without someone talking about how flash storage can be leveraged to make server and storage architectures faster. It's not necessarily cheaper, although a large increase in workload hosting density can provide cost justification. But it will certainly deliver higher performance at key points in the I/O stack in terms of outright latency; and with clever approaches to auto-tiering, write journaling and caching, higher throughputs are within easy reach.
But flash as a non-volatile random-access memory (nvRAM) technology has its problems. For one, it wears out. The most common type of flash is NAND flash, built from floating-gate transistors: each cell traps an electric charge behind an insulating layer, where it persists without external power. This is what makes it non-volatile; but writing to NAND flash requires a relatively large voltage from an internal "charge pump," which makes it slower than RAM and eventually wears out that insulating layer. Perversely, wear-leveling algorithms designed to spread the damage evenly tend to increase overall write amplification, which in turn causes more total wear. And looking forward, because of the physics involved, flash is inherently constrained in how far it can shrink and how dense it can get.
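The interaction between endurance and write amplification can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses round, assumed numbers (they are illustrative, not vendor specifications) to show how write amplification directly shortens a drive's usable life:

```python
# Illustrative estimate of NAND flash endurance under write amplification.
# All figures below are assumed round numbers, not vendor specifications.

capacity_gb = 256            # usable drive capacity
pe_cycles = 3000             # program/erase cycles per cell (rough MLC ballpark)
write_amplification = 2.5    # host writes multiplied by wear-leveling/GC overhead
host_writes_gb_per_day = 50  # assumed daily workload

# Total NAND writes the media can absorb before wearing out:
total_nand_writes_gb = capacity_gb * pe_cycles
# Each host gigabyte costs `write_amplification` gigabytes of NAND writes:
effective_host_writes_gb = total_nand_writes_gb / write_amplification
lifetime_days = effective_host_writes_gb / host_writes_gb_per_day

print(f"Endurance: {effective_host_writes_gb:,.0f} GB of host writes")
print(f"Estimated lifetime: {lifetime_days / 365:.1f} years")
```

Halving write amplification in this model doubles the drive's effective lifetime, which is why controller firmware works so hard to minimize it.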
A flashier solid-state
Flash is constrained in terms of density, power and performance compared to active DRAM. That isn't a problem today, but as we continue to find creative ways to leverage flash to accelerate I/O, flash will ultimately give way to newer types of non-volatile memory that aren't as limited. Perhaps the most promising technology today is a type of nvRAM based on magnetoresistance. Magnetoresistive random-access memory (MRAM) stores information as a magnetic orientation rather than as an electrical charge. That immediately delivers read and write performance far closer to DRAM speeds than flash can manage, because bits are read by sensing resistance with a small voltage rather than by moving stored charge, and written with a modest current rather than a high-voltage charge pump. (Current DRAM latency is less than 10 nanoseconds [nsec]; MRAM is currently around 50 nsec; and flash is much slower at 20 microseconds to 200 microseconds, depending on whether the operation is a read or a write.) Because there's no charge pump, MRAM doesn't wear out the way flash does.
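To put those latency figures in perspective, a quick sketch using the round numbers cited above shows the relative gaps between the technologies:

```python
# Relative latency gaps between DRAM, MRAM and NAND flash,
# using the round figures cited in the text (in nanoseconds).
DRAM_NS = 10
MRAM_NS = 50
FLASH_READ_NS = 20_000    # ~20 microseconds
FLASH_WRITE_NS = 200_000  # ~200 microseconds

print(f"MRAM vs DRAM:        {MRAM_NS / DRAM_NS:.0f}x slower")
print(f"Flash read vs MRAM:  {FLASH_READ_NS / MRAM_NS:.0f}x slower")
print(f"Flash write vs MRAM: {FLASH_WRITE_NS / MRAM_NS:.0f}x slower")
```

In other words, MRAM sits within a small constant factor of DRAM, while flash trails MRAM by two to four orders of magnitude.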
Varieties of MRAM technology
There are two main generations of MRAM technology available. The first is "toggle" MRAM in which writing relies on creating a localized magnetic field with crossed wires. One can immediately imagine that toggle MRAM is limited in density by the constraints of preventing the write magnetic field from affecting neighboring bits.
A new generation of MRAM technology is based on the principle of spin-transfer torque, which flips a magnetic bit by first passing the write current through a fixed magnetic layer to give all the electrons an aligned spin. As those spin-polarized electrons run into and pass through the free magnetic bit layer, they transfer torque to its magnetic field, effectively flipping it. Because no external magnetic field is needed, spin-torque MRAM (ST-MRAM) circuitry could eventually be shrunk to DRAM-like densities.
There are other non-volatile RAM technologies in development and production, including ferroelectric RAM (FRAM), which stores bits in the electrical polarization of a ferroelectric layer, and phase-change RAM (PRAM), which uses physical material phase changes induced by rapid heating and cooling to store bits. It's too early to bet the farm on any one technology, but the current consensus is that MRAM has the brightest future in terms of power, density, materials and overall cost potential.
MRAM is already in use
Toggle MRAM is already in wider use than you might suspect. For example, Dell EqualLogic and LSI both use MRAM technology in their storage RAID controllers. The market for ST-MRAM is just getting organized. Everspin Technologies recently announced a 64 Mb ST-MRAM chip in a DDR3 form factor; Crocus Technology is developing a thermally assisted switching technology combined with spin-torque; Spin Transfer Technologies has an orthogonal MRAM approach promising high densities; and Micron Technology, Qualcomm and Toshiba have all invested in MRAM.
The current industry investment in flash is huge, and demand for flash is still white hot. With that market momentum, vendor production will remain focused on flash for the next couple of years. But new types of non-volatile RAM will inevitably take over. Hopefully, the architectures and approaches being developed today to leverage flash can evolve to accommodate even faster, lower-power nvRAM generations. Unfortunately, some vendors are acting as if NAND flash is the be-all and end-all, and we think that's a recipe for obsolescence.
More in store for MRAM
MRAM has the potential to come to market in the next couple of years at a density, cost and form factor that could render flash solutions obsolete overnight. Current flash memory ecosystem players should stay on top of this emerging technology so they're ready with a new generation of offerings as MRAM evolves.
Perhaps the most significant disruption won't be limited to flash-specific solutions. When nvRAM becomes as fast as DRAM, and in MRAM's case can even be built into the same silicon as compute logic, whole new ways of architecting both servers and storage will emerge. Imagine persistent storage that is as fast as active compute memory. Ultimately, a few years out, we can foresee total chip-level convergence of storage and compute. If data persistence can be built directly into compute nodes, we'll see terrific gains in green computing and an explosion of more fabric-like distributed computing. Today's big data and virtualization cluster offerings may provide some clues to the future data center, with compute mapped locally to storage on pluggable, virtualized scale-out infrastructure.
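A taste of that programming model is possible even today: a memory-mapped file lets an application update "storage" with plain memory stores instead of explicit I/O calls, much as byte-addressable nvRAM on the memory bus would. The sketch below is a rough stand-in only (the file name is hypothetical, and the OS page cache plus an msync-style flush emulate the persistence that true nvRAM would provide natively):

```python
import mmap
import os

# Stand-in for byte-addressable persistent memory: a memory-mapped file.
# With true nvRAM on the memory bus, stores to this region would persist
# on their own; here mmap plus flush() (msync) emulates that behavior.
PATH = "state.bin"  # hypothetical backing file

with open(PATH, "w+b") as f:
    f.truncate(4096)                 # one page of "persistent" state
    with mmap.mmap(f.fileno(), 4096) as pm:
        pm[0:5] = b"hello"           # a plain memory store, no write() call
        pm.flush()                   # make the store durable

# The data survives the mapping: reopen the file and read it back.
with open(PATH, "rb") as f:
    assert f.read(5) == b"hello"
os.remove(PATH)
```

The point of the sketch is the absence of a storage stack in the hot path: the application mutates bytes directly, and durability becomes a flush boundary rather than a system-call-per-write discipline.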
If you're just now thinking about how to reengineer or re-architect to take advantage of flash, it would be worth some effort to look out a couple of years and think about what solid-state storage might look like after flash.
About the author:
Mike Matchett is a senior analyst and consultant at Taneja Group.
This was first published in May 2013