More organizations are opting for a do-it-yourself (DIY) approach to installing solid-state drives. Here are a half-dozen tips to consider when weighing whether to take that route in an enterprise environment.
Determine which applications/workloads benefit most from SSD installation
Latency-sensitive applications with random-access patterns benefit the most from performance-boosting SSDs, and prime flash candidates include online transaction processing, email and virtual desktop infrastructure, according to Tony Palmer, senior lab analyst at Enterprise Strategy Group Inc. in Milford, Mass.
"As organizations virtualize and host more applications on fewer servers, the I/O workload begins quickly to look much more random, so a small to midsize business with Exchange, SQL Server, SharePoint and other applications all sitting on one or two servers might benefit from SSD," Palmer wrote in an email.
Dennis Martin, founder and president of Arvada, Colo.-based consulting and testing firm Demartek LLC, said hard disk drives (HDDs) handle sequential reads and writes well, whereas SSDs excel at random I/O patterns, including database updates and online analytical processing. He noted that Demartek tested email servers with SSDs and found the performance significantly better than even the best hard drives could deliver. Martin said those who take the do-it-yourself approach could put an SSD in an email server as either the boot or the storage drive, as long as they buy good-quality drives with adequate capacity for all the email.
Gartner Inc.'s principal research analyst Sergis Mushell recommended that IT shops choose applications or workloads that aren't mission-critical. He suggested putting metadata on SSDs to accelerate searches, or running highly read websites or pages with popular videos on SSDs.
"While you get acceleration, your risk is minimal," Mushell said. "But as soon as you're getting into the primary storage mode and you're endeavoring into 'do it yourself,' you could be dealing with environments which could be very highly risky for you."
Research the major types of SSDs and form factors
One of the limitations of SSDs is the wear-out factor. Bits in a NAND flash block must be erased before data can be programmed or written, and the program/erase process eventually breaks down the oxide layer that traps the electrons, causing NAND flash to wear out.
Wear-out projections differ for the three main types of NAND flash drives currently in use in enterprise scenarios -- single-level cell (SLC), multi-level cell (MLC) and enterprise multi-level cell (eMLC). The traditionally cited figure is 100,000 program/erase (P/E) cycles (which are also known as "write/erase cycles" or "endurance cycles") for SLC; about 30,000 for eMLC; and 10,000 or considerably less for MLC.
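As a rough illustration of how those cycle ratings translate into service life, the back-of-the-envelope estimate below divides a drive's total writable capacity by its daily write volume. The capacity, workload and write-amplification figures are hypothetical, chosen only to show the arithmetic:

```python
# Back-of-the-envelope SSD lifetime estimate from a rated P/E cycle count.
# Capacity, daily write volume and write amplification are hypothetical examples.
def years_of_life(capacity_gb, pe_cycles, daily_writes_gb, write_amplification=3.0):
    """Years until the rated P/E cycles are exhausted at a steady write rate."""
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / daily_writes_gb / 365

# Example: 200 GB drives absorbing 50 GB of host writes per day.
for name, cycles in (("SLC", 100_000), ("eMLC", 30_000), ("MLC", 10_000)):
    print(f"{name}: ~{years_of_life(200, cycles, 50):,.0f} years")
```

The write-amplification factor matters: the controller's garbage collection and wear leveling cause more flash writes than the host issues, so real endurance can be a fraction of the naive figure.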
Storage and server manufacturers initially favored SLC flash for enterprise use but began to incorporate less-expensive MLC and eMLC after drive makers found ways to improve reliability through smarter wear-leveling and error-correction algorithms, overprovisioning, and other increasingly sophisticated mechanisms. MLC stores two or more bits per cell and affords greater capacity than single-bit-per-cell SLC.
"Almost everybody's going MLC today," said Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Ore. "Very few people go SLC unless you're in the high-performance compute space."
More expensive SLC flash might be necessary in high-write scenarios since it offers better performance, reliability and endurance. Cheaper, slower MLC is generally best suited to read-intensive workloads with limited write needs, such as Web content hosting, video streaming and server boot drives. The middle-ground option is eMLC.
"MLC and eMLC are the most cost-effective, but you have to consider the write cliff," Palmer wrote. "If you are deploying only one device, SLC might be a better choice, although more pricey."
Martin said he is comfortable using consumer-grade MLC for server boot drives because the drives don't get a lot of writes. Demartek tends to go with eMLC or SLC in servers with enterprise application data.
Gartner's Mushell added that the "magic" with solid-state storage is in the wear leveling and the data integrity, and with about 100 different providers in the market, customers need to do a careful evaluation of the product manufacturers.
Several form factor options are available for solid-state storage, but users will likely find themselves choosing between SAS- and SATA-based SSDs that fit into HDD slots or PCI Express (PCIe) flash cards that connect directly to the PCIe bus.
One of the main advantages of directly connected PCIe cards is that they bypass traditional storage protocol overhead for lower latency. But Staimer said DIY users may need more skill to deploy PCIe cards than SAS-based SSDs in the HDD form factor.
Staimer said SSDs in the HDD form factor are "a much less risky play than the PCIe play because you're connecting to a SAS controller that's already in the system." On the other hand, users will have lower performance because they're limited by the SAS controller.
Select the optimal location for solid-state storage
High-end arrays from well-known manufacturers aren't the ideal place to tinker with SSDs purchased on the open market. That's because uncertified drives could have an impact on the warranty and possibly even the system operation.
For those who DIY, Staimer advised SSD installation on desktops, laptops and servers, in that order. "Just a bunch of disks" -- several disks in a chassis that connect to a server -- is also a good candidate. "Storage [arrays], not so much. Anything that has a brand name [on] it, you'll void the warranty if you open it up."
Options for sharing embedded-server SSDs and PCIe flash cards among multiple servers include Sanbolic Inc.'s Melio software and QLogic Corp.'s Mt. Rainier host bus adapter (HBA) technology, which is due in 2013. But such products tack on costs for DIYers.
Don't rule out flash cache
When it comes to installing SSDs, DIYers may be inclined to favor SSDs for primary storage, but Martin said they shouldn't rule out flash cache. He noted that caching is a simple addition that requires no application or storage changes and provides a significant performance boost.
Caching does, however, require software sold through an SSD vendor, storage/server vendor or a separate software company. Options include server-based products from Fusion-io Inc., LSI Corp., OCZ Technology Group Inc., SanDisk Corp. and VeloBit Inc.; and software from EMC Corp., NetApp Inc. and smaller vendors. Also, QLogic's upcoming Mt. Rainier HBA technology aims to allow sharing of cached data among multiple servers equipped with its PCIe cards or SAS-based SSDs in environments that use SAN storage.
Caching software typically determines the most frequently accessed data and shifts a copy to the flash cache. Flash cache products tend to use PCIe cards connected directly to the CPU and system memory rather than SAS- or SATA-based SSDs. Server-based flash cache options reduce the latency associated with the network hop.
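A minimal sketch of that promote-on-frequency behavior might look like the following. This is a toy illustration, not any vendor's actual algorithm; the class name, promotion threshold and eviction policy are all assumptions:

```python
from collections import Counter

class FlashReadCache:
    """Toy frequency-based read cache: hot blocks get copied into flash."""
    def __init__(self, capacity_blocks, promote_after=3):
        self.capacity = capacity_blocks
        self.promote_after = promote_after
        self.hits = Counter()   # access count per block
        self.cache = {}         # block id -> data held in flash

    def read(self, block_id, backing_store):
        self.hits[block_id] += 1
        if block_id in self.cache:
            return self.cache[block_id]              # fast path: served from flash
        data = backing_store[block_id]               # slow path: read from HDD/SAN
        if self.hits[block_id] >= self.promote_after:
            if len(self.cache) >= self.capacity:     # evict the least-accessed block
                coldest = min(self.cache, key=self.hits.__getitem__)
                del self.cache[coldest]
            self.cache[block_id] = data              # copy, not move: the backing
        return data                                  # store keeps the original
```

Because the cache holds a copy rather than the only instance of the data, a failed cache device degrades performance but does not lose data, which is part of why caching is a lower-risk DIY entry point than primary flash storage.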
Check warranty and support agreements with your server/storage vendor
Vendors of name-brand storage arrays and servers may have tested and certified their products with only select SSDs and PCIe cards, so enterprise IT shops need to check support contracts and warranties before installing SSDs to see if drives purchased on the open market will affect their agreements.
"If you're talking about a big name-brand storage system, you can't just go and swap out the drives. You need to get the right drive," Martin said. But the IT shop may be able to buy the drives from a secondary source rather than from the server or storage vendor.
Staimer said potential DIYers need to be careful. "Many server vendors will say, 'If it doesn't come from us, your warranty will be voided,' and it will invalidate your service contracts, too," he said. "They'll fix [a problem], but it's coming out of your pocket completely."
Buy spare drives
Drives fail, whether they're HDDs, SSDs or PCIe flash cards, so IT shops that take the DIY approach need to buy spares. And because SSDs and PCIe flash cards are more expensive than HDDs, IT shops might want to check a number of data points to help determine how many spares to keep on hand.
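One simple way to turn those data points into a spares count is to size for the failures expected during the restock lead time, with a safety margin. The failure rate, lead time and safety factor below are hypothetical placeholders; substitute figures from your own vendor's reliability data:

```python
import math

# Spares sizing sketch: cover expected failures during the restock lead time.
# Annual failure rate, lead time and safety factor are hypothetical inputs.
def spares_to_stock(drive_count, annual_failure_rate, lead_time_days, safety_factor=2.0):
    """Spares to keep on hand until a replacement order can arrive."""
    failures_per_day = drive_count * annual_failure_rate / 365
    expected_during_lead = failures_per_day * lead_time_days
    return math.ceil(expected_during_lead * safety_factor)

# Example: 200 drives at a 1.5% annual failure rate, 30-day restock lead time.
print(spares_to_stock(200, 0.015, 30))
```

Larger fleets or longer lead times push the number up quickly, which is worth knowing before committing to pricier SSDs and PCIe cards.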
Martin advised looking at the lengths of warranties with the expectation that enterprise SSDs carry longer guarantees than consumer-grade products. He further suggested looking for manufacturer-supplied figures such as terabytes written (TBW).
"Although the SSD vendors provide that data, very few end users have done that calculation on their hard drives, so they don't really have any point of comparison," Martin said. "With hard drives, you don't really think about terabytes written per day, so most people don't know what a good number would be."
TBW represents the maximum number of terabytes that a host can write to an SSD using a specified workload and application class (client or enterprise). The JEDEC Solid State Technology Association, formerly known as the Joint Electron Devices Engineering Council (JEDEC), offers guidelines for determining TBW, which is also known as an "endurance rating," to allow comparison between different SSDs and vendors through a standard mechanism.
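For comparison shopping, a TBW rating is often converted into drive writes per day (DWPD) over the warranty period, which normalizes endurance across drives of different capacities. The drive capacity, rating and warranty term below are hypothetical; the arithmetic is the point:

```python
def drive_writes_per_day(tbw, capacity_tb, warranty_years):
    """Convert a TBW endurance rating into drive writes per day (DWPD)."""
    return tbw / (capacity_tb * warranty_years * 365)

# A hypothetical 400 GB (0.4 TB) drive rated for 1,460 TBW over a 5-year
# warranty can absorb two full-drive writes per day for the warranty term.
print(drive_writes_per_day(1460, 0.4, 5))   # 2.0
```

Dividing your own measured daily write volume by the drive's capacity gives the DWPD your workload demands, which can then be compared directly against this rating.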