IT shops that implement solid-state storage technology must decide whether to use it in traditional disk arrays, as cache, in appliances or in servers.
Application needs generally determine the solid-state storage choice that will bring the greatest performance boost. Types of I/O-intensive applications that tend to benefit from solid-state storage technology include database, data warehouse, data mining, analytics and Web serving.
If the I/O bottleneck is isolated to a single server or application, server-based solid-state storage might be the best approach, whether that's with 2.5-inch or 3.5-inch solid-state drives (SSDs), PCI Express cards or dual in-line memory modules (DIMMs).
An IT shop with data sets that are intermittently hot might select NAND flash cache, in which the system typically determines the hottest data to accelerate.
If an IT shop has several I/O-intensive applications that need a performance boost, it might opt for SSDs in a shared storage array. A solid-state appliance or solid-state-only array is another option when an IT shop wants to isolate the data to a single device.
The notion of a solid-state appliance dates back to the earliest dynamic random access memory (DRAM) systems from Texas Memory Systems Inc., which now also makes NAND flash-based products. Framingham, Mass.-based IDC continues to track solid-state-only appliances from Texas Memory and other vendors, including Dataram Corp., Nimbus Data Systems Inc., Violin Memory Inc. and Whiptail Technologies Inc. But, some vendors, analysts and users prefer to call the appliances solid-state-only arrays or dedicated solid-state storage devices.
Read on for case studies focusing on each of the solid-state storage options, with an eye toward the decision-making process.
Background: The private company that operates the public transportation network for the city of Orleans, France -- Société d'Exploitation pour les Transports de l'Agglomération Orléanaise (SETAO) -- replaced its NetApp Inc. storage with Pillar Data Systems Inc.'s Pillar Axiom disk arrays about three years ago and began using SSDs last year.
SETAO manages and stores data from buses and trams, vehicle radios, video surveillance cameras, traffic lights, billing systems and electrical systems. The company makes real-time traffic information available via mobile devices and provides surveillance data to law enforcement.
Technology: At Pillar's suggestion, SETAO purchased its first solid-state drive enclosure in July 2009. The company now has 600 GB of SSDs in each of its three Pillar arrays: an Axiom 500 that also has 100 TB of SATA disks, an Axiom 500 with 16 TB of SATA and an Axiom 600 with 16 TB of SATA. Two arrays are located at the primary site in Orleans; the third is approximately 12 miles away.
SETAO also upgraded its servers and storage network with cutting-edge technology. The company runs Fibre Channel over Ethernet (FCoE) between its servers (which are equipped with Emulex Corp. converged network adapters, or CNAs) and Cisco Systems Inc. Nexus 5000 top-of-rack switches, which split the 10 Gigabit Ethernet (10 GbE) and Fibre Channel (FC) traffic. The storage traffic connects over 4 Gbps Fibre Channel to Brocade 300 FC switches and to the Pillar Axiom arrays.
SETAO uses FalconStor Software Inc.'s IPStor storage virtualization technology to replicate between the arrays. The company also used IPStor to migrate data from the NetApp systems to Pillar arrays.
Why SSDs in arrays: Olivier Parcollet, director of systems information at SETAO, prefers SSDs in a shared storage environment because he wants to improve the performance of several applications, some Windows-based and others Linux-based.
Using solid-state storage technology in a server would have restricted the performance boost to a single application, unless he used virtual servers. Parcollet said he isn't comfortable using SSDs in a physical server with virtual machines (VMs) because of the risk of application loss in the event of a server failure.
"Because I have shareable storage on Fibre Channel, if I lose a server, an application could run on another one very, very quickly," he said.
Results/benefits: SETAO uses SSDs with four of its most important applications. Its initial use was for the traffic simulation software that plots bus and tram routes, as well as the optimal number of vehicles and drivers. Application response time was approximately two hours on SATA disks, but it's nearly instantaneous on SSDs, allowing SETAO to run a greater number of simulations per day, according to Parcollet.
"We use three buses and seven drivers less than the year before to do the same work," said Parcollet, noting that SETAO's financial team claimed the one-year savings amounted to approximately 1 million euros ($1.39 million US).
SETAO's VMware Inc. virtual desktop infrastructure (VDI) also benefited from SSDs. Provisioning/booting 200 virtual desktops took about 20 minutes with SATA drives and takes about five seconds with SSDs, Parcollet said.
Results were similar for queries to the Oracle databases that store metadata about video images (which are archived on SATA disks) from 300 municipal surveillance cameras installed throughout the metropolitan transportation network. A search for a particular image, such as a man wearing blue trousers and a red hat, might have taken 30 minutes with SATA drives. The search completes instantly with SSDs, he said.
More recently, SETAO shifted approximately 100 GB of financial data from SATA disks to solid-state drives. Processing that once took three hours; now it finishes in about two minutes, according to Parcollet.
Greatest challenge with SSDs: Implementing SSDs wasn't especially difficult for SETAO. The staff installed the SSD enclosure, made the necessary adjustments in the graphical user interface and changed the LUN's quality of service (QoS) setting to premium. Shifting to premium QoS triggered the Pillar Axiom array to automatically move the designated data from SATA disks to SSDs.
The greater challenge was deciding which application data to prioritize onto SSDs. Parcollet had no interest in solid-state storage technology with automatic tiering to shift the hottest data to the SSDs. Auto-tiering could potentially put unimportant data onto the SSDs, he reasoned. He wanted to make the application decisions himself.
Parcollet consulted Pillar's built-in monitoring tools to determine the most I/O-intensive applications, but he didn't move several applications to SSDs at the same time, nor did he shift entire applications.
"Only some parts of the applications need to be on SSD," Parcollet said. "All the data doesn't need to stay in SSDs, only the more accessed [data does]."
For instance, only the control files, indexes and "redo" logs of SETAO's Oracle databases make use of SSDs. With VDI, SETAO stores only the gold image on SSDs and spreads the end-user data across SATA drives.
"One VM per user consumes only about five I/O per second," Parcollet said. "There's no need to use SSD every time for VDI. But SSD is good to generate the images very, very quickly for provisioning."
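Parcollet's approach -- measure I/O first, then manually place only the hottest, smallest data sets on a limited SSD budget rather than trusting auto-tiering -- can be sketched as a simple ranking exercise. The workload names and numbers below are hypothetical illustrations, not SETAO's actual figures:

```python
# Illustrative sketch: rank candidate workloads by IOPS density
# (IOPS per GB) and greedily fill a fixed SSD capacity budget.
# This mirrors a manual placement decision like Parcollet's,
# rather than letting an auto-tiering engine pick the data.

def pick_ssd_candidates(workloads, ssd_budget_gb):
    """workloads: list of (name, iops, size_gb). Returns chosen names."""
    # Hottest data per gigabyte first -- small, busy sets win
    ranked = sorted(workloads, key=lambda w: w[1] / w[2], reverse=True)
    chosen, used = [], 0.0
    for name, iops, size_gb in ranked:
        if used + size_gb <= ssd_budget_gb:
            chosen.append(name)
            used += size_gb
    return chosen

# Hypothetical workload profile (names and numbers are made up)
workloads = [
    ("oracle-redo-logs", 9000, 40),    # small and hot: ideal SSD fit
    ("vdi-gold-image",   4000, 60),
    ("financial-batch",  2500, 100),
    ("video-archive",     300, 5000),  # large and cold: stays on SATA
]
print(pick_ssd_candidates(workloads, ssd_budget_gb=600))
# → ['oracle-redo-logs', 'vdi-gold-image', 'financial-batch']
```

The large, rarely read video archive never qualifies, matching Parcollet's point that only the most-accessed parts of an application belong on SSD.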
Peer advice: Parcollet recommends SSDs for small, high-transaction, I/O-intensive applications rather than large applications. "We cannot install all applications on SSD because it's very, very expensive," he said, noting the company's SSDs cost approximately five times more than its SATA disks. Pillar's list price for a "brick" with 64 GB SSDs (12 active drives, one hot spare) is $49,000.
Parcollet cautioned that not all storage features may be available with SSDs. He said he can't use Pillar's thin provisioning with SSDs, for instance.
Addressing another potential downside of SSDs, Parcollet said he's not worried about the drives wearing out. "I asked Pillar the question when I bought the SSD drives, and they guaranteed that the [SSD] life will be as long as a traditional drive because there's a [memory] reserve on each drive," he said.
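The "reserve" Parcollet mentions is over-provisioned spare capacity that the drive's controller uses for wear leveling, spreading writes evenly across flash cells. A rough endurance estimate follows from capacity, rated program/erase (P/E) cycles and the daily write load; the figures below are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope SSD endurance estimate (illustrative numbers only).
# Wear leveling spreads writes across all cells, so total write endurance
# scales with capacity times the rated P/E cycle count. Write amplification
# accounts for the extra internal writes flash management generates.

def drive_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                         write_amplification=2.0):
    total_writable_gb = capacity_gb * pe_cycles          # raw endurance
    effective_daily_gb = daily_writes_gb * write_amplification
    return total_writable_gb / effective_daily_gb / 365

# e.g. a hypothetical 64 GB SLC-class drive rated at 100,000 P/E cycles,
# absorbing 200 GB of host writes per day:
years = drive_lifetime_years(64, 100_000, 200)
print(f"{years:.0f} years")
# → 44 years
```

Under assumptions like these, write endurance comfortably exceeds a typical hard drive's service life, which is consistent with the guarantee Parcollet says Pillar gave him.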
Background: Ultimate Software Group Inc. in Weston, Fla., provides human resource and payroll Software as a Service (SaaS) to more than 2,000 customers. A 200-member development team writes and tests an average of 21 application iterations, known as application builds, per week.
Technology: In June 2009, Ultimate Software Group purchased two DRAM-based 16 GB performance acceleration module (PAM) cards from NetApp. The PAM cards functioned as read caches for the company's pair of clustered NetApp FAS3170s, which store data from Microsoft Corp. SQL Server databases, VMware VMs and file shares, and serve as the central repository for the daily application builds.
This year, Ultimate Software Group bought two of NetApp's newer 512 GB Flash Cache (PAM II) cards for the FAS3170s and moved the lower-capacity DRAM-based PAM cards to the pair of FAS3140s that IT uses for performance, stability and reliability (PSR) testing.
Why choose solid-state cache over SSDs: "It was a lot cheaper than buying SSDs for 30 TB of storage," said Brian Goldberg, director of infrastructure and deployment strategy at Ultimate Software Group.
Results/benefits: The PAM cards store in cache memory the application builds that developers request most often, and read speeds have increased dramatically in response, according to Goldberg.
"We write [the application build] once, and then we read it many times, which is why the PAM cards were very attractive," Goldberg said. "Instead of the filer going down the loop to get the actual data from the physical disks, bringing it back and then sending the response to the user, it basically goes to the cache, gets it and sends it right to the user a lot faster."
Real-time performance monitoring showed that disk IOPS were far lower with the PAM cards in place. The load on the two NetApp FAS3170s, which store more than 37 TB of data, has decreased 40% to 50% since the installation of the DRAM-based PAM cards, Goldberg said.
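The write-once, read-many pattern Goldberg describes is why a read cache pays off so well: after the first fetch from disk, every subsequent request for the same build is served from memory. A minimal read-through cache sketch, with made-up names, captures the mechanism:

```python
# Minimal read-through cache sketch illustrating the behavior Goldberg
# describes: the filer goes "down the loop" to the physical disks only
# on a miss; repeated reads of the same data are served from cache.
# Class and key names here are illustrative, not NetApp's.

class ReadCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # stands in for the physical disks
        self.cache = {}                # stands in for the DRAM/flash cache
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:          # fast path: served from cache
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]      # slow path: fetch from the disks
        self.cache[key] = value        # populate cache for next readers
        return value

disks = {"build-4512": b"...application build artifact..."}
cache = ReadCache(disks)
for _ in range(10):                    # written once, read many times
    cache.read("build-4512")
print(cache.hits, cache.misses)        # → 9 1
```

Ten requests generate only one disk read, which is the effect behind the 40% to 50% drop in filer load that Goldberg reports.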
Adding the new Flash Cache helped the development team increase the number of application builds per week with no impact on performance.
"We've been growing our products and our teams and the number of environments each team owns," he said. "We knew we were going to be deploying more and more, and we had concerns that if we kept hitting the NetApp [FAS boxes], we would have performance problems and we would need a bigger filer."
Greatest challenge: Goldberg said he would like to add more PAM cards, but the cost is prohibitive. Ultimate Software Group spent close to $30,000 on its initial pair of 16 GB DRAM-based PAM cards and more than $100,000 on the second set of 512 GB Flash Cache/PAM II cards, he said.
But, he added, "We definitely felt that the value we've gotten from them is worth it."
Peer advice: "I would definitely get them from the beginning," Goldberg recommended. "I wouldn't say, 'Oh, let me set up my filer without them, and I can always add them later.' You'll definitely reap the benefits if you start using them from Day 1."
Background: Odyssey Logistics & Technology Corp., based in Danbury, Conn., provides managed logistics services to the global chemical and process manufacturing industries. Its primary data center is located in Charlotte, N.C., and its secondary data center is in Raleigh. Odyssey supplies information such as carrier selection, rack scheduling, transit time, shipment tracking and billing to customers through SaaS-based applications.
"The thing that's hard to try to manage is how many electronic transactions we do on the back side at any given time when you have users on the front side," said Brad Massey, Odyssey's director of IT support services. "Let's say we have major retailers in the U.S. who send us batches of 4,000 or 5,000 orders that need to be planned pretty quickly. We might be load optimizing those shipments on the back end while we have people on our website trying to do regular queries. We still need to offer acceptable performance."
Technology: Approximately three years ago, Odyssey Logistics & Technology purchased Texas Memory Systems' RamSan-400, a 128 GB DRAM device; six months later, Odyssey scaled up with the RamSan-500, a 2 TB NAND flash device. Last summer, Odyssey added a 5 TB RamSan-630 flash-only array to run its data warehouse and analytics.
"All of our customers see very consistent performance because of the solid-state arrays on the database," Massey said. "Prior to that, we always seemed to be playing catch-up with adding spindles to the storage array so we could keep our database performance up to speed."
Odyssey reserves its RamSans almost exclusively for its Oracle workloads, running the databases in their entirety on the RamSans. All the company's custom-built and packaged applications rely on the Oracle data stores, from the accounting system to the IBM WebSphere partner gateway.
"That's where we really need the throughput," Massey said. "Our database requires the ability to burst our I/O very quickly, for sometimes long or short periods of time. Whether you're looking at SSD or disk-based systems, you've got to size the systems to your peak I/O whether or not you're going to use it all the time."
In addition to the RamSans, Odyssey Logistics & Technology recently purchased five 100 GB flash drives for one of its EMC Clariion CX4 arrays. One drive serves as a hot spare and another holds parity, leaving approximately 300 GB usable. The most likely use case for the new SSDs is a VDI project.
"If you have a lot of VMs booting up at the same time in a first-of-the-morning scenario, you can create an I/O storm," Massey noted. "You really need your golden images to be pulled from very quickly."
Why solid-state-only array/appliance: Odyssey doesn't own its data centers; it operates at colocation facilities. So, energy-efficient, space-saving solid-state appliances hold extra cost-saving appeal over traditional disk arrays.
"When you look at a RamSan device and the amount of I/O that they're able to pack in a 3U device, as opposed to all of the disk enclosures and the spinning disks you would have to have to get for the same amount of IOPS," Massey said, "it's really a compelling story."
Server-based storage doesn't factor into Odyssey's long-term plans. Odyssey Logistics & Technology runs Cisco Unified Computing System (UCS) diskless servers. "Most of our configurations at the data center run boot from SAN, so we typically eliminate all of the disks out of servers where we can," Massey said.
Results/benefits: Web page refreshes that took eight to 10 seconds, with occasional response times as high as 30 seconds under especially heavy load, dropped to sub-second times for most queries after the shift from hard disk drives to solid-state storage, according to Massey.
DRAM-based devices tend to work better than flash at writes; the flash works fine for reads, noted Eric Brown, a database administrator at Odyssey.
"We have an extremely high read-only environment," Massey added. "If our profile was heavy write, we would certainly make different decisions."
Odyssey used to refresh its data warehouse only periodically throughout the day, but with the RamSans, it's able to crunch much of the data in real time for customers accessing its Web dashboards.
Peer advice: Massey recommends IT shops consider solid-state drives where they need optimal performance. He also urged them to factor in space and power requirements when comparing the acquisition cost of traditional disk-based storage arrays and solid-state storage technology.