Solid state data storage (SSDS) is emerging as an invaluable technology to a growing number of enterprise storage customers. An alternative to traditional tape backup, solid state storage technology uses a solid state disk, which is said to multiply server performance and scalability in applications ranging from e-mail and wireless messaging to e-transaction databases. As a shared hot-file cache for storage area networks (SAN), SSDS can prove cost-effective for an even wider range of applications in the Internet and e-business infrastructure.
SearchStorage talked to Michael Casey, vice president of marketing for Solid Data Systems, Inc. and former Gartner Group storage analyst, about SSDS and why he thinks this technology is going to give tape backup a run for its money.
SEARCHSTORAGE.COM: What is solid state storage?
CASEY: Solid state storage is storage based on random access memory (RAM) rather than magnetic media. Depending on the design of the system, it can be either volatile or non-volatile: volatile systems may lose data if a power loss occurs, while non-volatile systems retain data even if power is removed. A solid state storage system contains an embedded SCSI or Fibre Channel controller that interfaces to the server operating system. It stores data in dynamic random access memory (DRAM) during write operations and retrieves it during read operations. Since DRAM alone loses its contents when power is lost, commercial-grade solid state storage systems contain built-in battery backup to make them non-volatile. Some also have provisions to write data to a standard hard drive for indefinite safekeeping until power is restored. Solid state storage reads and writes data without any moving parts. This eliminates the I/O delays inherent in rotating magnetic disks, where small movable heads must seek to the area of the disk where the data is located before each read or write.
By eliminating these I/O delays, servers are freed up to do more computational work per second. In highly transaction-oriented systems, this can effectively multiply transaction processing capacity on a per-server basis; companies using solid state disk technology have reported multiplying per-server transaction capacity by as much as eight times. Solid state storage is about 500 times faster than rotating disks, and it is highly reliable because it has no moving parts.
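The advantage of eliminating seek and rotational delays can be illustrated with some back-of-the-envelope arithmetic. The latency figures below are typical-of-the-era assumptions for illustration only, not measurements from the interview:

```python
# Rough latency arithmetic behind the "no moving parts" advantage.
# All figures are assumed, representative values, not measured data.
seek_ms = 8.0          # assumed average seek time of a rotating disk
rotational_ms = 4.0    # assumed average rotational latency (half a revolution)
disk_access_ms = seek_ms + rotational_ms   # mechanical delay per random I/O

ram_access_ms = 0.02   # assumed end-to-end access time of a DRAM-based disk

# Ratio of mechanical access time to RAM access time: on the order of
# the "hundreds of times faster" figure cited for solid state storage.
print(disk_access_ms / ram_access_ms)
```

A server waiting on thousands of such random I/Os per second spends most of its time idle, which is why removing the mechanical delay can multiply effective per-server throughput.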
SEARCHSTORAGE.COM: What storage technologies does it compete with?
CASEY: Server-based memory, and the block-level cache built into RAID systems.
Server memory is contained within the server itself and attached to the CPU by a high-speed internal bus. While modestly faster than solid state storage, it is less scalable, is not persistent, and is not sharable with other servers unless complex clustering technology is employed. Using server memory also requires that the application program be specifically designed to do so. If multiple applications use large amounts of server memory, the system can bog down and processing can slow as contention arises.
RAID systems usually incorporate some form of block-level cache. Smaller RAID arrays will use around 2 GB; larger arrays from companies like EMC and Hitachi can go up to 16 GB. Block cache is controlled by a CPU built into the array controller board. The CPU uses vendor-designed algorithms to try to "guess" which blocks to leave in cache and which blocks to flush out of the cache. When a requested block is in the cache, it's called a "hit."
Managing the block cache requires CPU overhead in the RAID controller, which adds to the time it takes to read/write data. If the cache is used to create a RAM disk (similar to solid state storage), some cache memory is locked out of general use, which detracts from the overall performance of the rotating disks. Block cache is also not scalable separately from the RAID array: the array controller frame has a maximum amount of block cache that it can hold.
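The hit/miss and flush behavior Casey describes can be sketched with a minimal block cache using least-recently-used (LRU) eviction. This is one common eviction policy, not necessarily the vendor-designed algorithm of any particular array, and the capacity and block addresses are hypothetical:

```python
from collections import OrderedDict

# Minimal sketch of a block-level read cache with LRU eviction,
# illustrating "hits," "misses," and flushing blocks out of the cache.
class BlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block address -> cached data
        self.hits = 0
        self.misses = 0

    def read(self, address, fetch_from_disk):
        if address in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(address)   # mark most recently used
            return self.blocks[address]
        self.misses += 1
        data = fetch_from_disk(address)        # slow path: rotating disk
        self.blocks[address] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # flush least recently used
        return data

cache = BlockCache(capacity_blocks=2)
disk = lambda addr: f"block-{addr}"
cache.read(1, disk)   # miss
cache.read(2, disk)   # miss
cache.read(1, disk)   # hit
cache.read(3, disk)   # miss; block 2 is flushed out of the cache
```

Note that every miss still pays the full mechanical cost of the disk, and the guessing logic itself consumes controller CPU cycles, which is exactly the overhead the interview points out.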
SEARCHSTORAGE.COM: What is DRAM? SRAM? What's the difference?
CASEY: DRAM is essentially the simplest semiconductor storage device: each bit is stored as a charge on a tiny capacitor. Writing a DRAM location involves charging the capacitor to the desired state, either "1" or "0". Because the capacitor is so small, the data will eventually leak away, even though the gate leakage is minuscule, unless the DRAM is periodically refreshed.
SRAM is a second-generation semiconductor storage device that uses a cross-coupled latch for storage. SRAMs are generally a bit faster than DRAMs and easier to use because no refresh is necessary.
SEARCHSTORAGE.COM: Is it cost-efficient?
CASEY: Solid state storage is very cost-effective for I/O-bound systems. Due to its standard SCSI or Fibre Channel interface, it's a plug-and-play solution that takes only minutes to install, format and begin using. On a dollar-per-megabyte basis, it seems expensive relative to standard RAID storage. But when compared to the cost of the additional servers that would be necessary to achieve the same transactions-per-second rate when attached to standard RAID storage, it's a fraction of the cost. Depending on the performance multiple gained, it may cost as little as 1/4 or 1/8 as much as the additional servers.
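The server-versus-SSD comparison comes down to simple arithmetic. The sketch below uses entirely hypothetical prices to show the shape of the calculation; only the "as much as eight times" multiplier comes from the interview:

```python
# Back-of-the-envelope cost comparison: scaling transaction capacity by
# adding servers versus adding a solid state disk for the hot files.
# All dollar figures are hypothetical, for illustration only.
server_cost = 50_000    # assumed cost of one additional server
ssd_cost = 100_000      # assumed cost of a solid state disk system
speedup = 8             # per-server transaction multiple (reported up to 8x)

# Matching an 8x capacity gain by adding servers takes roughly
# (speedup - 1) extra servers alongside the original one.
cost_via_servers = (speedup - 1) * server_cost

print(cost_via_servers)          # total spent on extra servers
print(ssd_cost / cost_via_servers)  # SSD as a fraction of that cost
```

Under these assumed prices the SSD comes in at well under half the cost of the extra servers, which is the direction of the 1/4-to-1/8 figure cited above; real ratios depend on actual hardware prices and the speedup achieved.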
Another savings is the labor cost required to continuously tune, modify or even re-engineer an application program. By implementing solid state storage, companies have saved weeks and even months of expensive, highly skilled labor.
SEARCHSTORAGE.COM: What will this do for an end user? How will it benefit them in their daily operations?
CASEY: Solid state storage removes I/O bottlenecks in highly transaction-oriented environments. This allows servers to perform more effectively rather than sitting idle while they wait for I/O operations to complete on standard rotating disks. For an end user (an IS professional), this equates to less time spent tuning systems and more effective transaction capacity at a lower cost.
SEARCHSTORAGE.COM: What applications and file types can be stored in solid state?
CASEY: Solid state storage is best suited for enterprise-level Internet and e-business application environments that are highly transaction-oriented. While any application or file type can be stored on a solid state disk, it's most cost-effective to place only the system "hot files" on solid state. Hot files are usually small files that are very read/write intensive: they typically constitute only 5 percent of the overall data space but consume more than 50 percent of the I/O. Applications such as e-mail message queues, wireless communication, e-commerce and database applications seem to benefit the most. However, any highly transaction-oriented application has the potential to benefit from solid state storage.
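The hot-file idea lends itself to a simple selection heuristic: rank files by I/O activity and pick the smallest set that accounts for the bulk of total I/O. The file names and counts below are made-up illustration data, and the 50 percent threshold echoes the figure quoted above:

```python
# Sketch of identifying "hot files" from per-file I/O counts.
# File names and counts are hypothetical illustration data.
io_counts = {
    "mail_queue.db": 9000,
    "txn_log.dat": 6000,
    "archive_2019.tar": 300,
    "reports.pdf": 200,
    "backup.img": 500,
}

total_io = sum(io_counts.values())
hot_files = []
covered = 0
# Walk the files from busiest to least busy, stopping once the selected
# set accounts for at least half of all I/O.
for name, count in sorted(io_counts.items(), key=lambda kv: kv[1], reverse=True):
    hot_files.append(name)
    covered += count
    if covered / total_io >= 0.5:
        break

print(hot_files)   # the small set of files dominating the I/O
```

In this toy data set a single busy file crosses the 50 percent line; in practice the candidate set would be a handful of small, I/O-intensive files, which is what makes a modest amount of solid state capacity go a long way.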
SEARCHSTORAGE.COM: Why would a user opt for this solution over say a disk storage device?
CASEY: It's not really solid state versus standard RAID or JBOD disk storage. Solid state is complementary and has its place alongside rotating disk storage. Solid state storage is best suited for small files that require ultra-high speed, while RAID is best suited for larger, less frequently accessed files.
On Wednesday, June 14, Casey will lead an interactive chat on SearchStorage.com. Casey was a Gartner Group analyst before he joined Solid Data Systems, and he brings a fresh perspective to this discussion of SSDS in relationship to RAID, SAN, and NAS. The chat is free. Log on at noon EDT, June 14 and go to the chat room. As background for this chat, see Casey's white paper on Solid State File Caching.