Although solid-state disks are usually used only for critical files, such as database indexes, there are times when it makes sense to put all the active data on a solid-state disk. These instances include applications with such a high volume of I/O that there is little or no time to flush caches to disk.
Solid-state disks are collections of RAM organized to look like a hard disk to the system. They are several hundred times faster in seeks and reads than a conventional hard disk, but they are also much more expensive than hard disks. The cost of solid-state disks has limited their use to rather specialized applications, particularly in databases and data logging.
In I/O-bound applications, the usual advice is to use a larger cache rather than go to a solid-state disk. This is certainly cheaper, but it doesn't always give the same level of performance. Cache works best in environments where I/O surges are followed by slower periods when the information in the cache can be transferred to hard disks. However, not all applications follow this pattern, especially as transaction processing systems become more sophisticated and more heavily loaded. If the system doesn't have enough time to transfer the data to disk, then performance suffers as soon as the cache fills.
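The distinction above can be sketched with a toy simulation. The workloads and rates below are hypothetical assumptions chosen only to illustrate the pattern: a bursty workload gives the cache quiet periods in which to drain, while a sustained workload that exceeds the disk's flush rate fills the cache and keeps it full.

```python
# Minimal sketch (all rates hypothetical) of why a write cache helps only
# when I/O surges are followed by quieter periods that let the cache drain.

def simulate(cache_mb, arrivals_mb_per_sec, flush_mb_per_sec):
    """Return per-second cache occupancy in MB.
    Writes land in the cache; a background flush drains it to disk.
    Once the cache is full, further writes are limited to disk speed."""
    filled = 0.0
    history = []
    for arriving in arrivals_mb_per_sec:
        filled += arriving                       # new writes land in cache
        filled -= min(filled, flush_mb_per_sec)  # background flush to disk
        filled = min(filled, cache_mb)           # cache full: writes stall
        history.append(filled)
    return history

# Bursty workload: 20 MB/s surges separated by idle gaps -- cache drains.
bursty = [20, 20, 0, 0] * 5
# Sustained workload: steady 12 MB/s, above the 10 MB/s flush rate.
sustained = [12] * 20

print(max(simulate(32, bursty, 10)))       # peaks well below the 32 MB cache
print(simulate(32, sustained, 10)[-1])     # cache ends the run completely full
```

The bursty run never approaches the cache limit, while the sustained run saturates it; after that point every write proceeds at disk speed, which is the situation where a solid-state disk pays off.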
In these cases, a sufficiently large solid-state disk can store an entire day's transactions and transfer the information to conventional disks during slower periods. A good example is an application tracking
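Sizing such a solid-state disk is a back-of-envelope calculation: daily capacity is the transaction rate times the record size times the length of the day. The figures below are hypothetical assumptions, not from the article.

```python
# Back-of-envelope sizing for a solid-state disk that must hold a full
# day's transactions before they are flushed to conventional disks.
# All figures are assumed for illustration.

transactions_per_second = 500        # assumed sustained transaction rate
bytes_per_transaction = 512          # assumed record size
seconds_per_day = 24 * 60 * 60

daily_bytes = transactions_per_second * bytes_per_transaction * seconds_per_day
print(f"{daily_bytes / 2**30:.1f} GiB needed for one day's transactions")
# → 20.6 GiB
```

At the prices mentioned earlier in the article, a calculation like this is what decides whether the approach is economical for a given workload.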
Of course, you could do the same thing by keeping the transactions in RAM. However, solid-state disks have elaborate power protection and other features to keep from losing data, making them far less likely to lose data than conventional RAM.
Rick Cook has been writing about mass storage since the days when the term meant an 80K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last twenty years he has been a freelance writer specializing in storage and other computer issues.
This was first published in December 2000