Achieving Fault Tolerance Using RAID Technology

Data storage, integrity, and availability are critical concerns in enterprise network environments. A Single Large Expensive Drive (SLED) running in a network server is the network's single point of failure: if the hard drive crashes, the network goes down with it. Dependence on hard disk storage, combined with the fact that every hard disk has a finite Mean Time Between Failures (MTBF), has led to the widespread adoption of Redundant Array of Independent Disks (RAID) technology. RAID lets enterprises exploit the fault-tolerant data redundancy, improved availability and performance, and drive-failover capability designed into disk array subsystems to maintain data integrity and availability before, during, and after a hard disk failure.
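The redundancy mechanism that lets an array survive a drive failure can be illustrated with parity, the scheme several RAID levels use: a parity block is the byte-wise XOR of the data blocks, so any one lost block can be rebuilt from the survivors. The sketch below is illustrative only; the function names and sample data are invented for this example, not taken from any particular RAID implementation.

```python
from functools import reduce

def parity_block(blocks):
    """Byte-wise XOR of equal-length blocks, as stored on a parity disk."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild_block(survivors, parity):
    """Reconstruct one lost data block: XOR the parity with the surviving blocks."""
    return parity_block(survivors + [parity])

# Three hypothetical 4-byte stripes, one per data disk.
disks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = parity_block(disks)  # written to the dedicated parity disk

# Simulate losing disk 1, then rebuilding it from the other disks plus parity.
recovered = rebuild_block([disks[0], disks[2]], parity)
assert recovered == disks[1]
```

Because XOR is its own inverse, the same routine computes the parity and rebuilds a missing block, which is one reason parity-based redundancy is cheap enough to run online while the array keeps serving requests.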
The Birth of RAID

The RAID acronym was introduced in a 1988 paper by University of California, Berkeley researchers David Patterson, Garth Gibson, and Randy Katz. The paper proposed using an array of inexpensive disk drives that appears to the host operating system as a single logical disk, in order to simplify disk management, boost disk Input/Output (I/O) performance, lower latency (the time a packet of data takes to travel from one designated point to another), maintain data integrity, and provide fault tolerance that can recover from hard disk crashes beyond the capabilities of a SLED.
The Redundant Array of Inexpensive Disks strategy has dominated the fault-tolerant storage industry ever since. As disk drive prices fell, the acronym was reinterpreted as Redundant Array of Independent Disks. RAID controllers, the hardware components that create a RAID array from Just a Bunch of Disks (JBOD) sharing a common bus, remain a focal point of product development (Leider, 2001).
Hard disk redundancy is the fault-tolerance objective of implementing RAID technology in network servers, providing a form of online data backup in...