RAID (Redundant Array of Independent Disks) disk arrays were originally designed not only to increase the capacity of disk systems (by joining multiple drives), but primarily to increase their reliability and resistance to errors.
Though the principle works in general, a disk array is subject to the same restrictions and imperfections as any other system, stemming primarily from its more complicated servicing components (the controller) and software (drivers, firmware). Disk array errors are usually caused by the same factors that affect many other electronic components in today's hurry-up era: programming inconsistencies, design errors, manufacturing defects, and substandard materials, parts and workmanship. So, as often happens, disk arrays, which should protect the data repository against hardware errors and other failures, collapse due to an internal failure of their own, and the result is exactly what disk arrays should prevent: data loss.
The disk array itself is a rather complicated mechanism. Although literature usually only mentions the basic RAID “levels,” there are numerous levels with different types, configurations, versions and combinations of various parameters. In practice, we must consider a number of factors and information during the data reconstruction process in order to guarantee a consistent and accurate result.
The following is an overview of the standard RAID levels. The choice of RAID level is a critical factor both when the array is created and during any possible data rescue.
RAID 0 (stripe, striping, stripe set)
The 0 indicates that this level does not provide the basic disk-array feature called redundancy. Redundancy is the array's resistance to hard disk failures: supplementary information stored on the disk(s) makes it possible to reconstruct the contents of a failed disk onto another (new) one.
During striping, data is spread evenly across several disks. High speeds are achieved thanks to parallel access, but there is no redundant information from which to recover data lost during a failure. It is therefore necessary to recover the data from every defective disk before the array reconstruction process can be performed.
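To illustrate the striping just described, here is a minimal sketch (in Python) of how a logical block number might map to a disk and an offset on that disk. The parameter names (`disk_count`, `stripe_size_blocks`) and the round-robin layout are illustrative assumptions; real controllers differ in stripe unit size and ordering.

```python
# Sketch only: round-robin block mapping for a RAID 0 stripe set.
# Assumed, illustrative parameters - not any particular controller's layout.

def raid0_map(logical_block: int, disk_count: int, stripe_size_blocks: int):
    """Return (disk index, block offset on that disk) for a logical block."""
    stripe_unit = logical_block // stripe_size_blocks   # which stripe unit the block falls into
    within_unit = logical_block % stripe_size_blocks    # offset inside that stripe unit
    disk = stripe_unit % disk_count                      # stripe units rotate across the disks
    row = stripe_unit // disk_count                      # completed "rows" of stripe units
    return disk, row * stripe_size_blocks + within_unit

# Example: 3 disks, stripe unit of 4 blocks.
for lb in range(12):
    print(lb, raid0_map(lb, disk_count=3, stripe_size_blocks=4))
```

The example prints blocks 0-3 landing on disk 0, 4-7 on disk 1, 8-11 on disk 2, which is why the loss of any single disk leaves unrecoverable holes throughout the logical address space.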
RAID 1 (mirror, mirroring)
The most basic RAID level, where data is stored in parallel on two or more hard disks. If one disk fails, an identical copy of the data area remains on the other disk(s).
A mirror of N disks is resistant to the failure of N-1 of them. If all the disks are affected (e.g. by a power surge), data must be recovered from at least one of the damaged disks before the array reconstruction process can begin.
RAID 2, 3, 4, 5, 5e, 5ee, 6 … (striping with parity / ECC)
Like the other variants listed, today's most frequently used level, RAID 5, borrows from RAID 0 and stripes data across several disks, achieving greater throughput, but supplementary (parity) information is also saved to the disks. This information makes it possible to reconstruct the contents of a failed disk onto another (new) hard disk that replaces it.
These systems are usually resistant to the failure of one disk; if more than one drive fails, full data recovery is required on each of the additional defective disks before the array reconstruction process can be performed.
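As a rough illustration of the parity principle, the sketch below (Python, illustrative block contents only) shows why a single failed disk can be rebuilt: XORing the surviving disks with the parity yields the missing data. The rotation of the parity position across disks, which distinguishes RAID 5 from RAID 4, is omitted here for brevity.

```python
# Sketch only: XOR parity as used by RAID 4/5-style levels.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_disks = [b"\x11\x22\x33\x44", b"\xAA\xBB\xCC\xDD", b"\x01\x02\x03\x04"]
parity = xor_blocks(data_disks)            # stored on the parity disk/position

# Disk 1 fails: its contents are rebuilt from the survivors plus parity.
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_disks[1]

# With two failed disks there is no longer enough information in the array,
# which is why each additional defective disk must first be recovered in full.
```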
RAID 0+1, 10 … (combined)
Manufacturers try to combine the benefits of both approaches (speed and redundancy) by nesting various RAID levels. Although such configurations are more difficult to operate, in practice data recovery from them is hardly any more difficult, so the price of array reconstruction is not affected.
Hardware RAID, Software RAID, Pseudo-hardware RAID
The "classic" example of a disk array is several hard disks connected to a dedicated "RAID" controller, which handles all disk array operations and is responsible for the data logistics within the array. These so-called hardware controllers are usually fast and do not load the host system, but they can be expensive. Disk arrays operated at the operating-system level, so-called "software RAID," were therefore introduced. Their disadvantage shows up when the array fails: the array requires constant support from the operating system, which is usually stored on the damaged array itself and is therefore no longer functional after the failure, which complicates data recovery. "Pseudo-hardware RAID" controllers are very popular nowadays. They present themselves as real RAID controllers, but they are in fact only disk interfaces (controllers), and all the array's operating functions are taken over by their drivers. They are usually very cheap and, unfortunately, very failure-prone.
In principle, a disk array can be built from any type of storage media; in practice, of course, we come across hard disks almost exclusively. Thanks to our comprehensive support for various types of media and our original approach to data reconstruction, we can safely say that data recovery is possible for almost any combination of disk array parameters, regardless of the media, operating system or file system used.
To complete the picture, the following is a list of media and systems that you may come across when working with disk arrays. If you cannot find "your" system in the list, don't worry; the list is only illustrative, and data recovery is almost certain even in your case!
Connection method
IDE (ATA), SATA, SCSI, FC-AL, SAS, iSCSI, SAN, NAS…
Operating systems
Windows / Linux / UNIX / MacOS / Novell / other
File systems
FAT (FAT12 / FAT16 / FAT32), NTFS (incl. NTFS 5 and EFS), EXT (EXT, EXT2, EXT3), ReiserFS, JFS, XFS, UFS, Apple HFS and HFS+, Novell NWFS, Novell NSS, HPFS and others
Data rescue from damaged disk arrays is possible for virtually any type of defect:
- electronic / mechanical failure of one, several, or all disks (e.g. PCB, heads, damaged platters or disk mechanics - motor, bearings …)
- faulty controller function (configuration loss, faulty writes …)
- incorrectly calculated parity
- (partial) disk array rewrite, including system reinstallation
We specialize in complicated defects, such as multiple parity recalculations, even when the configuration has been changed and the array partially initialized.