Introduction to RAID

RAID is one of those technologies that has really revolutionized storage. In this article, we'll review the six most common single RAID levels and describe how each works and what issues surround them.

In this illustration, when block A1 is written to disk 0, the same block is also written to disk 1. Since the disks are independent of one another, the write to disk 0 and the write to disk 1 can happen at the same time. When the data is read, the RAID controller can read block A1 from disk 0 and block A2 from disk 1 at the same time, again because the disks are independent. So overall, the write performance of a RAID-1 array is the same as a single disk, while the read performance is actually faster than that of a single disk.

The strength of RAID-1 lies in the fact that both disks contain copies of the data. So if you lose disk 0, the exact same data is still on disk 1. This greatly improves data reliability and availability.

The capacity of RAID-1 is the following:

Capacity = min(disk sizes)

meaning that the capacity of RAID-1 is limited by the smallest disk (you can use different size drives in RAID-1). For example, if you have a 500GB disk and a 400GB disk, then the maximum capacity would be 400GB (i.e. 400GB of the 500GB drive is used as a mirror, and the remaining 100GB is not used). RAID-1 has the lowest capacity utilization of any RAID configuration.

The reliability, or probability of failure, is also described on Wikipedia. Since the disks are mirrors of one another but still independent, the probability of having both disks fail, leading to data loss, is the following:

P(dual failure) = P(single drive failure)²

So the probability of failure of a RAID-1 configuration is the square of the failure probability of a single drive. Since the probability of failure of a single drive is less than 1, the probability of failure of the RAID-1 array is even smaller than that of a single drive. The reference has a more extensive discussion of the probability of failure, but in general the probability is fairly low.
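As a quick sanity check, both formulas can be evaluated in a few lines of Python (the function names and the 3% single-drive failure probability below are illustrative, not from any drive datasheet):

```python
# RAID-1 capacity: limited by the smallest member disk.
def raid1_capacity(disk_sizes_gb):
    return min(disk_sizes_gb)

# Probability that both (independent) mirrored disks fail, losing the data.
def raid1_dual_failure_probability(p_single):
    return p_single ** 2

assert raid1_capacity([500, 400]) == 400              # the 500GB/400GB example above
assert raid1_dual_failure_probability(0.03) < 0.03    # squared probability is smaller
```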

One might be tempted to use RAID-1 for storing important data in place of backups. While RAID-1 improves data reliability and availability, it does not replace backups. If the RAID controller fails, or if the unit containing the RAID-1 array suffers some sort of failure, then the data is unavailable and may even be lost; without a backup, you no longer have a copy of your data. The moral of the tale is: make real backups and don't rely on RAID-1.

Table 2 below is a quick summary of RAID-1 with a few highlights.

Table 2 – RAID-1 Highlights

Raid Level: RAID-1

Pros:

  • Great data redundancy/availability
  • Great MTTF

Cons:

  • Worst capacity utilization of single RAID levels
  • Good read performance, but limited write performance

Storage Efficiency: 50% (assuming two drives of the same size)

Minimum Number of Disks: 2

This RAID level was one of the original five defined, but it is no longer really used. The basic concept is that RAID-2 stripes data at the bit level instead of the block level (remember that RAID-0 stripes at the block level) and uses a Hamming code for parity computations. In RAID-2, the first bit is written to the first drive, the second bit is written to the second drive, and so on. Then a Hamming-code parity is computed and stored either across the disks or on a separate disk. With this approach you can get very high data throughput rates since the data is striped across several drives, but you also lose a little performance because you have to compute and store the parity.

A cool feature of RAID-2 is that it can detect single-bit errors and recover from them. This prevents data errors, or what some people call “bit rot”. For an overall evaluation of RAID-2, there is this link.
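To make the single-bit recovery concrete, here is a minimal Hamming(7,4) sketch in Python. This is the standard textbook code layout, not anything specific to a particular RAID-2 controller: four data bits become a seven-bit codeword, and any single flipped bit can be located and corrected.

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword that survives one flipped bit.
def hamming74_encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4        # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_position = s1 + 2 * s2 + 4 * s3   # 0 means no single-bit error
    if error_position:
        c[error_position - 1] ^= 1          # flip the bad bit back
    return c

codeword = hamming74_encode(1, 0, 1, 1)
corrupted = list(codeword)
corrupted[2] ^= 1                           # flip one bit "on disk"
assert hamming74_correct(corrupted) == codeword
```

The syndrome (s1, s2, s3) directly spells out the position of the flipped bit, which is exactly the property that let RAID-2 repair a bad bit on the fly.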

According to this article, hard drives themselves added error correction based on Hamming codes, which made using them at the RAID level redundant, so people stopped using RAID-2.

RAID-3 uses data striping at the byte level and adds parity computations, storing them on a dedicated parity disk. Figure 3 from Wikipedia (image by Cburnett) illustrates how the data is written to four disks in RAID-3.

Figure 3: RAID-3 layout (from Cburnett at wikipedia under the GFDL license)

This RAID-3 layout uses four disks, striping data across three of them and using the fourth disk to store parity information. So for a chunk of data “A”, byte A1 is written to disk 0, byte A2 is written to disk 1, and byte A3 is written to disk 2. Then the parity of bytes A1, A2, and A3 is computed (labeled Ap(1-3) in Figure 3) and written to disk 3. The process then repeats until the entire chunk of data “A” is written. Notice that the minimum number of disks you can have in RAID-3 is three (you need two data disks and a third disk to store the parity).
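The parity computation itself is just a bytewise XOR across the data disks, and because XOR is its own inverse, XOR-ing the parity with the surviving members regenerates a lost member. A minimal Python sketch (the disk contents are made-up example bytes):

```python
from functools import reduce

def xor_parity(members):
    """Bytewise XOR across equal-length byte strings (one stripe per disk)."""
    return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*members))

disk0, disk1, disk2 = b"A1", b"A2", b"A3"   # bytes of chunk "A" on the data disks
parity = xor_parity([disk0, disk1, disk2])  # stored on the dedicated parity disk

# Lose disk 1: XOR the parity with the surviving data disks to rebuild it.
rebuilt = xor_parity([disk0, disk2, parity])
assert rebuilt == disk1
```

This same XOR relationship is what later parity RAID levels (RAID-4 and RAID-5) rely on as well.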

RAID-3 is also capable of very high performance, while the addition of parity gives back some data reliability and availability compared to a pure striping model à la RAID-0. Since the stripe unit is a byte, any block-sized request spans all of the data disks, so all of the disks in a byte-level stripe are accessed at the same time, improving read and write performance. However, the RAID-3 configuration has some possible side effects.

In particular, this link explains that RAID-3 cannot accommodate multiple requests at the same time. This results from the fact that a block of data will be spread across all members of the RAID-3 group (minus the parity disk) and the data has to reside in the same location on each drive. This means that the disks (spindles) have to be accessed at the same time, using the same stripe, which usually means that the spindles have to be synchronized. As a consequence, if an I/O request for data chunk A comes into the array (see Figure 3), all of the disks have to seek to the beginning of the chunk A and read their specific bytes and send it back to the RAID-3 controller. Any other data request, such as that for a data chunk labeled B in Figure 3 is blocked until the request for “A” has completed because all of the drives are being used.

The capacity of RAID-3 is the following:

Capacity = min(disk sizes) * (n-1)

meaning that the capacity of RAID-3 is limited by the smallest disk (you can use different size drives in RAID-3) multiplied by the number of drives n, minus one. The “minus one” part is because of the dedicated parity drive.
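Following the formula above, the capacity is easy to check in a couple of lines (the drive sizes are illustrative):

```python
# RAID-3 capacity: smallest drive times (number of drives - 1),
# since one drive's worth of space goes to parity.
def raid3_capacity(disk_sizes_gb):
    n = len(disk_sizes_gb)
    return min(disk_sizes_gb) * (n - 1)

# Four drives, smallest is 400GB: 3 x 400GB = 1200GB usable.
assert raid3_capacity([500, 400, 500, 500]) == 1200
```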

RAID-3 has some good performance since it is similar to RAID-0 (striping), but you have to assume some reduction in performance because of the parity computations (this is done by the RAID controller). However, if you lose the parity disk you will not lose data (the data remains on the other disks). If you lose a data disk, you still have the parity disk so you can recover data. So RAID-3 offers more data availability and reliability than RAID-0 but with some reduction in performance because of the parity computations and I/O. More discussion about the performance of RAID-3 is contained at this link.

RAID-3 isn’t very popular in the real world, but from time to time you do see it used. RAID-3 is used in situations where RAID-0 is unacceptable because it lacks data redundancy, and where the data throughput reduction due to the parity computations is acceptable.

Table 3 below is a quick summary of RAID-3 with a few highlights.

Table 3 – RAID-3 Highlights

Raid Level: RAID-3

Pros:

  • Good data redundancy/availability (can tolerate the loss of 1 drive)
  • Good read performance since all of the drives are read at the same time
  • Reasonable write performance, though parity computations cause some reduction
  • Can lose one drive without losing data

Cons:

  • Spindles have to be synchronized
  • Data access can be blocked because all drives are accessed at the same time for reads or writes

Storage Efficiency: (n - 1) / n, where n is the number of drives

Minimum Number of Disks: 3 (have to be identical)

RAID-3 improved on pure striping by adding a parity disk for some reliability. In a similar fashion, RAID-4 builds on RAID-0 by adding a parity disk to block-level striping. Since the striping is now done at the block level, each disk can be accessed independently to read or write data, allowing multiple data accesses to happen at the same time. Figure 4 below from Wikipedia (image by Cburnett) illustrates how the data is written to four disks in RAID-4.

Figure 4: RAID-4 layout (from Cburnett at Wikipedia under the GFDL license)
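The independent-access property follows from where blocks land. A simplified layout sketch (the function name and the round-robin placement are a common textbook model of RAID-4, not a specific controller's algorithm):

```python
# RAID-4 places blocks round-robin across the data disks, with one
# fixed disk dedicated to parity.
def raid4_location(block_number, n_disks):
    data_disks = n_disks - 1             # last disk is dedicated to parity
    disk = block_number % data_disks
    stripe = block_number // data_disks
    return disk, stripe

# With 4 disks (3 data + 1 parity), consecutive blocks land on different
# disks, so independent requests can proceed concurrently.
assert [raid4_location(b, 4)[0] for b in range(6)] == [0, 1, 2, 0, 1, 2]
```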

Comments on "Introduction to RAID"


You have typos. Consider:

$article_text =~ s/RIAD/RAID/g;

Also, you did not explain how parity works, which is something that confuses RAID newcomers. They need to fathom the concept that the parity is combined with the surrounding data to compute what the original data was so that it can be recreated.


it’s worth noting that MD provides non-nested raid10: it’s a single raid level that merely provides replicas of blocks (on multiple disks, of course.) with 2 disks, it’s the same as raid1, but can still provide raid1-level redundancy with 3 or more disks. more disks give you a raid0-like increase in bandwidth and/or throughput. it also lets you choose replication of more than 2x.

but in general, I think people are gradually realizing that block-level raid is eventually going to become obsolete. there are a lot of advantages to letting a smart filesystem manage redundancy, since that permits file/access-aware choices, and can mitigate some of the issues of block-level raid rebuilds.


Thanks for such a structured article on RAID..


RAID3 and RAID4 are considered to be “two of the most common RAID levels” by whom, exactly?

And not a peep about how any of these relate to linux in an article in something calling itself linuxmag?

Not even a mention of mdraid?


Nice article on RAID, but it would be nice if it covers about where will the RAID controller exist (in BIOS or Kernel or separate controller). And also, where will the RAID controller store the meta data?


Thanks everyone for the comments. Just to clarify a bit:

@roustabout: RAID-3 and RAID-4 were part of the original RAID definition. I didn’t see where I called them “two of the most common RAID levels”. If I did, the intent was to point out that they are part of the original RAID definitions, but not commonly _used_.

For everyone who is concerned that I haven’t talked about mdadm or software RAID, hardware RAID, or “fakeRAID” – that article is coming (as are articles about Nested RAID). This is a whole series of introductory articles on RAID. Talking about specific implementations, particularly for Linux, is coming. You just have to be patient. So @roustabout – you will just have to be patient :)

@markhahn – great comment and I totally agree with you but I also disagree to some extent. Putting RAID functionality into the file system _should_ allow the file system to do really useful things such as only recover the needed blocks during a disk failure. This avoids having to read all of the blocks for recovery and perhaps coming close to the dreaded URE limit.

But this means that we (the community) need to rewrite all file systems to do this. With each file system being unique, this means we are going to have different sets of code that do pretty much the same thing. I don’t think existing file systems will do this (too much work and too disruptive), so that means future file systems should incorporate this (such as btrfs). However, it takes a very long time for a file system to mature, so we may be waiting for several years. So in the meantime, I think block-based RAID is here to stay.

On the other hand, I think the development of object based file systems that don’t use block-based RAID, should be the wave of the future. PanFS from Panasas is an example of this. I think local file systems should adopt this approach (and we’re seeing some of this with ExoFS) because we don’t need to read all the blocks to recover from a disk failure – just the objects that are “missing” or need to be duplicated.

Thanks for bringing up the topic – always good to think about what we need to do next few years.



A great beginning article on RAID and I look forward to more on this topic. You alluded to current disk drives doing their own parity checking/correction; I’d like to see that explored more: just how much onboard data checking do they do? I’ve heard that modern high-density drives generate huge numbers of errors from the raw disk which must be corrected in the onboard electronics of the drive, but I’ve never seen anything definitive about this topic.


I just wanted to make an observation, you start the third page by saying “In this layout, data is written in block stripes to the first three disks (disks 0, 1, and 2) while the third drive (disk 3)” and I think what you meant to say is “…while the fourth drive (disk 3)” since your array starts at disk 0.


Nice article for someone who does not know anything about RAID (like me ) and want to know the basic definition or the general idea.


Good and detailed article


Nice article to clear idea of Raid to the newcomers.


“In the real-world, RAID-4 is rarely used because RAID-5 (see next sub-section) has replaced it.”
Note that Netapp storage, which is most popular today, uses Raid 4 and Raid dp only. Raid dp is kind of raid 4 + one more parity disk




