
Software RAID on Linux with mdadm

Now that we've completed our initial examination of the basics of RAID levels (including Nested RAID) it's time to turn our attention to RAID functionality on Linux using software. In this article we will be discussing mdadm -- the software RAID administration tool for Linux. It comes with virtually every Linux distribution and has some unique features that many hardware RAID cards don't.

Monitoring an mdadm RAID array

The “follow” or “monitor” mode of mdadm gives you several pretty good options for keeping an eye on your arrays. Exactly how you monitor them really depends on how you like to work and on any tools or processes you have already developed. There are several articles on the web that show how to monitor an mdadm array.

To help you get started, I will present a few tips for monitoring mdadm arrays. First, I recommend configuring mdadm to email you whenever the state of an array changes. There are several articles that discuss how to do this. For example, this blog shows how to use mdadm to send you email in the event of a problem.

A second posting that shows you how to use mdadm to email you in the event of a problem is here. Look at entry #7 (kevlaur), which contains some pretty good instructions on setting up alert notifications. What is interesting about this posting is that the author uses a “program” to send the email rather than relying on mdadm. This gives you some flexibility and allows you to customize things (personally, I like the approach, but I would have mdadm send an email as well, just to be sure).
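
To make this concrete, here is a minimal sketch of the mdadm side of things, assuming your system can already send mail; the email address and the path to the custom alert script are just placeholders:

# In /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on some distributions):
MAILADDR admin@example.com
PROGRAM /usr/local/bin/md-alert      # optional: hypothetical custom alert script

# Run the monitor as a daemon, polling every 30 minutes, and send a
# test alert for each array to confirm that email delivery works:
% /sbin/mdadm --monitor --scan --daemonise --delay=1800 --test

Many distributions also ship a service (often called mdmonitor) that runs this monitor for you once MAILADDR is set.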

If you want to be more involved in monitoring your arrays, there are two primary ways to get more detail: (1) cat /proc/mdstat, and (2) mdadm --detail [device]. The first option gives you a quick overview of the status of the array(s). You can parse this output (perhaps with perl or python) and either create a special log or send the output to syslog for processing by syslog tools. Alternatively, you could create a simple monitoring metric that can be used in conjunction with ganglia or something similar.
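
As a rough sketch of the first approach, the snippet below greps /proc/mdstat for the underscore that marks a failed member in the status brackets (e.g. [U_]) and forwards a warning to syslog; it is only a heuristic, and the tag and priority are arbitrary choices:

#!/bin/bash
# Crude /proc/mdstat check: a failed member shows up as "_" inside the
# status brackets, e.g. [UU] becomes [U_] when one disk of a mirror dies.
if grep -q '\[.*_.*\]' /proc/mdstat; then
    logger -t mdstat-check -p daemon.warning "md array degraded, see /proc/mdstat"
fi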

The second option gives you more detail than mdstat, but again, you can parse the output and then perform some action (logs, syslog, ganglia, etc.). The details are really up to you, since you likely have your own processes or techniques for monitoring.
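
A minimal sketch of the second approach is below; it leans on the “--test” option (covered later in the misc mode discussion) so that the exit status of mdadm --detail reflects the health of the array. The device name is just an example:

#!/bin/bash
# The exit status of "mdadm --detail --test" is 0 for a healthy array and
# non-zero if the array is degraded or has failed; log anything unhealthy.
if ! /sbin/mdadm --detail --test /dev/md0 > /dev/null; then
    logger -t md-check "RAID array /dev/md0 is degraded or failed"
fi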

One last option is to use Munin, a monitoring tool somewhat similar to Ganglia. It has plugins that allow you to monitor your mdadm-created RAID arrays.

Building an mdadm RAID array

Another mode of operation in mdadm is build mode, which builds an array without per-device superblocks. This means that mdadm cannot scan the devices and assemble them into an array automatically. Consequently, you have to be careful to differentiate between the initial creation of the array and later re-assembly of the array (or you can lose data). In addition, if you built an array using build mode, the checks that would normally be performed between the devices are skipped. Basically, you have to be very careful when using this mode of operation and know exactly what you are doing.

I really don’t recommend using this mode of operation, so I won’t be discussing it. If you are interested in using build mode, you can read the man page or search around the Internet for references.

Growing an mdadm RAID array

One of the secret weapons you get with mdadm is the ability to grow, reshape, or even change the RAID level of md arrays. There are some limitations on growing and/or reshaping arrays; however, just having that ability is a pretty major accomplishment for mdadm.

The basic option for growing or reshaping md arrays is “-G” or “--grow”. There are a number of options that can be used, and things can become complex fairly quickly, so I won’t go over them in this introductory article. Please read the man pages for the options, or you can Google for information about growing md arrays. However, let’s take a high-level look at what the “--grow” option can do.

We can use the “--grow” option to add a third disk to a two-disk RAID-1 configuration or add another disk to an existing RAID-5 configuration. But we can also use the “--grow” option to change RAID levels. For example, we can convert a two-disk RAID-1 md array to a two-disk RAID-5 md array. The following list of possible changes (reshapes) comes directly from the mdadm author’s blog:


  • A RAID-1 array can change the number of devices or change the size of individual devices. A 2 drive RAID-1 can be converted to a 2 drive RAID-5.
  • A RAID-4 can change the number of devices or the size of individual devices. It cannot be converted to RAID-5 yet (though that should be trivial to implement).
  • A RAID-5 can change the number of devices, the size of the individual devices, the chunk size and the layout. A 2 drive RAID-5 can be converted to RAID-1, and a 3 or more drive RAID-5 can be converted to RAID-6.
  • A RAID-6 can change the number of devices, the size of the individual devices, the chunk size and the layout. And RAID-6 can be converted to RAID-5 by first changing the layout to be similar to RAID-5, then changing the level.
  • A LINEAR array can have a device added to it which will simply increase its size.
  • RAID-10 and RAID-0: These arrays cannot be reshaped at all at present.

So you can see that there is a great deal of flexibility in mdadm with respect to changing the shape of md arrays, changing RAID levels, and adding or removing devices.

As I mentioned previously, the details of reshaping and changing RAID levels can get complicated. I suggest that before you use this option, you read all the literature you can, and then ask some experts who can tell you whether your commands are correct. And before starting anything, I would definitely make sure you have a backup copy of the data and that the storage array isn’t in production.
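
With those caveats in mind, here is a minimal sketch of two common “--grow” operations; the device names are just examples, and the reshape can take a long time (you can watch its progress in /proc/mdstat):

# Add a fourth disk to a 3-drive RAID-5 and reshape the array across it:
% /sbin/mdadm /dev/md0 --add /dev/sdd1
% /sbin/mdadm --grow /dev/md0 --raid-devices=4

# Convert a two-disk RAID-1 into a two-disk RAID-5:
% /sbin/mdadm --grow /dev/md0 --level=raid5

Note that growing the array only grows the md device; any filesystem on top of it still has to be resized separately (for example, with resize2fs for ext filesystems).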

Managing an md RAID array

Managing an md RAID array primarily consists of managing the devices within the array. This can include adding, removing, or failing disks within an array.

An easy (sort of) way to tell if you are using manage mode is this: if you give a device before any options on the mdadm command line, or if the first option is “--add”, “--fail”, or “--remove”, then you are using the manage mode of mdadm. The general form of the mdadm command is,

mdadm device options ... devices ...


The options used in manage mode are:


    -a, --add [device] This option allows you to add the specified devices to the specified array while the array is running.


    --re-add [device] This option allows you to re-add a device that was recently removed from the array.


    -r, --remove [device] This option allows you to remove the specified device. The device must not be active, so it must be either a failed device or a spare.


    -f, --fail [device] This option marks the specified device as faulty (failed).


    --set-faulty This option is the same as -f.

You can combine these options in one command, but all of them must affect the same array.

A simple example of the manage mode of mdadm is,

% /sbin/mdadm /dev/md1 --add /dev/sdc1 --fail /dev/sdb1 --remove /dev/sdb1


In this example, the array /dev/md1 is the “target” of the mdadm command. The first option, “--add /dev/sdc1”, adds a device (/dev/sdc1) to the array. Then the option “--fail /dev/sdb1” marks that device as faulty, telling the array that the device has failed. Finally, the third option, “--remove /dev/sdb1”, removes the device from the array, at which point it can be removed from the system. Note that you need to fail a device before you can remove it.

One cool feature of mdadm is that if you remove a device (disk) from an array, you can add it back (--re-add) and mdadm will only resync the blocks that changed while the disk was out of the array, provided the array has a write-intent bitmap. This requires an array created with superblocks (i.e., you didn’t use the “build” mode).
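
As a hedged sketch, the write-intent bitmap that makes the fast re-add possible can be added to an existing array, and a removed disk can then be returned with “--re-add” (the device names are examples):

# Add an internal write-intent bitmap to an existing array:
% /sbin/mdadm --grow /dev/md1 --bitmap=internal

# Later, return a previously removed member; only the blocks that
# changed while it was out of the array are resynced:
% /sbin/mdadm /dev/md1 --re-add /dev/sdb1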

Misc

This last mode of operation is sort of a catch-all for options and commands that don’t fit into the other six modes. In general this mode supports some operations on active arrays, operations on component devices, and the gathering of information about the arrays.

The general form of the mdadm command in “misc” mode is the following.

mdadm options ... devices ...


Notice that for a “misc” command, no array is specified before the options. The options for this mode are the following:


    -Q, --query This option examines a device to see if it is an md device and if it is a component of an md array. The information discovered by mdadm is presented in the output.


    -D, --detail [md-device] This option prints out details of one or more md arrays. If you add the option “--brief” or “--scan”, the amount of detail in the output is reduced, but the format is suitable for inclusion in /etc/mdadm.conf (an optional configuration file for mdadm).


    -E, --examine This option prints the content of the md superblock on the device (or all devices if no device is specified). As with the “--detail” option, if you use “--brief” or “--scan” the amount of output is reduced and it is more amenable to /etc/mdadm.conf.


    -X, --examine-bitmap This option reports information about an md bitmap file.


    -R, --run This option will start (activate) a partially assembled md array.


    -S, --stop This option stops (deactivates) an active md array.


    -o, --readonly This option marks an active array as read-only, provided it is not currently in use.


    -w, --readwrite This option marks a read-only array as read-write.


    --zero-superblock This option overwrites a valid md superblock on a device with zeros, effectively erasing it. This is useful when disks (devices) from an old md array are reused in a new array.


    -t, --test This option, when used with the --detail option, sets the exit status of mdadm to reflect the status of the md device. This can be very useful when scripting monitoring tools. It is also useful if you want to start the md array yourself rather than rely on the kernel to autostart it.

Of all of the “misc” mode operations, the “--query”, “--detail”, and “--examine” options are the most commonly used.
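
To give you a feel for these, here are a few typical invocations (the device names are examples, and the redirection assumes you actually want to append to /etc/mdadm.conf):

# Is this an md device, or a component of one?
% /sbin/mdadm --query /dev/md0
% /sbin/mdadm --query /dev/sdb1

# Full detail for an array, and the superblock of one component:
% /sbin/mdadm --detail /dev/md0
% /sbin/mdadm --examine /dev/sdb1

# Capture all arrays in a brief form suitable for /etc/mdadm.conf:
% /sbin/mdadm --detail --scan >> /etc/mdadm.conf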

Summary

We’ve spent some time talking about RAID configurations, both single-level and Nested RAID configurations. In all of the discussions we’ve just mentioned that the RAID operations are handled by a “RAID controller”. There are two types of RAID controllers – hardware and software. The hardware RAID controller has a dedicated processor on an add-in card that handles all the RAID computations. In contrast, software RAID uses the system CPU for RAID computations (this includes “fakeRAID” controllers as well).

This article is an introduction to mdadm, the management/admin tool for Linux software RAID that comes with virtually every Linux distribution. The tool is very flexible allowing for the standard RAID levels and Nested RAID configurations, including some specialized RAID-10 configurations we’ve discussed previously. You can even use it to build some “Triple Lindy” Nested-RAID configurations if you want.

Mdadm has seven different “modes” of operation which we discussed. These modes allow you to create and start a RAID array, assemble a RAID array (useful when the system boots), follow or monitor a RAID array, build a RAID array (basically doing everything by hand – not recommended), grow a RAID array (one of the secret weapons of mdadm), manage a RAID array, and a “miscellaneous” category for functions that you may need that didn’t fall into the other categories.

There are some really great features in mdadm that can easily be glossed over in a mad rush to build a RAID configuration. There are two big ones that I want to highlight. The first feature is the set of standard monitoring tools in mdadm that give you a great starting place for watching the status of your md arrays, including the ability to send out email alerts. Plus, you can write fairly simple scripts to parse array status information, which can be used with monitoring tools such as ganglia or munin.

The second feature, which is probably the most significant, is the ability to grow and reshape md arrays. This allows you to add devices to an existing RAID array and grow the array to include the added space. However, the really cool feature is that you can use mdadm to change RAID levels without losing data. For example, you could convert a two-disk RAID-1 into a two-disk RAID-5 configuration. Then you could add disks to grow the RAID-5 configuration. Then you could convert the RAID-5 into a RAID-6 configuration. While I haven’t used this feature, this is pretty nifty if you ask me.

Mdadm is a great tool for Linux that is easy to use and gives you a great deal of control over RAID arrays. If you are thinking about RAID on Linux, be sure to take a look at mdadm.
