RAID and LVM, Part Two

Learn how to implement a RAID configuration.
Last month’s column was the first of a three-part series on Linux RAID and LVM configuration. Before proceeding with this month’s column, you should review last month’s article (available online at http://www.linux-mag.com/2006-02/guru_01.html) to re-familiarize yourself with the basic concepts of RAID and LVM. This month, let’s pick up the topic again with a look at configuring RAID using the mdadm package. Next month continues with LVM configuration and use.

Reviewing the Initial Setup

To begin, review your initial setup. You should have a kernel configured to support RAID (and LVM, if you plan to use it), and you should have installed the mdadm package from your distribution or from the mdadm web site, http://www.cse.unsw.edu.au/~neilb/source/mdadm/. You should also have set aside partitions on two or more physical hard disks. Ideally, your RAID partitions should be identical in overall size (or as close as you can manage) and the partitions should be on disks that are similar in speed. If you’re using ATA disks, try to place each disk on a separate channel.
If you plan to convert an existing Linux installation to use RAID, you must either back up your system or prepare a new set of RAID disks to which you can transfer your existing installation. In the former case, be sure that your emergency system is configured with a RAID-enabled kernel and has the mdadm package installed on it.
When preparing partitions to be used as part of a RAID array, give them a partition type code of 0xFD. Linux’s fdisk can do this with its t command. If partitions have another type, the Linux RAID tools won’t automatically identify them.
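For instance, the fdisk dialogue for changing a partition's type code looks roughly like this (the disk and partition number shown here, /dev/hda and partition 7, are only placeholders; substitute your own, and note that the exact prompts vary a bit between fdisk versions):
# fdisk /dev/hda
Command (m for help): t
Partition number (1-7): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)
Command (m for help): w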
Before proceeding further, be sure you want to implement a RAID configuration. RAID provides speed or reliability benefits. LVM is the tool to use if you want increased flexibility in partition sizing (and resizing). LVM also provides a basic form of disk striping, similar to RAID 0, so if your only reason for pairing LVM with RAID would be RAID 0's striping, you might be able to use LVM alone. If you have just one physical disk, don't bother with RAID, as it provides no advantages unless you've got at least two disks. LVM, which will be described next month, can be useful with just one disk, though.
If you have a RAID disk controller that manages the RAID tasks itself, this month’s column won’t be of much use to you; you must configure and use your RAID array using the drivers and tools provided with the controller (or contained in the Linux kernel for your controller). As noted last month, though, many so-called RAID controllers rely on Windows drivers that are similar in principle to Linux’s RAID drivers; you should treat these controllers like ordinary disk controllers and use Linux’s RAID support. (If this is the case, you may not be able to use RAID for both Linux and Windows with these controllers, because the two RAID driver schemes are different and may be incompatible.)

Principles of the mdadm Tool

The mdadm package is dominated by a program called, appropriately enough, mdadm. (The characters "md" stand for multiple device, and "adm" is short for administration; "mdadm", therefore, is the multiple device administration tool.) You can use mdadm to perform several RAID preparation and maintenance tasks, including:
*Create a new array
*Assemble the components of an already-created array into an active array
*Build an array from old-style configurations (those used by the older raidtools package)
*Add or delete drives from an existing array
*Monitor the status of an array
*Change the size or configuration of an array
Describing every detail of mdadm's options is beyond the scope of this column, so you should consult its man page if you need more information. This column focuses on creating and using a new RAID array.
The basic syntax for mdadm is:
mdadm [mode] <raiddevice> [options] <component-devices>
The mode is one of mdadm's modes of operation. Most of the operations described here use the --create (or -C for short) mode.
The raiddevice is a RAID device filename. These device filenames all take the form /dev/mdx, where x is a number from 0 up. For instance, /dev/md0 is the traditional first RAID device.
The options vary depending on what you want to do. A few of the more important ones are described shortly.
The component-devices are the device filenames that go into the RAID device. For instance, /dev/hda7 and /dev/hdc5 might make up a single RAID device, /dev/md0.
Be sure not to confuse the raiddevice and the component-devices. The former is the RAID device filename, which is not linked to anything until you create a RAID array. Also, specify just one RAID device filename per mdadm command line.
The component-devices are the underlying Linux disk partitions that you want to link together to create a single RAID device. When creating a RAID array, you specify multiple Linux partitions on separate physical disks as the component-devices.

Preparing RAID Devices

RAID configuration is perhaps best described via an example. Suppose you want to create a RAID 0 (striping, for improved performance) array from /dev/hda7 and /dev/hdc5. The command to do this looks like this:
# mdadm --create /dev/md0 --level=0 \
  --raid-devices=2 /dev/hda7 /dev/hdc5
mdadm: /dev/hdc5 appears to contain an ext2fs file system
    size=6843656K  mtime=Mon Nov 7 17:20:01 2005
Continue creating array? y
mdadm: array /dev/md0 started.
This example uses the --create option, passes the filename of the RAID device being created (/dev/md0), tells which devices will make up the new RAID device (/dev/hda7 and /dev/hdc5), and passes two other options to mdadm:
*--level=0 (which can be shortened to -l0) specifies the RAID level being created. Levels 0, 1, 4, 5, and 6 are the most commonly used, as described in last month's column.
*--raid-devices=2 tells mdadm how many disks you intend to use in your RAID array. Although this information is implicit in the number of component device filenames you provide, specifying it is a useful safety measure: if you omit a device filename or add a stray one, mdadm complains. When specifying a RAID level that supports error detection and correction, the total number of devices equals the number of RAID devices plus the number of spare devices (a brief example follows this list).
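For instance, a hypothetical RAID 5 array with one spare might be created with a command along these lines; the device filenames are only placeholders, and note that three RAID devices plus one spare requires four component partitions:
# mdadm --create /dev/md1 --level=5 \
  --raid-devices=3 --spare-devices=1 \
  /dev/hda8 /dev/hdc6 /dev/hde5 /dev/hdg5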
Using --create effectively "glues" two or more Linux partitions together via the RAID device filename you specify. Thereafter, Linux can access the combined device via the RAID device filename.
Proper identification of the component partitions (via their 0xFD partition type codes) is important. With RAID support compiled into your kernel, Linux checks these partitions for information written to them by mdadm. If the kernel finds the appropriate information, it automatically links partitions together for access via the /dev/md… device files.
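You can verify what the kernel has assembled by examining the /proc/mdstat pseudo-file. For the RAID 0 array created earlier, its contents would look something like the following; the exact block and chunk figures depend on your partition sizes and kernel version:
# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 hdc5[1] hda7[0]
      13687296 blocks 64k chunks
unused devices: <none>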
In most cases, you should not attempt to access the underlying Linux devices (such as /dev/hda7 and /dev/hdc5) directly. A partial exception is if you use RAID 1 (mirroring). In this configuration, both underlying devices contain precisely the same information. This feature is useful for certain limited situations, such as when booting the computer. Even in this case, though, you should not attempt to write to either underlying partition except through the RAID device.
The preceding example shows one safety feature of mdadm: It has noticed and commented upon the fact that /dev/hdc5 contains an existing ext2 filesystem. This example intentionally overwrites that filesystem.
Depending on the RAID level and features you want to use, several other options may be important to you:
*If your RAID device files (/dev/md0, /dev/md1, and so on) don't exist, you can pass the --auto (or -a) option to --create mode to have mdadm create the necessary device file automatically. This option is seldom necessary because most distributions provide the RAID device files in their default configurations.
*--spare-devices= (or -x) specifies the number of "spare" devices in the array. Spare devices are backups that may be automatically activated in the event of a failure of a primary device. This number is only meaningful for RAID levels that support error detection and correction, such as RAID 1, 4, 5, or 6.
*--remove (or -r) is used with the --manage mode. It removes the specified device from the array. This option should only be used when a device has failed. You can then physically replace the faulty drive and activate the replacement.
*You can force Linux to treat a device as failed by using --set-faulty (--fail or -f) with the --manage mode, as illustrated in the example after this list.
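As a rough illustration of these maintenance options, suppose a disk holding one component of the hypothetical RAID 5 array /dev/md1 sketched earlier begins to fail. You might mark the partition as faulty, remove it, physically replace the drive (giving the new partition the 0xFD type code), and then return the replacement to the array:
# mdadm --manage /dev/md1 --fail /dev/hdc6
# mdadm --manage /dev/md1 --remove /dev/hdc6
# mdadm --manage /dev/md1 --add /dev/hdc6
The --add option isn't described above, but it's the counterpart to --remove: it places a partition back into an array, where it can be rebuilt or held as a spare.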

Querying RAID Devices

The --query (-Q), --detail (-D), and --examine (-E) options to mdadm all display information on the RAID configuration. Some of these options have slightly different effects depending on the device filenames they're given.
The --query option can take either the RAID device filename (such as /dev/md0) or the filename for a component partition (such as /dev/hda7), but --detail works only on RAID device files and --examine works only on component partition filenames.
The --query option provides basic device information, such as the number of devices, the RAID level, and (when you pass the RAID device filename) the size of the RAID array. The --detail and --examine options provide more detailed information, including the number of active, working, failed, and spare devices in the array and the identities of all the constituent devices.
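As a quick illustration, any of the following commands will report on the example array created earlier:
# mdadm --query /dev/md0
# mdadm --detail /dev/md0
# mdadm --examine /dev/hda7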
All of these options can be handy if you’ve forgotten which partitions map together into a RAID array. To minimize confusion, try to map partitions to others with identical partition numbers, such as /dev/hda2 to /dev/hdc2.

Using RAID Devices Directly

If you want to use RAID for speed or reliability improvements and aren't interested in the flexibility benefits of LVM, you can begin using your RAID array as soon as you've defined it. Each RAID array, as created with an mdadm --create command, is assigned a single RAID device filename and can be used as you would use a regular Linux partition. Thus, if you want to use RAID for three Linux filesystems (say, /usr, /home, and /opt), you must create three partitions on each of your RAID disks and run mdadm three times to create three RAID devices (/dev/md0, /dev/md1, and /dev/md2).
Once your RAID devices are created, you can treat them as if they were regular Linux partitions: Use mkfs to create filesystems on them, copy files to them, and so on. If you want to access these RAID devices regularly, you must also modify your /etc/fstab file to reference the RAID device filenames rather than any constituent partitions.
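For example, here's a rough sketch of putting the /dev/md0 array created earlier into service; the choice of ext3 and the /home mount point are arbitrary, so substitute whatever suits your system:
# mke2fs -j /dev/md0
# mkdir -p /home
# mount /dev/md0 /home
The matching /etc/fstab entry would then look something like this:
/dev/md0   /home   ext3   defaults   0 2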

Booting from RAID Devices

One important limitation of RAID is that booting a kernel from a RAID device is not always possible. RAID levels that spread data across multiple partitions are almost certain to break up the kernel across those partitions, as well as the filesystem data structures required to locate and access the kernel. Thus, unless a boot loader understands Linux RAID, you can’t boot from a kernel stored on a RAID device that interleaves its data.
RAID 1 (mirroring) is an exception to this rule. Mirrored partitions are exact copies of one another and each holds the entire contents of the partition. Thus, it’s possible for a boot loader to read a kernel if it’s stored on a RAID 1 device. You can use this fact if you’re configuring two identical disks for RAID: Split your partitions up in whatever way you see fit and configure most of them for RAID 0, RAID 4/5, or RAID 6. Set up a separate set of partitions for /boot, though, and configure them using RAID 1 rather than any other RAID level.
For instance, you might configure /dev/hda1 and /dev/hdc1 as a RAID 1 array, to be referred to as /dev/md0 in Linux, such as in Linux's /etc/fstab file. When you configure your boot loader, have it refer to one of the underlying partitions (/dev/hda1 or /dev/hdc1) rather than to the Linux RAID device (/dev/md0).
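A rough sketch of this arrangement, assuming /dev/hda1 and /dev/hdc1 are small matching partitions of type 0xFD set aside for /boot, might look like this:
# mdadm --create /dev/md0 --level=1 \
  --raid-devices=2 /dev/hda1 /dev/hdc1
# mke2fs -j /dev/md0
The corresponding /etc/fstab line references the RAID device rather than the underlying partitions:
/dev/md0   /boot   ext3   defaults   0 2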
Linux should have support for the RAID level used by its root (/) filesystem built into the kernel. If you built this support as a module, Linux won’t be able to read the /lib/modules directory tree to find the relevant driver modules, so the kernel will be unable to mount the root filesystem. If you really want to build all the RAID support as modules, you can do so, but you’ll also need to create an initial RAM disk, store it in /boot, and have your boot loader pass it to the kernel.
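If you do go the modular route, a rough sketch of the procedure on a Red Hat-style system might look like the following; the kernel version (2.6.15 here), the root device (/dev/md1), and the mkinitrd syntax are only illustrative, since initial RAM disk tools and naming vary from one distribution to another:
# mkinitrd /boot/initrd-2.6.15.img 2.6.15
A legacy GRUB stanza that passes this RAM disk to the kernel might then read:
title Linux (RAID root)
    root (hd0,0)
    kernel /vmlinuz-2.6.15 ro root=/dev/md1
    initrd /initrd-2.6.15.img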
Keeping the most critical parts of your Linux installation outside of a RAID array enables you to boot the computer even if you have problems with your RAID configuration. If you want this extra safety net, create separate partitions (to be merged into RAID devices) for /usr, /home, and other big directories off of your root (/) directory, leaving the root directory itself, /etc, /bin, /sbin, /lib, /root, and any other really critical directories on your system out of the RAID configuration.

Next Month

Next month’s column looks at LVM. Unlike RAID, LVM can be beneficial even on single-disk systems. You can also combine RAID and LVM to gain speed, reliability, and flexibility benefits all at once.

Roderick W. Smith is the author or co-author of over a dozen books, including Linux in a Windows World and Linux Power Tools. He can be reached at rodsmith@rodsbooks.com.