RAID and LVM: Part Three

Here’s how to implement a logical volume management configuration.

This month’s column is the final installment in a three-part series on redundant array of independent disks (RAID) and logical volume management (LVM) technologies; it covers LVM setup and management. LVM enables you to easily modify your filesystem sizes as your needs change.

If you haven’t already done so, read Part One in this series (available online at http://www.linux-mag.com/2006-02/guru_01.html), which presents the basics of RAID and LVM, including kernel configuration options and software installation. If you want to use LVM in conjunction with RAID (to improve disk speed and/or reliability), you should also read Part Two (available at http://www.linux-mag.com/2006-03/guru_01.html), which describes RAID configuration.

Reviewing the Initial Setup

To implement LVM, you must add LVM support to your kernel, as described in Part One of this series. Specifically, you need to activate the “Device Mapper Support” option in the “Device Drivers > Multi-device Support (RAID and LVM)” category. You must also install the LVM software. In most distributions, this comes in the lvm2 package. Alternatively, you can download the support software in source code form using CVS. Check http://sources.redhat.com/lvm2/ for details. (This article assumes you’ll be implementing the latest Linux LVM implementation, LVM2. The earlier LVM implementation uses different kernel options and support tools, as noted in Part One of this series.)

If you wish to use LVM in conjunction with RAID, you can use the RAID tools to combine partitions from multiple disks. The result is RAID devices accessed using /dev/mdx device files, such as /dev/md0. You can then add these to an LVM volume group — that is, a collection of one or more partitions or other devices that are treated as a source of space for the logical volumes, which are used in place of partitions. Ultimately, you’ll create filesystems on the logical volumes and mount them.
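
To preview where this is headed, the whole process boils down to a handful of commands, each discussed in detail below (the RAID device, volume group, and logical volume names here are just examples):

# pvcreate /dev/md0
# vgcreate my_group /dev/md0
# lvcreate -L200M -n usr_local my_group
# mkfs -t reiserfs /dev/my_group/usr_local
# mount /dev/my_group/usr_local /usr/local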

You can implement LVM using just one partition or as many partitions as you like. If you know you’ll be using LVM on a new disk, you might as well create a single partition to use as a volume group. You should also create one or more partitions to hold data that shouldn’t be handled by LVM. Most importantly, the /boot partition, which typically holds the kernel, should reside outside of LVM control because Linux boot loaders can’t currently read data from within logical volumes. The simplest configuration places the root (/) filesystem outside of LVM control, too. (If you want your root filesystem on a logical volume, you must obtain or create an initial RAM disk with the LVM tools so that the kernel can mount the root filesystem. Creating a RAM disk is outside the scope of this article.) To minimize the size of your root filesystem, place large directories, such as /usr, in their own logical volumes. With LVM’s management tools, you can do so without fear of assigning them too much or too little space — just resize them if you guess wrong!

If you’re interested in combining RAID and LVM to obtain the speed benefits of RAID 0, be aware that LVM offers a simple striping mechanism that may be sufficient. Linux’s LVM2 doesn’t support anything equivalent to higher RAID levels, though, so using higher RAID levels with LVM still makes sense if you need these features.

Getting to Work

For purposes of this column, consider the following partition layout:

Partition    Size      ID  System
/dev/hda1       64228  16  Hidden FAT16
/dev/hda2     2048287  0B  W95 FAT32
/dev/hda4     7879882  05  Extended
/dev/hda5       48163  83  Linux /boot
/dev/hda6     5020281  FD  Linux RAID
/dev/hda7      104391  82  Linux swap

/dev/hdc1     6144831  A5  FreeBSD
/dev/hdc2    71360730  05  Extended
/dev/hdc5     2150420  83  Linux /
/dev/hdc6     5020281  FD  Linux RAID

The two partitions marked “Linux RAID” are to be combined together into a single RAID device, /dev/md0. This device will become the physical volume that constitutes an LVM volume group.
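
Creating /dev/md0 itself is covered in Part Two. As a rough reminder, if you're using mdadm and want a two-disk RAID 1 mirror, the command looks something like the following (your RAID level and tool of choice may differ):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda6 /dev/hdc6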

This configuration shows that LVM (and RAID) can coexist with non-Linux partitions. In fact, if you were to implement a system such as this one and then decide later that you don’t need some non-Linux partition, such as the FreeBSD partition, you could convert it to an LVM physical volume and add it to the volume group for allocation as Linux space. In this case, you’d lack RAID benefits on the data stored in this partition, but you would at least get more available disk space in Linux.

Preparing an LVM Physical Volume

Before using LVM, you must prepare space for that use. This involves two steps: marking the partitions or RAID devices as physical volumes and merging the physical volumes into a single volume group.

To mark a partition or RAID device as a physical volume, use the pvcreate command:

# pvcreate /dev/md0
Physical volume “/dev/md0” successfully created

You can think of this command as being similar in some ways to mkfs: It writes data to the disk in preparation for using the partition or device as part of a volume group. If you want to use multiple devices in a volume group, you can either call pvcreate once for each device or pass all the device names at once on the command line, as in pvcreate /dev/md0 /dev/sdb2.

You should also mark disk partitions using the type code 0x8E. Although the Linux LVM tools can use partitions with other type codes, using 0x8E helps avoid confusion down the line. You can make this change with Linux’s fdisk and its t command. Such a change is meaningless with RAID devices, although as noted in previous parts of this series, the partitions that go into a RAID device should be marked with the type code 0xFD.
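
In case the fdisk procedure isn't familiar, a session to change a partition's type code runs roughly as follows (the partition number here is only an example, and the exact prompts vary between fdisk versions):

# fdisk /dev/hdc
Command (m for help): t
Partition number (1-6): 1
Hex code (type L to list codes): 8e
Command (m for help): w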

Once you’ve prepared the individual devices that go into a volume group, you can create that volume group using vgcreate:

# vgcreate my_group /dev/md0

As shown here, vgcreate takes two arguments. The first, my_group, gives a label to the volume group. This label is used as part of the identifier for logical volumes, as described shortly. The second argument, /dev/md0, is the list of devices you want to merge together into the volume group. (Additional tools enable you to modify this list later.)

In this example, a single RAID device makes up the whole volume group. If you want to use multiple devices as your volume group, you can do so by specifying them all at once, as in vgcreate my_group /dev/md0 /dev/sdb2.

Once you’ve created a volume group, you should see a new directory appear in /dev named after the volume group you’ve created. Per this example, the directory will be called /dev/my_group. You can also use the vgdisplay command to verify that the group has been successfully created. vgdisplay displays a large number of statistics on all the volume groups it finds on your system. Most of this information is highly technical, so don’t be too concerned with it for the moment; just be aware that if you see this information, rather than a message that vgdisplay couldn’t find any volume groups, your volume group has been created.

If you want to convert a partition from some other use to be part of a volume group, you can run pvcreate on the partition and then use vgextend to add the new physical volume to the group. For instance, vgextend my_group /dev/hdc1 adds /dev/hdc1 to the existing volume group my_group.

This feature can be handy if you want to convert an existing installation to use LVM: You can copy data from one filesystem to temporary space, convert the partition into a physical volume, add it to a volume group, create a new logical volume, and move the original data onto the logical volume. You can therefore convert an existing installation to use LVM in a piecemeal fashion, one partition at a time, assuming you have sufficient temporary storage space.
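
As a minimal sketch of that piecemeal conversion, assuming a hypothetical /data partition on /dev/hdc1, temporary space mounted at /mnt/temp, and an arbitrary 2 GB logical volume size, the sequence might look like this:

# cp -a /data/. /mnt/temp/          # copy the old partition's contents to temporary space
# umount /data                      # free the old partition
# pvcreate /dev/hdc1                # convert it into a physical volume
# vgextend my_group /dev/hdc1       # add it to the volume group
# lvcreate -L2G -n data my_group    # create a logical volume for the data
# mkfs -t reiserfs /dev/my_group/data
# mount /dev/my_group/data /data
# cp -a /mnt/temp/. /data/          # move the data onto the logical volume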

If you want to know what devices have been collected into a volume group, use the pvdisplay command. This command displays information on your physical volumes, including the names of the volume groups to which they belong.

Creating Logical Volumes

Once you’ve created a volume group, you can create logical volumes within it to store your filesystems. You do this one logical volume at a time using lvcreate:

# lvcreate -L200M -n usr_local my_group
Logical volume “usr_local” created

This example creates a logical volume 200 MB in size (-L200M) called usr_local (-n usr_local) within the volume group called my_group. The result is a file called /dev/mapper/my_group-usr_local, which can also be accessed through the symbolic link /dev/my_group/usr_local. (The latter is the required access method for certain tools.) Ultimately, you treat this file just as you’d treat an ordinary disk partition device file such as /dev/hda5.

The lvcreate command supports a large number of options. The most important and basic is -L (or --size), which sets the logical volume’s size; you specify the size in megabytes (with no suffix or with an M suffix), kilobytes (K), gigabytes (G), or terabytes (T). Another option that might be of interest is -i (or --stripes), which tells lvcreate to create a logical volume that’s striped across multiple physical volumes, similar to a RAID 0 configuration. You pass the number of stripes (that is, the number of physical volumes the logical volume should straddle) as the value of this option.

For instance, the command…

# lvcreate -L200M -i 2 -n usr_local my_group

… creates a logical volume that’s striped across two physical volumes. You can also use -I to specify the stripe size, in kilobytes. The stripe size must be a power of 2 between 2 and 512, such as 64 or 256.
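
For instance, to set a 64 KB stripe size (an arbitrary value, and assuming the volume group spans at least two physical volumes), you might type:

# lvcreate -L200M -i 2 -I 64 -n usr_local my_group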

Using Logical Volumes

Once your logical volumes are created, you can use them much as you’d use ordinary partitions. For instance, suppose you’ve created a new logical volume to hold the contents of /usr/local. You might create a filesystem on this logical volume:

# mkfs -t reiserfs /dev/my_group/usr_local

You could then create an entry in /etc/fstab to mount the logical volume:

/dev/my_group/usr_local /usr/local reiserfs defaults 1 1

When you reboot or type mount -a, the logical volume should be mounted at /usr/local. Of course, this effectively masks any data currently stored in /usr/local, so you should first move any existing data to another location. Temporarily mounting the logical volume at some other point and moving the data there works well if you want to transition your system into using LVM.
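
One way to handle that transition, using a hypothetical temporary mount point of /mnt/newlocal, is to mount the logical volume elsewhere, copy the data over, and then remount it in its final location:

# mkdir /mnt/newlocal
# mount /dev/my_group/usr_local /mnt/newlocal
# cp -a /usr/local/. /mnt/newlocal/
# umount /mnt/newlocal
# mount /dev/my_group/usr_local /usr/local

Once you’re satisfied that everything copied correctly, you can delete the original files to reclaim space on the underlying filesystem.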

The real benefit to LVM, of course, is in its ability to modify logical volumes. Suppose you’ve created a 200 MB logical volume for /usr/local, as shown earlier, but you realize you need more space in that partition, say, 400 MB. You can easily increase the size of the logical volume with lvresize:

# lvresize -L400M /dev/my_group/usr_local
Extending logical volume usr_local to 400.00 MB
Logical volume usr_local successfully resized

This process only resizes the logical volume, though; you must still use filesystem-specific tools to resize the filesystem it contains. In the case of ReiserFS, you could use resize_reiserfs:

# resize_reiserfs /dev/my_group/usr_local
resize_reiserfs 3.6.18 (2003 www.namesys.com)
resize_reiserfs: On-line resizing finished successfully.

In the case of resize_reiserfs, the filesystem need not be unmounted to grow it, so this operation can be done with no disruption to your system. By default, resize_reiserfs expands the filesystem to fill the logical volume, so you don’t need to specify a size when you’ve grown a logical volume.

Other filesystems have their own tools and limits. For ext2fs and ext3fs, you must unmount the filesystem before resizing it with resize2fs. (Experimental tools for resizing ext2fs and ext3fs while mounted are available, but they aren’t yet safe for regular use.) For XFS, resizing is performed while the filesystem is mounted, using xfs_growfs, which takes the mount point as an argument, as in xfs_growfs /usr/local. For JFS, you pass the remount and resize options to mount on an already-mounted filesystem, as in mount -o remount,resize /usr/local.
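
For example, growing an ext3 filesystem on a hypothetical /dev/my_group/home volume (the 1 GB increase is arbitrary) might look like this; with no size argument, resize2fs expands the filesystem to fill its volume:

# umount /home
# lvresize -L+1G /dev/my_group/home
# resize2fs /dev/my_group/home
# mount /dev/my_group/home /home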

Shrinking an existing filesystem is trickier, and isn’t possible with XFS. You must first shrink the filesystem using resize_reiserfs, resize2fs, or JFS’s resize option to mount. In all cases, you must specify the target filesystem size (consult the relevant man pages for details). You must then resize the logical volume using lvresize.

Be very cautious when shrinking with lvresize. If you specify the wrong size, you could end up with a filesystem that thinks it’s larger than the logical volume on which it resides, a mismatch that can quickly corrupt your data.

One trick that can help make this safer is to resize the filesystem to a size that’s substantially smaller than the target and then to resize it upwards. For instance, if you’ve got 50 MB on a 400 MB filesystem and you want to resize it to 200 MB, you can resize the filesystem to 100 MB, resize the logical volume to 200 MB, and then resize the filesystem again to 200 MB (letting the resizing tool determine the exact size when growing the filesystem).
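
Expressed as commands, that safety-margin approach for the ReiserFS example above might run as follows (resize_reiserfs uses -s to set a target size, and shrinking ReiserFS requires the filesystem to be unmounted):

# umount /usr/local
# resize_reiserfs -s 100M /dev/my_group/usr_local
# lvresize -L200M /dev/my_group/usr_local
# resize_reiserfs /dev/my_group/usr_local
# mount /dev/my_group/usr_local /usr/local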

Of course, logical volume resizing works only as long as free space exists within the volume group. You can check on the volume group’s status with the vgdisplay command, which displays various information on your volume groups. It includes lines that summarize the total, allocated, and free space in your volume group:

  PE Size               4.00 MB
  Total PE              4155
  Alloc PE / Size       3518 / 13.74 GB
  Free PE / Size        637 / 2.49 GB

In this example, the volume group contains 4,155 physical extents (PEs); each PE is 4.00 MB. Of this space, 3,518 PEs (13.74 GB) are allocated and 637 PEs (2.49 GB) are free. If you lack space, you can add more disk space by converting existing partitions or adding a new physical disk, or you can shrink existing logical volumes to make room.

Because shrinking logical volumes is harder than increasing their size, you may want to plan your configuration to leave some free space in your volume group. You can then allocate that space, if and when it becomes necessary, to whichever logical volume needs it.
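
When the time comes to hand out that reserved space, it’s the same two-step grow operation described earlier. For instance, assuming the ReiserFS usr_local volume from the earlier examples and an arbitrary 500 MB increase:

# lvextend -L+500M /dev/my_group/usr_local
# resize_reiserfs /dev/my_group/usr_local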

Roderick W. Smith is the author or co-author of over a dozen books, including Linux in a Windows World and Linux Power Tools. He can be reached at rodsmith@rodsbooks.com.
