Storage Pools and Snapshots with Logical Volume Management

Logical Volume Management (LVM) on Linux: A great tool for creating pools of storage hardware that can be divided, resized, or used for snapshots.

Storage has gotten so cheap that 1TB bare drives now cost less than $100, with 2TB drives starting to appear. And everyone knows that the amount of data always increases. Put the two together and the conclusion is that you should be prepared to manage your storage for growth.

This is true even for home users with a simple desktop, and it's true for enterprise sites with tons of storage and very tight service-level requirements. Linux has a great tool called LVM that can be used to manage storage effectively, making expansion, snapshots, and other storage tasks fairly easy.

This article will be a review for people already familiar with LVM, but it may also be new to people who haven't used it, or who have used it without knowing (many distributions use it during installation). Perhaps even more important, this article will talk about performing snapshots with LVM. Snapshots are a very powerful feature that can ease backups, disaster recovery, off-site backups, and more. If taking snapshots isn't something you've done before, this article may entice you to try the snapshot feature of LVM.

This article isn't intended to be a complete review or tutorial of LVM. There are a number of topics, such as the interaction of LVM with RAID (including the MD capability of Linux), that aren't covered here. If this is something you want covered in future articles, please let the author know at jlayton _at_

Introduction to LVM

LVM (Logical Volume Management) is an extremely handy tool to learn because it allows storage to be handled in a very easy manner. The Linux implementation of LVM was written a number of years ago by Sistina and was open-sourced to the Linux community. LVM allows you to abstract various pieces of physical storage into groups that can be carved into chunks that form the basis of file systems (virtual partitions if you like). It also allows you to combine physical partitions into groups, resize these groups (grow or shrink), and effectively manage these groups.

There are a large number of good tutorials introducing LVM. The purpose of this article is not to create yet another tutorial (YAT) but to give you a brief overview of LVM because it is so important to effective storage management. The article may be a review for you, or it may cover new material. If it's new material, it should prove a good starting point for further reading and learning. If it's a review, it never hurts to revisit material you're familiar with: it can remind you of the basics of storage management, jog a somewhat forgotten command, or provoke you into writing about your own experiences with LVM.

There is some debate about whether LVM is good or appropriate for a desktop, but with the explosion of inexpensive storage it is fairly easy to find desktops with multiple hard drives. A laptop is perhaps not the best place to use LVM, since laptops are, for the most part, limited to a single internal drive. However, since disks are so cheap and have huge capacities, it is a good idea to at least understand the concepts of LVM in case you would like to add drives to your desktop or server.

LVM Concepts

There are some fundamental concepts in LVM that you need to understand (and master). Figure 1 below illustrates these concepts.

Figure 1 - LVM Concepts

At the bottom are the physical drives; in this case there are two, /dev/sdb and /dev/sdc, each with two partitions. These partitions form the physical basis of LVM. LVM itself sits above the physical devices and is shown in the light yellow area.

The physical partitions map to Physical Volumes (PV's). So in Figure 1 the physical volumes are /dev/sdb1, /dev/sdb2, /dev/sdc1, and /dev/sdc2. From the Physical Volumes (PV's), a Volume Group (VG) is created. A VG can use all or just one of the PV's. In the example in Figure 1, the four PV's are used in a single VG, primary_vg (note: the VG name ends in "_vg" to make it more noticeable). After creating one or more VG's (Volume Groups), a Logical Volume (LV) is created. You have to have at least one Logical Volume (LV) per Volume Group (VG). These LV's are what will be used for creating file systems. In Figure 1, the VG is broken into two LV's: /dev/primary_vg/home_lv and /dev/primary_vg/data_lv (again note the use of "_lv" at the end of the name to better signify an LV). Then on top of these two LV's are the file systems. For /dev/primary_vg/home_lv an ext3 file system is created and mounted as /home. For /dev/primary_vg/data_lv an xfs file system is created and mounted as /data.

So Physical Volumes (PV's) are at the bottom of the stack and are really partitions on a disk (hence the use of the term "physical"). Above the PV's are one or more Volume Groups (VG's). Then you partition each VG into Logical Volumes (LV's). These LV's are used for creating file systems.

One concept that can be useful, even if it's perhaps not the most rigorous way to think about LVM, is to treat the Volume Group (VG) as a "virtual hard drive." It gathers real partitions (PV's) into a virtual drive (the VG). Then you partition the "virtual drive," and those partitions are called Logical Volumes (LV's). You can think of the LV's as "virtual partitions." Then you can create file systems or do whatever else you need with the LV's.

Before getting into the commands for LVM, the first thing you have to do is make sure the partition type for the physical volumes is correct. The partition type should be "8e". When you list the partitions using fdisk, the partitions that will be PV's should show an Id of "8e", which corresponds to "Linux LVM", as shown below.

# fdisk -l /dev/sdb

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       30400   244187968+  8e  Linux LVM
/dev/sdb2           30401       60801   244196032+  8e  Linux LVM

Notice the partition type: it has to be set to "8e" for every partition on the drives being used with LVM.
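
If you have several drives to check, this is easy to script. The sketch below filters the fdisk listing for the "Linux LVM" type; the sample text is just the output shown above pasted into a variable, so on a live system you would pipe "fdisk -l" into the same awk filter instead.

```shell
# Print the device names of partitions whose type is "Linux LVM" (Id 8e).
# The sample text is the fdisk output shown above; on a real system use:
#   fdisk -l /dev/sdb | awk '/Linux LVM/ {print $1}'
fdisk_output='   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       30400   244187968+  8e  Linux LVM
/dev/sdb2           30401       60801   244196032+  8e  Linux LVM'

echo "$fdisk_output" | awk '/Linux LVM/ {print $1}'
```

This prints /dev/sdb1 and /dev/sdb2, the two partitions ready to become PV's.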

Once the partition types are set, using LVM is fairly easy if you know just a few commands (you can always use Google or the man pages for details on the options, which are quite extensive). The first step is to make the physical volumes (PV's) using the command "pvcreate".

# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdc1 /dev/sdc2
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdc2" successfully created

If you like, you can get more extensive information about the PV's with a simple command, "pvdisplay".

# pvdisplay
  "/dev/sdb1" is a new physical volume of "232.88 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               232.88 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               j4XKT6-OI2y-cPMK-YpNR-gmgc-es67-ey4CV2

  "/dev/sdb2" is a new physical volume of "232.88 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb2
  VG Name
  PV Size               232.88 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               NPep2U-1Sod-4cu7-ueih-1G5P-LUPP-GANdub

  "/dev/sdc1" is a new physical volume of "232.88 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               232.88 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               qLC1pM-y07j-3Pq4-waiV-BRzc-vySF-KTesHH

  "/dev/sdc2" is a new physical volume of "232.88 GB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc2
  VG Name
  PV Size               232.88 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               gQn74n-YD1P-JMjE-eKen-4LoG-lpAz-rxRh3Y

Notice that each PV is given a PV UUID, a unique identifier for that PV. Also notice that the output tells you whether the PV has been assigned to a VG (Volume Group). In this case, none of the PV's have been assigned to a VG yet.

The next step is to create the VG (Volume Group) from the PV's using the command "vgcreate".

# vgcreate primary_vg /dev/sdb1 /dev/sdb2 /dev/sdc1 /dev/sdc2
  Volume group "primary_vg" successfully created

Notice that the first argument to vgcreate is the name of the VG (in this case, primary_vg). Similarly to "pvdisplay" there is a "vgdisplay" command.

# vgdisplay
  --- Volume group ---
  VG Name               primary_vg
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               931.52 GB
  PE Size               4.00 MB
  Total PE              238468
  Alloc PE / Size       0 / 0
  Free  PE / Size       238468 / 931.52 GB
  VG UUID               oNH6jk-PBE0-mR0c-aaDi-3Fys-y5SQ-0tVaxX
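
The sizes reported by vgdisplay are easy to sanity-check. The VG is divided into Physical Extents (PE's), 4 MB each here, and the VG size is simply Total PE times PE Size. A quick shell check using the numbers from the output above:

```shell
# vgdisplay reported Total PE = 238468 and PE Size = 4.00 MB.
total_pe=238468
pe_size_mb=4

# VG size in GB: extents * extent-size-in-MB / 1024 MB-per-GB
vg_size_gb=$(awk -v pe="$total_pe" -v sz="$pe_size_mb" \
    'BEGIN { printf "%.2f", pe * sz / 1024 }')
echo "$vg_size_gb GB"    # 931.52 GB
```

This matches the reported VG Size, and also matches the four PV's of 232.88 GB each.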

If you don't happen to remember the name of a VG, or you happen upon a new system to administer, you can use a command called "vgscan" that will tell you what VG's are on the system.

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "primary_vg" using metadata type lvm2

This command will scan all disks for VG's. This is very useful if you've forgotten the layout of the PV's and VG's. It's also HIGHLY recommended that you keep written notes of the LVM layout in case someone else has to pick up administration of the system.

The next step after creating the VG is to partition it into LV's using the command "lvcreate". For this example, only a single LV will be created.

# lvcreate --name home_lv --size 450G primary_vg
  Logical volume "home_lv" created

Notice that the last argument to lvcreate is the VG that the LV is carved from, and the resulting LV appears at the full path /dev/primary_vg/home_lv (this is another reason it is a good idea to keep really good notes). You can get information on the LV using the command "lvdisplay".

# lvdisplay
  --- Logical volume ---
  LV Name                /dev/primary_vg/home_lv
  VG Name                primary_vg
  LV UUID                BXXUyW-z8vS-6tgR-N5SW-tttW-j6Yb-6XEyya
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                450.00 GB
  Current LE             115200
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

Notice that it tells you the VG used for the LV ("primary_vg" in this case). There is a lot of other information in the output as well; it's highly recommended that you look at the man pages and tutorials on the web to learn what it all means.
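
The same extent arithmetic used for the VG applies here: the LV consists of 115200 Logical Extents (LE's) of 4 MB each (the PE size of the underlying VG), which works out to exactly the requested size:

```shell
# Current LE = 115200, extent size = 4 MB; LV size in GB = LE * 4 / 1024
echo $((115200 * 4 / 1024))    # 450
```

This matches the "LV Size" line of 450.00 GB above.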

As with VG's, you can always scan for the Logical Volumes using "lvscan".

# lvscan
  ACTIVE            '/dev/primary_vg/home_lv' [450.00 GB] inherit

The output from the command with the default options is fairly simple, but there is a "verbose" option that will give you more information.

At this point, you are ready to create a file system on this LV. In this case, an ext3 file system will be created.

# mkfs.ext3 /dev/primary_vg/home_lv
mke2fs 1.41.7 (29-June-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
29491200 inodes, 117964800 blocks
5898240 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
3600 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
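
As a quick sanity check, the mke2fs numbers line up with the size of the LV: 117,964,800 blocks of 4,096 bytes is exactly 450 GiB, the 5% reserved for the super user comes to the reported 5,898,240 blocks, and 3,600 block groups of 8,192 inodes each gives the inode count:

```shell
blocks=117964800
block_size=4096

# Total file system size in bytes, and 450 GiB for comparison
echo $((blocks * block_size))          # 483183820800
echo $((450 * 1024 * 1024 * 1024))    # 483183820800 -- the same

# 5% of the blocks are reserved for the super user
echo $((blocks * 5 / 100))            # 5898240

# 3600 block groups with 8192 inodes each
echo $((3600 * 8192))                 # 29491200 inodes
```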

Notice that the "partition" used is the full path to the LV, /dev/primary_vg/home_lv. After the file system is created, you can mount the file system as you would any other.
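
For instance, to mount it at boot you could add an entry to /etc/fstab referencing that same full path; the mount point and options below are just illustrative:

```
/dev/primary_vg/home_lv  /home  ext3  defaults  1 2
```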

Next: LVM Snapshots

Comments on "Storage Pools and Snapshots with Logical Volume Management"


GUI Tools

The command line tools for LVM are not too complicated, but for novices they can be a bit daunting. So, to help, here are three GUI tools for LVM.


EVMS? Eh? This isn't a GUI tool; it is an alternative volume management system written mostly by IBM.


Actually, if you look at the EVMS webpage (linked in the article) you'll see that EVMS may be an alternative VMS, but it was also written (along with a GUI) to work with Linux LVM and Linux MD/Software RAID devices.


Great article! The only thing that I thought was slightly unclear was the heuristic for the snapshot sizing. The referenced method allows for slightly more than 100% change in existing data and requires the VG to contain at least that much free space. While there is definitely value in such a large COW cache, it is not always possible to have that much coverage. For the sake of clarity, it bears mention that the "size" given to the snapshot is the cache for changes made to data existing at the time of the snapshot, and does not have to be nearly as large as the existing data, only large enough to cover the changes incurred during the expected life of the snapshot.
It also may have been good to cover the removal of the snapshot as well, although any user delving into LVM should learn lvremove quickly.

Next up LVM+MD?


This shows an example of combining physical partitions into a VG. Is there a good reason to divide a physical device (/dev/sdb1, /dev/sdb2) and then recombine those partitions into /dev/primary_vg?


This was a good review. Here is another tip if you plan to use LVM effectively. These suggestions will vary based on the distribution that you use, so KNOW that.

I tend to split up the system so that it is easier to manage file systems that tend to grow often. For instance if you take the OS, and separate the DATA, then you might end up with something like this.


and the data you may put in a completely different VG like so.

/dev/data/tmp * some people may choose to do this for systems that do a lot of video and sound editing.

I have experimented, and if you design your servers well then you can utilize your space better and adapt to any storage issues that may present themselves in the future. I try to leave extra unallocated space in each volume group just in case a file system fills up; this allows you to extend the logical volume and the file system that resides on it. Plan which file systems you use carefully! Some file systems (and distribution admin tools) will let you extend the logical volume and file system in one command, while others could turn out to be an ordeal.

Hope that helps someone.


Why doesn't the article mention the performance penalties involved with LVM snapshots? For us this was the main reason not to build storage boxes based on Linux; instead we use OpenSolaris / ZFS for these solutions now. It is expected these problems will be solved when Btrfs is commonly available.


Thanks everyone for the comments and the suggestions; I greatly appreciate them.

Here are some quick replies and/or comments:

The example I used in the article was a bit artificial, but I wanted to show how you could combine partitions or disks into a VG. One advantage of splitting a disk into partitions and then combining them with LVM is that you could have created stripes across the PV's to improve performance. Then again you could have used MD to create RAID groups and also improved performance. I've never dug into the interaction between LVM and MD so I don't know what the "best" configuration is (however you want to define "best").

Cool approach! I think it takes some work to set up everything as you describe, but it does give you much more flexibility than a monolithic approach. Very cool. Thanks!

The article wasn't intended to be an all-encompassing review of LVM with all of the pitfalls and benefits (I think I mentioned that twice, which someone pointed out was bad writing, but oh well, I wanted to make sure my point came across). However, the performance penalty doesn't really come from LVM. It comes from the fact that the file system has no snapshot capability built in. Consequently, you have to rely on LVM to take the snapshot for you. This forces you to "freeze" the file system (so to speak) during the snapshot, and it means the file system takes a performance hit while the snapshot is taken.

I also have to be the bearer of bad tidings: even ZFS suffers a little when a snapshot is taken. The performance penalty is smaller than with LVM, but it's still there.

The only time you won't see a performance penalty due to a snapshot is with a log-based file system, because it is designed for snapshots. That's one of the really cool features of log-based file systems.

I hope that answers some questions. If it doesn't, feel free to repost and we'll figure out the answer together. If the answer gets too long, we can write a quick article about the question and the solution (hint, hint). :)





My point about performance wasn't really about the moment the snapshot is taken but more about afterwards. A logical volume with one or more snapshots chained to it performs a lot slower.


Sorry to be late replying. You said:
-----
The example I used in the article was a bit artificial but I wanted to show how you could combine partitions or disks into a VG. One advantage of splitting a disk into partitions and then combining them with LVM is that you could have created stripes across the PV's to improve performance. Then again you could have used MD to create RAID groups and also improved performance.
-----
Surely you realize that a single disk drive has only one head mechanism. Creating stripes across partitions of one drive can only cause time-consuming head-seek motion. This is definitely not a performance improver.



Sorry, I should have said "disks".

You are correct for a disk (that is, the singular form): striping across partitions of a single drive is not usually a good idea, though there are situations where it might be reasonable.

Multiple disks (that's the plural, indicating more than one) can give you a performance advantage, but that too depends upon the exact configuration.




