Pick Your Pleasure: RAID-0 mdadm Striping or LVM Striping?

A fairly common Linux storage question: Which is better for data striping, RAID-0 or LVM? Let's take a look at these two tools and see how they perform data striping tasks.

Given the price of hard drives and the number of drives you can put into a single system, whether a desktop or a server, a very common question is how to arrange the drives to improve performance. Consequently, a question you often see on various mailing lists is: which is better for data striping, RAID-0 with mdadm or LVM? Many people will correctly point out that the comparison is somewhat misleading because each tool is really intended for a different task, but in the quest for the best possible performance the question persists. In this article the two approaches are contrasted with respect to performance, with some discussion of when each is appropriate. To add at least a little chaos to the situation, some simple IOzone benchmarks of RAID-0 and LVM will be presented.

Data Striping

Data striping is a commonly used technique for improving performance. It breaks data into pieces that are assigned to various physical devices, usually storage devices, in a round-robin fashion. One of the reasons that this concept was developed is that processors are capable of generating IO (reads and writes) much faster than the storage device can store or recall it. But if you can split the data among multiple storage devices then you can perhaps improve IO performance.

The process is very simple. In the case of a write function, the incoming data is split into pieces with the first piece being sent to the first device, the second piece being sent to the second device, and so on until all the devices have received a data piece or all the data has been written. If there are still pieces of data to be written then the next piece is sent to the first device and the process continues (round-robin). Data throughput is improved because the system can send one piece of data to one storage device and immediately move on to the next piece of data and the next storage device without having to wait for the first one to complete. If you like, the data storage is parallelized. In Linux there are two primary ways to achieve this, RAID-0 and LVM.
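The round-robin assignment described above can be sketched with a little shell arithmetic. The chunk size, drive count, and write size below are made-up values chosen purely for illustration:

```shell
# Hypothetical example: which drive receives each chunk of a single write?
chunk=65536        # 64KB chunk (stripe) size -- an assumed value
drives=2           # stripe width: two drives
write_size=262144  # a 256KB write

# Number of chunks in the write, rounding up
nchunks=$(( (write_size + chunk - 1) / chunk ))

# Chunk i always lands on drive (i mod drives) -- the round robin
for i in $(seq 0 $(( nchunks - 1 ))); do
    echo "chunk $i -> drive $(( i % drives ))"
done
```

With these numbers, chunks 0 and 2 go to drive 0 while chunks 1 and 3 go to drive 1, so both drives can be writing at the same time.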

RAID-0 with mdadm

One way to achieve data striping is to use RAID-0. Most people are probably familiar with the concept of RAID (Redundant Array of Inexpensive Disks), which seeks to divide data, possibly replicate it, and distribute it across multiple storage devices. There are various techniques for achieving these goals, and each has a number associated with it, such as RAID-0 or RAID-1. The details of each scheme determine whether the emphasis is on data reliability, on increased throughput, or on both.

RAID-0 is a scheme that improves data throughput by splitting the data evenly between multiple disks (data striping). Figure 1 below, from Wikipedia, shows how data is split across two disks.

Figure 1 – Diagram of RAID-0 layout of Two Drives

In this example, the first data piece, A1, is sent to disk 0, the second piece, A2, is sent to disk 1, and so on.

There are two terms that help define the properties of RAID-0.

  • Stripe Width
    This is the number of stripes that can be written to or read from at the same time. Very simply, it is the number of drives in the RAID-0 group. In Figure 1 the stripe width is 2.
  • Stripe Size
    This refers to the size of the stripe written to each drive. The terms block size, chunk size, stripe length, and granularity are sometimes used in place of stripe size; they are all equivalent.

RAID-0 can, in many cases, help IO performance because of the data striping (parallelism). If the data is smaller than the stripe size (chunk size), it will be written to only one disk, taking no advantage of the striping. But if the data size is greater than the stripe size, read/write performance should increase because more than one disk can service the read or write. Increasing the stripe width adds more disks and can further improve performance, provided the data size is large enough to span the additional drives.

Mdadm (pronounced “m-d-adam”) is a tool for managing software RAID devices in Linux. It has seven modes of operation that cover pretty much any task for which you might use software RAID:

  • Assemble
  • Build
  • Create
  • Monitor
  • Grow
  • Manage
  • Misc

Mdadm is an all-purpose RAID management tool for Linux with a long history.
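As a concrete sketch of the Create mode, the commands below build a two-drive RAID-0 array. The device names /dev/sdb and /dev/sdc are placeholders for whatever drives you actually have; running these commands requires root and destroys any existing data on the drives, so treat this as an outline rather than a recipe:

```shell
# Create a two-drive RAID-0 array with a 64KB chunk (stripe) size.
# /dev/sdb and /dev/sdc are assumed device names -- substitute your own.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sdb /dev/sdc

# Put a file system on the array and mount it
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid0
mount /dev/md0 /mnt/raid0

# Inspect the array (chunk size, stripe width, state)
mdadm --detail /dev/md0
```

Note that `--chunk` takes the stripe size in kilobytes, and `--raid-devices` is the stripe width discussed earlier.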

LVM Striping

LVM has been discussed in a previous article about managing pools of data. As discussed there, it is an extraordinarily useful tool for managing storage. Fundamentally, it allows you to collect physical storage devices into virtual devices (volume groups) that can be divided into logical partitions (logical volumes), which are then used as the devices for file systems. You can add devices to a volume group, remove them, or even move them as needed. Couple these capabilities with file systems that can be resized, and you have a very efficient way of growing or moving file systems as needed.

In addition, LVM is very flexible allowing you to control exactly how the physical devices are combined into the volume groups (VGs) and the logical volumes (LVs). It is this flexibility that allows you to do data striping. In LVM this is called striped mapping. Figure 2 below illustrates this concept.

Figure 2 – Striped Mapping in LVM

Striped mapping maps the physical volumes (typically the drives) to the logical volume that is then used as the basis of the file system. LVM takes the first few stripes from the first physical volume (PV0) and maps them to the first stripes on the logical volume (LV0). Then it takes the first few stripes from the next physical volume (PV1) and maps them to the next stripes in LV0. The next stripes are taken from PV0 and mapped to LV0 and so on until the stripes on PV0 and PV1 are all allocated to the logical volume, LV0.

The advantage of striped mapping is similar to that of RAID-0. When data is read from or written to the file system, if the data is large enough it spans multiple stripes, so both physical devices can be used, improving performance.
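The striped mapping described above is created with the `-i` (stripe count) and `-I` (stripe size) options to lvcreate. The device names, volume names, and sizes below are placeholders for illustration; the commands require root and wipe the drives involved:

```shell
# Build a striped logical volume across two drives.
# /dev/sdb and /dev/sdc are assumed device names.
pvcreate /dev/sdb /dev/sdc            # initialize the physical volumes
vgcreate vg_stripe /dev/sdb /dev/sdc  # combine them into one volume group

# -i 2: stripe across 2 physical volumes (stripe width)
# -I 64: 64KB stripe size
lvcreate -i 2 -I 64 -L 100G -n lv_stripe vg_stripe

# Put a file system on the logical volume and mount it
mkfs.ext3 /dev/vg_stripe/lv_stripe
mkdir -p /mnt/lvm_stripe
mount /dev/vg_stripe/lv_stripe /mnt/lvm_stripe
```

The `-i 2 -I 64` pair plays the same role as `--raid-devices=2 --chunk=64` does for mdadm.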

Contrasting RAID-0 and LVM

From the previous discussions it is obvious that both RAID-0 and LVM achieve improved performance because of data striping across multiple storage devices. So in that respect they are the same. However, LVM and RAID are used for different purposes, and in many cases are used together. Let’s look at both techniques from different perspectives.

The size (capacity) of a RAID-0 group is computed from the smallest disk size among the disks in the group, multiplied by the number of drives in the group. For example, if you have two drives where one drive is 250GB in size and the second drive is 200GB, then the RAID-0 group is 400GB in size, not 450GB. So RAID-0 does not allow you to use the entire space of each drive if they are different sizes.
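The capacity arithmetic from the example above is easy to check with shell arithmetic (sizes in GB):

```shell
# RAID-0 capacity = smallest member size * number of drives
drive1=250   # GB
drive2=200   # GB
drives=2

smallest=$(( drive1 < drive2 ? drive1 : drive2 ))
raid0_capacity=$(( smallest * drives ))
unused=$(( drive1 + drive2 - raid0_capacity ))

echo "RAID-0 capacity: ${raid0_capacity}GB, unused: ${unused}GB"
```

For these two drives the RAID-0 group is 400GB, leaving 50GB of the larger drive unusable.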

On the other hand, LVM allows you to combine all of the space on all of the drives into a single virtual pool. You can use striped mapping across the drives just as you would in RAID-0, with the striped capacity being the same as RAID-0. However, LVM also lets you use the remaining space for additional (non-striped) logical volumes.
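Continuing the 250GB + 200GB example, the leftover 50GB that RAID-0 would waste can be carved into a separate linear logical volume in the same volume group. Device and volume names here are placeholders, and the commands require root:

```shell
# Stripe across both drives, then reclaim the leftover space.
# /dev/sdb (250GB) and /dev/sdc (200GB) are assumed device names.
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc

# Striped LV limited by the smaller drive: 2 x 200GB = 400GB
# (in practice slightly less, since LVM reserves a little metadata space)
lvcreate -i 2 -I 64 -L 400G -n lv_striped vg_data

# RAID-0 would strand the remaining ~50GB; LVM can use it as a linear LV
lvcreate -L 50G -n lv_extra vg_data
```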

In the case of mdadm and software RAID on Linux, you cannot grow a RAID-0 group; you can only grow a RAID-1, RAID-5, or RAID-6 array. This means you can't add drives to an existing RAID-0 group without rebuilding the entire RAID group and restoring all the data from a backup.

However, with LVM you can easily grow a logical volume. What you cannot do is extend the striped mapping itself when adding a drive to an existing striped logical volume, because the new stripes cannot be interleaved with the existing ones. This link explains it fairly concisely:

“In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2 stripe set concatenated with a linear set concatenated with a 4 stripe set.”

Despite not being able to maintain a striped mapping in LVM, you can easily add space to a striped logical volume.
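The extension described in the quote might look like the following sketch, which assumes an existing striped logical volume named lv_stripe in volume group vg_stripe (placeholder names) and a new placeholder drive /dev/sdd; it requires root:

```shell
# Add a third drive to the volume group
pvcreate /dev/sdd
vgextend vg_stripe /dev/sdd

# Grow the LV onto the new drive. "-i 1" makes the new segment linear,
# so it is concatenated after the original 2-way stripe set rather than
# interleaved with it -- exactly the situation the quote describes.
lvextend -i 1 -L +200G /dev/vg_stripe/lv_stripe /dev/sdd

# Then grow the file system to use the new space (ext3 example)
resize2fs /dev/vg_stripe/lv_stripe
```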

This article, written by the original developers of LVM for Linux, presents four advantages of LVM:

  1. Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server
  2. Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible
  3. Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing)
  4. Logical volume snapshots can be created to represent the exact state of the volume at a certain point-in-time, allowing accurate backups to proceed simultaneously with regular system operation

These four advantages point to the fact that LVM is designed for ease of management rather than performance.

Performance Comparison of RAID-0 and LVM Striped Mapping

Comments on "Pick Your Pleasure: RAID-0 mdadm Striping or LVM Striping?"


As I understand it, this article is talking about software RAID as well as LVM. Does anyone actually use software RAID outside of home use or temporary systems? And does anyone actually use RAID 0, software or hardware? If you do, you've doubled (or more, depending on how many drives you use) your chances of losing all your data to a drive failure.

My usual setup is to use hardware RAID (usually 0+1 for small installations and 5 for larger datasets), generally with an HP SmartArray controller (PERC if Dell, but I don't use Dell often), and then use LVM on top of that to provide maximum flexibility.


People talk about hardware raid controllers as though they were a class in their own right. In practice there are two flavours, those with battery backup (BBU) and those without…

With battery backup, i.e. such as your PERC/HP Smart Array, the performance goes through the roof (perhaps 10x the IO/s in some cases) simply because the card “lies” and says that the data has hit the disk when it's really still in the card's cache. However, the card can aggregate the data, batch the writes, reorder the writes to minimise disk seeks and generally vastly improve write speeds (it should have little effect on read speeds).

Without battery backup the hardware cards should make little difference either way (I'm sure any given test will show one better than the other, mind). There is simply no reason for hardware to win over software in general-purpose use (let's assume fewer than 16 hard drives and ignore super-large arrays, etc.). Fundamentally the read speed is set in stone, with perhaps a small amount of re-ordering possible to improve things; the write performance can be vastly improved by re-ordering and batching, but re-ordering writes is simply not possible without a BBU (or at least not safely).

So in general the cheaper hardware cards are of little value other than ease of setup (and the really cheap cards are usually actually software RAID via a custom driver…). And the expensive hardware cards will set your world on fire, but by definition they are expensive.

I have used the HP Smart Array cards many years ago and for our database application they gave us at least a 10x speedup in IO/s simply by flicking the writeback cache on/off… Stonking units


Wow you must be really misinformed.
Hardware raid is just that, and performance is independent of whether it has battery backup or not.

Obviously, when write cache is enabled for the controller (as it should be, to get a real performance benefit), you suffer the risk of data corruption when power is lost to the device.

You are talking about PERC and SmartArray as if they have battery backup by default. This is obviously not the case for either of these, or any other hardware RAID card for that matter. It is sold separately, and available for any major brand.

I can assure you, most users use the write cache even without the battery backup. Datacenters are quite reliable these days you know…


I have always wondered how much difference you would see between hardware raid and software raid. Are there some decent benchmarks covering this?


Back to the article: I think it was biased towards LVM.

“The performance was actually fairly close except for small record sizes (1KB – 8KB) where RAID-0 was much better.”

Like that doesn't count? For the 1K read (KB/s) it was about a 32% increase. I know that most files are not that small. I think that if the OS were RAIDed it would further magnify this difference. The size and dependability of today's drives minimizes the need for LVM.


The command you used to create the mdadm RAID-0 array actually created a RAID-1 array.


The write throughputs of RAID-0 and LVM are very similar. It is the read throughput that really varies between the two. Anyone have an idea why?

Write throughput is pretty flat starting at 8KB blocks. Random reads and writes show a steady improvement with larger blocks. Is this because the random access negates the benefit of Linux's disk cache?

Even though it would have been outside the article's topic, I would like to see the results of a non-striped test on the same hardware to show how much benefit there is from striping, whether RAID-0 or LVM.


ewildgoose: I thought "BBU" stood for "Battlin' Business Units".


stevenjacobs: I think ewildgoose is not misinformed. I used to have an HP server with a SmartArray card and no BBWC (Battery-Backed Write Cache), only a 64MB read cache (stock SmartArray 6i), so I couldn't enable write cache on the controller. The performance was horrible (only a few MB/s when doing heavy IO, like during backups). After adding BBWC the performance was more reasonable (30-40MB/s).