A fairly common Linux storage question: which is better for data striping, RAID-0 or LVM? Let's take a look at the two tools and see how each performs when striping data.
The previous section contrasted RAID-0 and LVM from a conceptual perspective, but the question of which one is faster still remains (even if it isn't a particularly good question). This section presents a performance comparison of RAID-0 (using mdadm) and LVM striping. However, in the interest of time it doesn't follow our good benchmarking guidelines (a full set of benchmarks would take over 160 hours). IOzone is used as the benchmark.
IOzone is run in two ways: (1) throughput and (2) IOPS. Only the write, read, random write, and random read tests are run, but over a range of record sizes. Unlike tests from previous articles, each test was run only once, using ext4. The test system ran a stock CentOS 5.3 distribution but with a 2.6.30 kernel (from kernel.org), and e2fsprogs was upgraded to the latest version as of this writing, 1.41.9. The tests were run on the following system:
- GigaByte MAA78GM-US2H motherboard
- An AMD Phenom II X4 920 CPU
- 8GB of memory
- Linux 2.6.30 kernel
- The OS and boot drive are on an IBM DTLA-307020 (20GB drive at Ultra ATA/100)
- /home is on a Seagate ST1360827AS
- Two Seagate ST3500641AS-RK drives (16MB cache each) for testing
Both test drives, /dev/sdb and /dev/sdc, were used for all of the tests.
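The partitioning of the test drives isn't shown here; each drive needs a single partition spanning the whole disk, which might be done along these lines (a sketch; the sfdisk approach is an assumption, not the article's actual commands):

# Sketch: create one Linux partition covering each test drive
echo ',,L' | sfdisk /dev/sdb
echo ',,L' | sfdisk /dev/sdc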
To help improve run times, three threads were used on the quad-core system, leaving the fourth core for the software RAID or LVM processing. So in the IOzone command lines the "-t 3" option means that three threads were used. In addition, each thread had a size of 3GB, resulting in a total data size of 9GB. The important point is that the total amount of data is larger than memory (9GB > 8GB).
For the throughput tests, the following IOzone command line was used.
./iozone -Rb spreadsheet_ext4_write_and_read_1K_1.wks -i 0 -i 1 -i 2 -e -+n -r 1k -s 3G -t 3 > output_ext4_write_and_read_1K_1.txt
The command line is shown with a 1KB record size; the "-r" option (and the output file names) were changed for each record size tested.
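Stepping through the record sizes is easy to script. The following is only a sketch (the list of record sizes is an assumption), not the exact wrapper used for the article:

#!/bin/bash
# Sketch: run the throughput test once per record size
for rs in 1k 4k 8k 32k 64k 128k 1m; do
    ./iozone -Rb spreadsheet_ext4_write_and_read_${rs}_1.wks \
             -i 0 -i 1 -i 2 -e -+n -r ${rs} -s 3G -t 3 \
             > output_ext4_write_and_read_${rs}_1.txt
done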
The IOPS tests used the following IOzone command line. The only difference from the throughput command is the "-O" option, which reports results in operations per second rather than KB/s.
./iozone -Rb spreadsheet_ext4_write_and_read_1K_1.wks -i 0 -i 1 -i 2 -e -O -+n -r 1k -s 3G -t 3 > output_ext4_write_and_read_1K_1.txt
The RAID-0 array was constructed relying on defaults as shown in a previous article. The command used to construct the array was the following.
[root@test64 laytonjb]# mdadm --create --verbose /dev/md0 --level raid0 --raid-devices=2 /dev/sdb1 /dev/sdc1
The "chunk size" (the amount of data written to one drive before moving to the next, sometimes called the stripe unit) defaults to 64KB.
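The file system creation for the RAID array isn't repeated here, but it would follow the same pattern used for the LVM volume below (a sketch with an assumed mount point, not the verbatim commands):

# Sketch: create and mount an ext4 file system on the RAID-0 array
/sbin/mkfs -t ext4 /dev/md0
mkdir -p /mnt/raid0
mount /dev/md0 /mnt/raid0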
To contrast RAID-0 and LVM, the two need to be configured as similarly as possible. This is a bit more difficult with LVM since it works differently than RAID. The basics of LVM were discussed in a previous article. After the physical volumes (PVs) were created (see the sketch below), they were grouped into a single volume group.
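The pvcreate step itself isn't shown, but it would look something like this, assuming the same two partitions used for the RAID tests:

# Sketch: initialize the test partitions as LVM physical volumes
/usr/sbin/pvcreate /dev/sdb1 /dev/sdc1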
[root@test64 laytonjb]# /usr/sbin/vgcreate primary_vg /dev/sdb1 /dev/sdc1
Volume group "primary_vg" successfully created
[root@test64 laytonjb]# /usr/sbin/vgdisplay
--- Volume group ---
VG Name primary_vg
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 931.52 GB
PE Size 4.00 MB
Total PE 238468
Alloc PE / Size 0 / 0
Free PE / Size 238468 / 931.52 GB
VG UUID yjkNSQ-416l-f5Bt-RZLt-38NH-8LT6-QfrjeJ
The key to stripe mapping in LVM is how the logical volume is created. For this article the number of stripes ("-i" option) was set to 2 to match the two drives, and the stripe size ("-I" option) was set to 64KB to match the RAID-0 chunk size. The total size of the LV was arbitrarily chosen to be 465GB. The command line for creating the LV was the following.
[root@test64 laytonjb]# /usr/sbin/lvcreate -i2 -I64 --size 465G -n test_stripe_volume primary_vg /dev/sdb1 /dev/sdc1
Logical volume "test_stripe_volume" created
[root@test64 laytonjb]# /usr/sbin/lvdisplay
--- Logical volume ---
LV Name /dev/primary_vg/test_stripe_volume
VG Name primary_vg
LV UUID igTRtk-wcqn-YVzR-HNQh-Ki2b-HznC-HcW589
LV Write Access read/write
LV Status available
# open 0
LV Size 465.00 GB
Current LE 119040
Read ahead sectors auto
- currently set to 512
Block device 253:0
Then the file system was created on the logical volume test_stripe_volume.
[root@test64 laytonjb]# /sbin/mkfs -t ext4 /dev/primary_vg/test_stripe_volume
mke2fs 1.41.9 (22-Aug-2009)
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
30474240 inodes, 121896960 blocks
6094848 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
3720 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
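Before running IOzone the new file system has to be mounted; a sketch (the mount point is an assumption):

# Sketch: mount the striped logical volume
mkdir -p /mnt/lvm_stripe
mount /dev/primary_vg/test_stripe_volume /mnt/lvm_stripe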
RAID-0 and LVM Test Results
The two tables below present the throughput and IOPS results for both RAID-0 and LVM. Table 1 contains the throughput results.
Table 1 – Throughput Tests
Table 2 below contains the IOPS results for both RAID-0 and LVM.
Table 2 – IOPS Tests
Even though the tests did not follow our good benchmarking habits, which really limits our ability to draw conclusions, it is interesting to do a quick comparison.
- For both RAID-0 and LVM, as the record size increases, write throughput performance increases slightly and read performance remains about the same. Both random read and random write performance increases fairly dramatically as the record size increases.
- For both RAID-0 and LVM, as the record size increases, write IOPS and read IOPS decrease dramatically (this is logical: for a fixed amount of data, larger records mean fewer operations, which lowers the IOPS). The same is true for random read IOPS and random write IOPS.
- Finally, while it is almost impossible to justify comparing RAID-0 and LVM performance directly, human nature pushes us to do it anyway. It appears as though RAID-0 offers a bit better throughput performance than LVM, particularly at the very small record sizes. The same is true for IOPS.
A fairly common question people ask is whether it is better to use data striping with RAID-0 (mdadm) or LVM. In reality the two are different concepts: RAID is all about performance and/or data reliability, while LVM is about storage and file system management. Ideally you can combine the two, as sketched below, but that's the subject of another article or two.
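For the curious, combining them means layering LVM on top of an md RAID array, roughly along these lines (a sketch only; the volume group and logical volume names are hypothetical and the size is illustrative):

# Sketch: build an md RAID-0 array, then manage it with LVM
mdadm --create --verbose /dev/md0 --level raid0 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0                                  # the array becomes a single physical volume
vgcreate combined_vg /dev/md0                      # hypothetical volume group name
lvcreate --size 465G -n combined_lv combined_vg    # striping is handled by md, not LVM
mkfs -t ext4 /dev/combined_vg/combined_lv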
To try to answer the original question of which one is better, a quick test was run with IOzone. In the interest of time we did not follow our good benchmarking practices, but the results give some feel for the performance of both approaches. The performance was actually fairly close except at small record sizes (1KB to 8KB), where RAID-0 was much better.