<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Pick Your Pleasure: RAID-0 mdadm Striping or LVM Striping?</title>
	<atom:link href="http://www.linux-mag.com/id/7582/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7582/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: foo</title>
		<link>http://www.linux-mag.com/id/7582/#comment-387493</link>
		<dc:creator>foo</dc:creator>
		<pubDate>Sat, 08 Sep 2012 06:28:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-387493</guid>
		<description>Yes, that&#039;s right! It is 9 MB. Wonderful, thanks!</description>
		<content:encoded><![CDATA[<p>Yes, that&#8217;s right! It is 9 MB. Wonderful, thanks!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: zhoux</title>
		<link>http://www.linux-mag.com/id/7582/#comment-269233</link>
		<dc:creator>zhoux</dc:creator>
		<pubDate>Fri, 29 Jun 2012 10:19:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-269233</guid>
		<description>I think the author may have made a mistake about the capacity of RAID-0, which he claimed is computed from the smallest disk in the group multiplied by the number of drives in the group.

From my experiment under Debian Lenny, using virtual disks made from /dev/zero, the result is the sum of all the disks in the group, exactly the same as LVM2. I made two virtual disks of 3 MB and 6 MB, and the resulting RAID-0 capacity was 9 MB rather than 6 MB.

BTW, the mdadm version is v2.6.7.2, Linux kernel 2.6.26</description>
		<content:encoded><![CDATA[<p>I think the author may have made a mistake about the capacity of RAID-0, which he claimed is computed from the smallest disk in the group multiplied by the number of drives in the group.</p>
<p>From my experiment under Debian Lenny, using virtual disks made from /dev/zero, the result is the sum of all the disks in the group, exactly the same as LVM2. I made two virtual disks of 3 MB and 6 MB, and the resulting RAID-0 capacity was 9 MB rather than 6 MB.</p>
<p>BTW, the mdadm version is v2.6.7.2, Linux kernel 2.6.26</p>
]]></content:encoded>
	</item>
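	<!-- A minimal sketch of the capacity check described in the comment above,
	     assuming the loop devices /dev/loop0 and /dev/loop1 and the array name
	     /dev/md9 are free to use; short options are used throughout.

	  # create two backing files of different sizes (3 MB and 6 MB) from /dev/zero
	  dd if=/dev/zero of=/tmp/d1.img bs=1M count=3
	  dd if=/dev/zero of=/tmp/d2.img bs=1M count=6
	  losetup /dev/loop0 /tmp/d1.img
	  losetup /dev/loop1 /tmp/d2.img

	  # build a two-device RAID-0 array from the unequal devices
	  # (-C create, -l RAID level, -n number of devices)
	  mdadm -C /dev/md9 -l 0 -n 2 /dev/loop0 /dev/loop1

	  # check the resulting capacity; the comment above reports the sum (~9 MB)
	  mdadm -D /dev/md9 | grep "Array Size"
	  cat /proc/mdstat
	-->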
	<item>
		<title>By: joe</title>
		<link>http://www.linux-mag.com/id/7582/#comment-14275</link>
		<dc:creator>joe</dc:creator>
		<pubDate>Sat, 29 Oct 2011 22:59:16 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-14275</guid>
		<description>Interesting read. Of course, mdadm and LVM were created for different reasons: mdadm for stability/speed, LVM for maximum flexibility. But I still wonder how close LVM&#039;s striping capability comes to mdadm in terms of performance. Btrfs will include both concepts at the file system level; theoretically it should outperform a classic mdadm/LVM setup?</description>
		<content:encoded><![CDATA[<p>Interesting read. Of course, mdadm and LVM were created for different reasons: mdadm for stability/speed, LVM for maximum flexibility. But I still wonder how close LVM&#8217;s striping capability comes to mdadm in terms of performance. Btrfs will include both concepts at the file system level; theoretically it should outperform a classic mdadm/LVM setup?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: markdean</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7178</link>
		<dc:creator>markdean</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7178</guid>
		<description>&lt;p&gt;As I understand it, this article is talking about software RAID as well as LVM. Does anyone actually use software RAID outside of home use or temporary systems? And does anyone actually use RAID 0, software or hardware? If you do, you&#039;ve doubled (or more, depending on how many drives you use) your chances of losing all your data to a drive failure.&lt;/p&gt;
&lt;p&gt;My usual setup is to use hardware RAID (usually 0+1 for small installations and 5 for larger datasets), generally with an HP SmartArray controller (PERC if Dell, but I don&#039;t use Dell often), and then use LVM on top of that to provide maximum flexibility.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>As I understand it, this article is talking about software RAID as well as LVM. Does anyone actually use software RAID outside of home use or temporary systems? And does anyone actually use RAID 0, software or hardware? If you do, you&#8217;ve doubled (or more, depending on how many drives you use) your chances of losing all your data to a drive failure.</p>
<p>My usual setup is to use hardware RAID (usually 0+1 for small installations and 5 for larger datasets), generally with an HP SmartArray controller (PERC if Dell, but I don&#8217;t use Dell often), and then use LVM on top of that to provide maximum flexibility.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: ewildgoose</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7179</link>
		<dc:creator>ewildgoose</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7179</guid>
		<description>&lt;p&gt;People talk about hardware RAID controllers as though they were a class in their own right. In practice there are two flavours: those with battery backup (BBU) and those without...&lt;/p&gt;
&lt;p&gt;With battery backup, i.e. cards such as your PERC/HP Smart Array, the performance goes through the roof (perhaps 10x the IO/s in some cases) simply because the card &quot;lies&quot; and says that the data has hit the disk when it&#039;s really still in the cache. However, the card can aggregate the data, batch the writes, and reorder the writes to minimise disk seeks, generally vastly improving write speeds (it should have little effect on read speeds).&lt;/p&gt;
&lt;p&gt;Without battery backup, the hardware cards should make little difference either way (I&#039;m sure any given test will show one better than the other, mind). There is simply no reason for hardware to win over software in general-purpose use (let&#039;s assume fewer than 16 hard drives and ignore super-large arrays, etc.). Fundamentally the read speed is set in stone, with perhaps a small amount of re-ordering possible to improve things; the write performance can be vastly improved by re-ordering and batching writes, but re-ordering writes is simply not possible without a BBU (or at least not safely).&lt;/p&gt;
&lt;p&gt;So in general the cheaper hardware cards are of little value other than ease of setup (and the really cheap cards are usually actually software RAID via a custom driver...), and the expensive hardware cards will set your world on fire, but by definition they are expensive.&lt;/p&gt;
&lt;p&gt;I used HP Smart Array cards many years ago, and for our database application they gave us at least a 10x speedup in IO/s simply by flicking the writeback cache on/off... Stonking units.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>People talk about hardware RAID controllers as though they were a class in their own right. In practice there are two flavours: those with battery backup (BBU) and those without&#8230;</p>
<p>With battery backup, i.e. cards such as your PERC/HP Smart Array, the performance goes through the roof (perhaps 10x the IO/s in some cases) simply because the card &#8220;lies&#8221; and says that the data has hit the disk when it&#8217;s really still in the cache. However, the card can aggregate the data, batch the writes, and reorder the writes to minimise disk seeks, generally vastly improving write speeds (it should have little effect on read speeds).</p>
<p>Without battery backup, the hardware cards should make little difference either way (I&#8217;m sure any given test will show one better than the other, mind). There is simply no reason for hardware to win over software in general-purpose use (let&#8217;s assume fewer than 16 hard drives and ignore super-large arrays, etc.). Fundamentally the read speed is set in stone, with perhaps a small amount of re-ordering possible to improve things; the write performance can be vastly improved by re-ordering and batching writes, but re-ordering writes is simply not possible without a BBU (or at least not safely).</p>
<p>So in general the cheaper hardware cards are of little value other than ease of setup (and the really cheap cards are usually actually software RAID via a custom driver&#8230;), and the expensive hardware cards will set your world on fire, but by definition they are expensive.</p>
<p>I used HP Smart Array cards many years ago, and for our database application they gave us at least a 10x speedup in IO/s simply by flicking the writeback cache on/off&#8230; Stonking units.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: stevenjacobs</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7180</link>
		<dc:creator>stevenjacobs</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7180</guid>
		<description>&lt;p&gt;Wow, you must be really misinformed.&lt;br /&gt;
Hardware RAID is just that, and performance is independent of whether it has battery backup or not.&lt;/p&gt;
&lt;p&gt;Obviously, when write cache is enabled for the controller (as it should be to get a real performance benefit), you run the risk of data corruption when power is lost to the device.&lt;/p&gt;
&lt;p&gt;You are talking about PERC and SmartArray as if they have battery backup by default. This is obviously not the case for either of these, or any other hardware RAID card for that matter. It is sold separately, and available for any major brand.&lt;/p&gt;
&lt;p&gt;I can assure you, most users use the write cache even without the battery backup. Datacenters are quite reliable these days, you know...
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Wow, you must be really misinformed.<br />
Hardware RAID is just that, and performance is independent of whether it has battery backup or not.</p>
<p>Obviously, when write cache is enabled for the controller (as it should be to get a real performance benefit), you run the risk of data corruption when power is lost to the device.</p>
<p>You are talking about PERC and SmartArray as if they have battery backup by default. This is obviously not the case for either of these, or any other hardware RAID card for that matter. It is sold separately, and available for any major brand.</p>
<p>I can assure you, most users use the write cache even without the battery backup. Datacenters are quite reliable these days, you know&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: typhoidmary</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7181</link>
		<dc:creator>typhoidmary</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7181</guid>
		<description>&lt;p&gt;I have always wondered how much difference you would see between hardware raid and software raid. Are there some decent benchmarks covering this?
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>I have always wondered how much difference you would see between hardware raid and software raid. Are there some decent benchmarks covering this?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: duncan</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7182</link>
		<dc:creator>duncan</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7182</guid>
		<description>&lt;p&gt;Back to the article: I think it was biased towards LVM.&lt;/p&gt;
&lt;p&gt;&quot;The performance was actually fairly close except for small record sizes (1KB - 8KB) where RAID-0 was much better.&quot;&lt;/p&gt;
&lt;p&gt;Like that doesn&#039;t count? For the 1K read (KB/s) it was about a 32% increase. I know that most files are not that small. I think that if the OS were on the RAID it would further magnify this difference. The size and dependability of today&#039;s drives minimize the need for LVM.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Back to the article: I think it was biased towards LVM.</p>
<p>&#8220;The performance was actually fairly close except for small record sizes (1KB &#8211; 8KB) where RAID-0 was much better.&#8221;</p>
<p>Like that doesn&#8217;t count? For the 1K read (KB/s) it was about a 32% increase. I know that most files are not that small. I think that if the OS were on the RAID it would further magnify this difference. The size and dependability of today&#8217;s drives minimize the need for LVM.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: littlemonkeymojo</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7183</link>
		<dc:creator>littlemonkeymojo</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7183</guid>
		<description>&lt;p&gt;The command you used to create the mdadm RAID-0 array actually created a RAID-1 array.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>The command you used to create the mdadm RAID-0 array actually created a RAID-1 array.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: chadfarmer1</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7184</link>
		<dc:creator>chadfarmer1</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7184</guid>
		<description>&lt;p&gt;The write throughput of RAID-0 and LVM is very similar. It is the read throughput that really varies between the two. Anyone have an idea why?&lt;/p&gt;
&lt;p&gt;Write throughput is pretty flat starting at 8KB blocks. Random reads and writes show a steady improvement with larger blocks. Is this because the random access negates the benefit of Linux&#039;s disk cache?&lt;/p&gt;
&lt;p&gt;Even though it would have been outside the article&#039;s topic, I would like to see the results of a non-striped test on the same hardware to show how much benefit there is from striping, whether RAID-0 or LVM.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>The write throughput of RAID-0 and LVM is very similar. It is the read throughput that really varies between the two. Anyone have an idea why?</p>
<p>Write throughput is pretty flat starting at 8KB blocks. Random reads and writes show a steady improvement with larger blocks. Is this because the random access negates the benefit of Linux&#8217;s disk cache?</p>
<p>Even though it would have been outside the article&#8217;s topic, I would like to see the results of a non-striped test on the same hardware to show how much benefit there is from striping, whether RAID-0 or LVM.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dwolsten</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7185</link>
		<dc:creator>dwolsten</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7185</guid>
		<description>&lt;p&gt;ewildgoose: I thought &quot;BBU&quot; stood for &quot;Battlin&#039; Business Units&quot;.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>ewildgoose: I thought &#8220;BBU&#8221; stood for &#8220;Battlin&#8217; Business Units&#8221;.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: apiset</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7186</link>
		<dc:creator>apiset</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7186</guid>
		<description>&lt;p&gt;stevenjacobs: I think ewildgoose is not misinformed. I used to have an HP server with a SmartArray card and no BBWC (Battery-Backed Write Cache), only a 64MB read cache (stock SmartArray 6i), so I couldn&#039;t enable the write cache on the controller. The performance was horrible (only a few MB/s when doing heavy IO, such as during backups). After adding the BBWC, the performance was more reasonable (30-40MB/s).
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>stevenjacobs: I think ewildgoose is not misinformed. I used to have an HP server with a SmartArray card and no BBWC (Battery-Backed Write Cache), only a 64MB read cache (stock SmartArray 6i), so I couldn&#8217;t enable the write cache on the controller. The performance was horrible (only a few MB/s when doing heavy IO, such as during backups). After adding the BBWC, the performance was more reasonable (30-40MB/s).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: stevenjacobs</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7187</link>
		<dc:creator>stevenjacobs</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7187</guid>
		<description>&lt;p&gt;apiset: I haven&#039;t had this restriction on any controller so far. We use 3Ware, Areca, and PERC, and they all allow write cache without a BBU.&lt;br /&gt;
It&#039;s definitely best practice to install one, but saying that it is required for better performance is just not right.
&lt;/p&gt;
</description>
</description>
		<content:encoded><![CDATA[<p>apiset: I haven&#8217;t had this restriction on any controller so far. We use 3Ware, Areca, and PERC, and they all allow write cache without a BBU.<br />
It&#8217;s definitely best practice to install one, but saying that it is required for better performance is just not right.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7188</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7188</guid>
		<description>&lt;p&gt;Hmm... I posted this yesterday but it didn&#039;t show up...&lt;/p&gt;
&lt;p&gt;I fixed the raid-1 goof. Thanks for the catch, typhoidmary. I checked the system and it was RAID-0; the raid-1 in the command line was a typo.&lt;/p&gt;
&lt;p&gt;Jeff
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Hmm&#8230; I posted this yesterday but it didn&#8217;t show up&#8230;</p>
<p>I fixed the raid-1 goof. Thanks for the catch, typhoidmary. I checked the system and it was RAID-0; the raid-1 in the command line was a typo.</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7189</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7189</guid>
		<description>&lt;p&gt;@duncan,&lt;/p&gt;
&lt;p&gt;You fell into the trap :)  The benchmarks were only single runs; consequently there is no measure of the variability in the runs. So 30% might be a huge difference, but the variability could be larger than that. The problem is that we just don&#039;t know, which makes it difficult to compare (actually, I think it&#039;s impossible).&lt;/p&gt;
&lt;p&gt;But I included the results for fun.&lt;/p&gt;
&lt;p&gt;Jeff
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>@duncan,</p>
<p>You fell into the trap :)  The benchmarks were only single runs; consequently there is no measure of the variability in the runs. So 30% might be a huge difference, but the variability could be larger than that. The problem is that we just don&#8217;t know, which makes it difficult to compare (actually, I think it&#8217;s impossible).</p>
<p>But I included the results for fun.</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mteixeira</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7190</link>
		<dc:creator>mteixeira</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7190</guid>
		<description>&lt;p&gt;Hardware RAID 0 should make a difference over software:&lt;/p&gt;
&lt;p&gt;a) CPU usage will be lower (mainly with ATA/SATA disks)&lt;br /&gt;
b) bandwidth - the HW controller is, in principle, capable of communicating independently with the disks and providing full speed (limited by the bus) to the chipset
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Hardware RAID 0 should make a difference over software:</p>
<p>a) CPU usage will be lower (mainly with ATA/SATA disks)<br />
b) bandwidth &#8211; the HW controller is, in principle, capable of communicating independently with the disks and providing full speed (limited by the bus) to the chipset</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: iiordanov</title>
		<link>http://www.linux-mag.com/id/7582/#comment-7191</link>
		<dc:creator>iiordanov</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7582/#comment-7191</guid>
		<description>&lt;p&gt;Thanks for the article, it was interesting. However, there is one point that needs to be corrected, because it certainly confused me.&lt;/p&gt;
&lt;p&gt;According to the lvcreate manpage, the &quot;-i&quot; option indicates across how many physical volumes (within the volume group) to scatter the logical volume. Hence, in your case, the number &quot;2&quot; that you gave to &quot;-i&quot; is not arbitrary at all. In fact, this is the only number that makes sense, since you had two physical volumes in your volume group.&lt;/p&gt;
&lt;p&gt;Here is the relevant part of the manpage:&lt;/p&gt;
&lt;p&gt;       -i, --stripes Stripes&lt;br /&gt;
              Gives the number of stripes.  This is equal to the number of physical volumes to scatter the logical volume.&lt;/p&gt;
&lt;p&gt;Thanks,&lt;br /&gt;
Iordan
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Thanks for the article, it was interesting. However, there is one point that needs to be corrected, because it certainly confused me.</p>
<p>According to the lvcreate manpage, the &#8220;-i&#8221; option indicates across how many physical volumes (within the volume group) to scatter the logical volume. Hence, in your case, the number &#8220;2&#8221; that you gave to &#8220;-i&#8221; is not arbitrary at all. In fact, this is the only number that makes sense, since you had two physical volumes in your volume group.</p>
<p>Here is the relevant part of the manpage:</p>
<p>       -i, --stripes Stripes<br />
              Gives the number of stripes.  This is equal to the number of physical volumes to scatter the logical volume.</p>
<p>Thanks,<br />
Iordan</p>
]]></content:encoded>
	</item>
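	<!-- A minimal sketch of the striped lvcreate invocation discussed in the
	     comment above; the device names (/dev/sdb, /dev/sdc), the volume group
	     and volume names (vg_test, lv_stripe), and the sizes are illustrative
	     assumptions, not values taken from the article.

	  # register two spare disks as LVM physical volumes and group them
	  pvcreate /dev/sdb /dev/sdc
	  vgcreate vg_test /dev/sdb /dev/sdc

	  # stripe the logical volume across both PVs:
	  #   -i 2   number of stripes, equal to the number of PVs in the group
	  #   -I 64  stripe size in KB
	  #   -L 1G  logical volume size
	  lvcreate -i 2 -I 64 -L 1G -n lv_stripe vg_test

	  # show the segment layout, including the stripe count
	  lvdisplay -m /dev/vg_test/lv_stripe
	-->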
</channel>
</rss>