<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/">
<channel>
	<title>Comments on: Nested-RAID: The Triple Lindy</title>
	<atom:link href="http://www.linux-mag.com/id/7932/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7932/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: Andrew Dodd</title>
		<link>http://www.linux-mag.com/id/7932/#comment-11821</link>
		<dc:creator>Andrew Dodd</dc:creator>
		<pubDate>Tue, 25 Oct 2011 17:53:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-11821</guid>
		<description>Why not RAID 6 within RAID 6? Or RAID 66EE inside RAID 66EE? Nested in a 3-D manner and controlled by a low-cost 3-D processor such as those found on graphics cards?</description>
		<content:encoded><![CDATA[<p>Why not RAID 6 within RAID 6? Or RAID 66EE inside RAID 66EE? Nested in a 3-D manner and controlled by a low-cost 3-D processor such as those found on graphics cards?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: rrohbeck</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9038</link>
		<dc:creator>rrohbeck</dc:creator>
		<pubDate>Wed, 02 Mar 2011 22:27:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9038</guid>
		<description>Some of these make a lot of sense when you consider that bus or controller throughput is often the bottleneck when you run large arrays. In many of our systems we run 16 drives per controller in two RAID5 or RAID6 groups of 8 each, with up to 4 controllers, and everything striped together in software. That would make the systems RAID500 or RAID600.
I also run a file server with RAID55. Yeah, that&#039;s overkill, but I&#039;m protected not only from a dual drive failure but also from a failed array [connection] or power loss on one array. That server is mostly read from, so performance isn&#039;t an issue.</description>
		<content:encoded><![CDATA[<p>Some of these make a lot of sense when you consider that bus or controller throughput is often the bottleneck when you run large arrays. In many of our systems we run 16 drives per controller in two RAID5 or RAID6 groups of 8 each, with up to 4 controllers, and everything striped together in software. That would make the systems RAID500 or RAID600.<br />
I also run a file server with RAID55. Yeah, that&#8217;s overkill, but I&#8217;m protected not only from a dual drive failure but also from a failed array [connection] or power loss on one array. That server is mostly read from, so performance isn&#8217;t an issue.</p>
]]></content:encoded>
	</item>
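As a back-of-the-envelope check, here is a small Python sketch of the RAID600 layout this comment describes, assuming the full four controllers, each running two 8-drive RAID6 groups, with all eight groups striped together in software. The 4 TB drive size is an assumption for illustration; the comment does not give one.

```python
# Model of the described RAID600: four controllers, each with two
# 8-drive hardware RAID6 groups, all striped together in software.
CONTROLLERS = 4
GROUPS_PER_CONTROLLER = 2
DRIVES_PER_GROUP = 8
DRIVE_TB = 4  # assumed; the comment gives no drive size

groups = CONTROLLERS * GROUPS_PER_CONTROLLER
total_drives = groups * DRIVES_PER_GROUP
usable_tb = groups * (DRIVES_PER_GROUP - 2) * DRIVE_TB  # RAID6: 2 parity drives

# The outer RAID 0 adds no redundancy: any single group losing a
# third drive takes the whole stripe down.
print(f"{total_drives} drives, {usable_tb} TB usable",
      f"({usable_tb / (total_drives * DRIVE_TB):.0%} efficiency)")
print(f"always survives 2 failures; up to {2 * groups} if spread",
      "across groups; 3 in any one group is fatal")
```

The outer RAID 0 layer adds capacity and bandwidth but no redundancy, which is why fault tolerance stays at two failures per RAID6 group no matter how many controllers are striped together.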
	<item>
		<title>By: rikjwells</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9037</link>
		<dc:creator>rikjwells</dc:creator>
		<pubDate>Wed, 02 Mar 2011 21:19:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9037</guid>
		<description>When considering the addition of a controller-oriented layer, could there not be additional redundancy introduced by mirroring the controllers? RAID 101, perhaps?</description>
		<content:encoded><![CDATA[<p>When considering the addition of a controller-oriented layer, could there not be additional redundancy introduced by mirroring the controllers? RAID 101, perhaps?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: rikjwells</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9036</link>
		<dc:creator>rikjwells</dc:creator>
		<pubDate>Wed, 02 Mar 2011 21:13:41 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9036</guid>
		<description>I would think you might be able to arrange the drives in a RAID 01, building similarly-sized RAID 0 groups to mirror, re-use being a higher priority for this application than performance or maintainability ;-)</description>
		<content:encoded><![CDATA[<p>I would think you might be able to arrange the drives in a RAID 01, building similarly-sized RAID 0 groups to mirror, re-use being a higher priority for this application than performance or maintainability ;-)</p>
]]></content:encoded>
	</item>
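If the goal is re-using mismatched old drives, the "similarly-sized RAID 0 groups" step can be sketched as a balancing problem: split the drives into two groups with near-equal totals, because the RAID 01 mirror can only be as large as the smaller group. A minimal Python sketch with invented drive sizes:

```python
# Split a mixed bag of old drives into two RAID 0 groups of similar
# total size; the RAID 01 mirror's capacity is the smaller group's
# total, so balanced groups waste the least space.
# Drive sizes (GB) are made up for illustration.
drives = [1000, 750, 500, 500, 320, 250, 160]

groups = ([], [])
totals = [0, 0]
# Greedy: place the biggest remaining drive into the smaller group.
for size in sorted(drives, reverse=True):
    i = 0 if totals[0] <= totals[1] else 1
    groups[i].append(size)
    totals[i] += size

print("group A:", groups[0], "=", totals[0], "GB")
print("group B:", groups[1], "=", totals[1], "GB")
print("mirrored capacity:", min(totals), "GB")
```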
	<item>
		<title>By: arenasa</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9035</link>
		<dc:creator>arenasa</dc:creator>
		<pubDate>Wed, 02 Mar 2011 20:31:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9035</guid>
		<description>About the cons for RAID 100... Actually, I can lose 6 disks without losing data access... if I am lucky enough to lose just one disk of every RAID 1 array... is that correct?</description>
		<content:encoded><![CDATA[<p>About the cons for RAID 100&#8230; Actually, I can lose 6 disks without losing data access&#8230; if I am lucky enough to lose just one disk of every RAID 1 array&#8230; is that correct?</p>
]]></content:encoded>
	</item>
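The claim is checkable by brute force. Assuming the twelve-drive RAID 100 discussed in the article is built from six two-drive RAID 1 pairs (the stripe layers above them add no redundancy, so they don't change the analysis), the array survives a set of failed disks exactly when every mirror pair keeps at least one member:

```python
# Brute-force check for a twelve-drive RAID 100, assumed here to be
# six 2-drive RAID 1 pairs under two layers of striping.
from itertools import combinations

PAIRS = 6
disks = range(2 * PAIRS)  # disk d belongs to mirror pair d // 2

def survives(failed):
    # Fatal iff both members of any mirror pair have failed.
    return all(not {2 * p, 2 * p + 1} <= failed for p in range(PAIRS))

for k in (2, 6, 7):
    total = ok = 0
    for failed in combinations(disks, k):
        total += 1
        ok += survives(set(failed))
    print(f"{k} failures: survives {ok}/{total} cases")
```

So the reading above is right: with luck, six failures (one per mirror) are survivable and a seventh never is, while two unlucky failures landing in the same pair are already fatal.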
	<item>
		<title>By: amadensor</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9032</link>
		<dc:creator>amadensor</dc:creator>
		<pubDate>Wed, 02 Mar 2011 16:03:50 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9032</guid>
		<description>I am doing something similar to nested RAID, but not quite. I use RAID-1 arrays for redundancy, but then I use LVM to do the striping. I can stripe across more drives than the RAID controller will handle, negating the need for RAID 100 (or 100000000) while still retaining all of the benefits, and gaining the ability to throw more storage at it in the future if needed.</description>
		<content:encoded><![CDATA[<p>I am doing something similar to nested RAID, but not quite. I use RAID-1 arrays for redundancy, but then I use LVM to do the striping. I can stripe across more drives than the RAID controller will handle, negating the need for RAID 100 (or 100000000) while still retaining all of the benefits, and gaining the ability to throw more storage at it in the future if needed.</p>
]]></content:encoded>
	</item>
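A sketch of that arrangement, assuming md RAID 1 pairs with LVM striping on top. It is untested and the device names are hypothetical; the mdadm, pvcreate, vgcreate, and lvcreate invocations are standard, and growing later means creating another mirror and adding it with vgextend:

```python
# Sketch (untested) of RAID-1 pairs under an LVM stripe.
# Device names are hypothetical; this would destroy data on them.
import subprocess

MIRRORS = [("/dev/md0", ["/dev/sda", "/dev/sdb"]),
           ("/dev/md1", ["/dev/sdc", "/dev/sdd"]),
           ("/dev/md2", ["/dev/sde", "/dev/sdf"])]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One md RAID 1 mirror per pair of disks.
for md, members in MIRRORS:
    run(["mdadm", "--create", md, "--level=1",
         "--raid-devices=2", *members])

# LVM does the striping across the mirrors (64 KiB stripe size).
pvs = [md for md, _ in MIRRORS]
run(["pvcreate", *pvs])
run(["vgcreate", "vg_data", *pvs])
run(["lvcreate", "-i", str(len(pvs)), "-I", "64",
     "-l", "100%FREE", "-n", "lv_data", "vg_data"])
```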
	<item>
		<title>By: davidbrown</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9031</link>
		<dc:creator>davidbrown</dc:creator>
		<pubDate>Wed, 02 Mar 2011 14:18:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9031</guid>
		<description>RAID 100 does have some practical use - it allows larger-scale deployments with more disks than you can achieve using RAID 10 on a single controller.  But it is not a question of performance - RAID 0 takes almost no processing power for either a host processor or a hardware RAID card.  Your twelve-drive RAID 100 layout using 3 cards with 4 disks each will give worse performance than RAID 10 on a single card with 12 disks.  (For a twelve-drive RAID 10, the fastest solution is to run all the disks as individual disks and use Linux software RAID 10 with the far layout.  However, running RAID 10 on hardware cards might be slightly faster when degraded or rebuilding.)  But if you want a 48-drive RAID 10 setup, you can&#039;t get big enough RAID cards - therefore you use RAID 100.

RAID 160 is an interesting arrangement - but again, it is mainly about scalability.  It has no real-world advantages over RAID 16 except when you want a very large number of drives.  I haven&#039;t heard of RAID 16 being used in practice - if RAID 15 doesn&#039;t give you enough protection, you probably want redundant clustered file systems anyway.  While the RAID 5 parity calculation is easy for modern processors, RAID 6 has not-insignificant costs in processor time and memory bandwidth - it is worth the cost when comparing RAID 6 to RAID 5, but it&#039;s a different balance for RAID 16 vs. RAID 15.

The idea of using cards that support RAID 16 directly is nice in theory - but do you actually know of any cards that support RAID 16, or even RAID 15?  I have never heard of any.

Generally speaking, triple-layer RAID is about scalability, not extra redundancy or performance (as compared to a two-layer solution).  The same applies to a lot of two-layer RAIDs - RAID 11, for example, is meaningless - you are better off using a single 4-way mirror RAID 1.</description>
		<content:encoded><![CDATA[<p>RAID 100 does have some practical use &#8211; it allows larger-scale deployments with more disks than you can achieve using RAID 10 on a single controller.  But it is not a question of performance &#8211; RAID 0 takes almost no processing power for either a host processor or a hardware RAID card.  Your twelve-drive RAID 100 layout using 3 cards with 4 disks each will give worse performance than RAID 10 on a single card with 12 disks.  (For a twelve-drive RAID 10, the fastest solution is to run all the disks as individual disks and use Linux software RAID 10 with the far layout.  However, running RAID 10 on hardware cards might be slightly faster when degraded or rebuilding.)  But if you want a 48-drive RAID 10 setup, you can&#8217;t get big enough RAID cards &#8211; therefore you use RAID 100.</p>
<p>RAID 160 is an interesting arrangement &#8211; but again, it is mainly about scalability.  It has no real-world advantages over RAID 16 except when you want a very large number of drives.  I haven&#8217;t heard of RAID 16 being used in practice &#8211; if RAID 15 doesn&#8217;t give you enough protection, you probably want redundant clustered file systems anyway.  While the RAID 5 parity calculation is easy for modern processors, RAID 6 has not-insignificant costs in processor time and memory bandwidth &#8211; it is worth the cost when comparing RAID 6 to RAID 5, but it&#8217;s a different balance for RAID 16 vs. RAID 15.</p>
<p>The idea of using cards that support RAID 16 directly is nice in theory &#8211; but do you actually know of any cards that support RAID 16, or even RAID 15?  I have never heard of any.</p>
<p>Generally speaking, triple-layer RAID is about scalability, not extra redundancy or performance (as compared to a two-layer solution).  The same applies to a lot of two-layer RAIDs &#8211; RAID 11, for example, is meaningless &#8211; you are better off using a single 4-way mirror RAID 1.</p>
]]></content:encoded>
	</item>
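For reference, the far-layout software RAID 10 mentioned above is a single md array rather than a nested one; a sketch with hypothetical device names:

```python
# The twelve-drive md RAID 10 "far" layout (f2 = far, two copies)
# suggested above; device names are hypothetical.
import subprocess

disks = [f"/dev/sd{c}" for c in "abcdefghijkl"]
subprocess.run(["mdadm", "--create", "/dev/md0", "--level=10",
                "--layout=f2", "--raid-devices=12", *disks],
               check=True)
```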
	<item>
		<title>By: dragonwisard</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9030</link>
		<dc:creator>dragonwisard</dc:creator>
		<pubDate>Wed, 02 Mar 2011 14:15:01 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9030</guid>
		<description>How can I maximize storage efficiency and redundancy across asymmetrical disks? I have a heterogeneous bunch of old drives (many pulled from dead systems) that I would like to attach to my NAS. I&#039;ve seen proprietary solutions like Drobo, but is there anything free or open source?</description>
		<content:encoded><![CDATA[<p>How can I maximize storage efficiency and redundancy across asymmetrical disks? I have a heterogeneous bunch of old drives (many pulled from dead systems) that I would like to attach to my NAS. I&#8217;ve seen proprietary solutions like Drobo, but is there anything free or open source?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Ken Hess</title>
		<link>http://www.linux-mag.com/id/7932/#comment-9029</link>
		<dc:creator>Ken Hess</dc:creator>
		<pubDate>Wed, 02 Mar 2011 13:36:18 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=7932#comment-9029</guid>
		<description>I&#039;ll bet that hardly anyone gets that Triple Lindy thing. A reference from way back and way geeky. Good job.</description>
		<content:encoded><![CDATA[<p>I&#8217;ll bet that hardly anyone gets that Triple Lindy thing. A reference from way back and way geeky. Good job.</p>
]]></content:encoded>
	</item>
</channel>
</rss>