<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Intro to Nested-RAID: RAID-01 and RAID-10</title>
	<atom:link href="http://www.linux-mag.com/id/7928/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7928/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: Serhiy</title>
		<link>http://www.linux-mag.com/id/7928/#comment-618145</link>
		<dc:creator>Serhiy</dc:creator>
		<pubDate>Mon, 03 Dec 2012 01:00:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-618145</guid>
		<description>In this map
Disk 1    Disk 2    Disk 3    Disk 4
------    ------    ------    ------
  A1        A1        A2        A2
  A3        A3        A4        A4
  A5        A5        A6        A6
  A7        A7        A8        A8
  ..        ..        ..        ..
  A2        A2        A1        A1
  A4        A4        A3        A3
  A6        A6        A5        A5
  A8        A8        A7        A7
  ..        ..        ..        ..
We have
Capacity = (num of disks / num of copies) * capacity of a single disk
Capacity = (4/4) * capacity of a single disk = the capacity of a single disk, and of course we can lose not just 2 but all 3 disks and still not lose any info

Maybe the far map should instead be

Disk 1    Disk 2    Disk 3    Disk 4
------    ------    ------    ------
  A1        A2        A3        A4
  A5        A6        A7        A8
  ..        ..        ..        ..
  ..        A1        A2        A3
  A4        A5        A6        A7
  A8        ..        ..        ..

Then losing certain 2-drive combinations loses some info.
Ex: Disk2+Disk3 -&gt; lost A2 and A6
Disk1+Disk3 -&gt; all data OK

PS: The last map gives 4/5 of 1 disk's capacity.

Regards
Donserg</description>
		<content:encoded><![CDATA[<p>In this map<br />
Disk 1    Disk 2    Disk 3    Disk 4<br />
------    ------    ------    ------<br />
  A1        A1        A2        A2<br />
  A3        A3        A4        A4<br />
  A5        A5        A6        A6<br />
  A7        A7        A8        A8<br />
  ..        ..        ..        ..<br />
  A2        A2        A1        A1<br />
  A4        A4        A3        A3<br />
  A6        A6        A5        A5<br />
  A8        A8        A7        A7<br />
  ..        ..        ..        ..<br />
We have<br />
Capacity = (num of disks / num of copies) * capacity of a single disk<br />
Capacity = (4/4) * capacity of a single disk = the capacity of a single disk, and of course we can lose not just 2 but all 3 disks and still not lose any info</p>
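<p>As a sanity check, the capacity arithmetic can be written as a tiny Python sketch (my own illustration, not from the article):</p>

```python
def capacity(num_disks, copies_per_chunk, single_disk=1.0):
    # Every chunk is stored copies_per_chunk times across the array,
    # so the usable space is the raw space divided by the copy count.
    return (num_disks / copies_per_chunk) * single_disk

# The map above keeps 4 copies of each chunk on 4 disks:
print(capacity(4, 4))  # -> 1.0, i.e. the capacity of a single disk
# The usual 2-copy layout on 4 disks gives half the raw space:
print(capacity(4, 2))  # -> 2.0
```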
<p>Maybe the far map should instead be:</p>
<p>Disk 1    Disk 2    Disk 3    Disk 4<br />
------    ------    ------    ------<br />
  A1        A2        A3        A4<br />
  A5        A6        A7        A8<br />
  ..        ..        ..        ..<br />
  ..        A1        A2        A3<br />
  A4        A5        A6        A7<br />
  A8        ..        ..        ..</p>
<p>Then losing certain 2-drive combinations loses some info.<br />
Ex: Disk2+Disk3 -&gt; lost A2 and A6<br />
Disk1+Disk3 -&gt; all data OK</p>
<p>PS: The last map gives 4/5 of 1 disk's capacity.</p>
<p>Regards<br />
Donserg</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: sam brown</title>
		<link>http://www.linux-mag.com/id/7928/#comment-207759</link>
		<dc:creator>sam brown</dc:creator>
		<pubDate>Thu, 03 May 2012 11:34:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-207759</guid>
		<description>Nested raid-0 by partitioning SSD multiple times will increase the number of i/o&#039;s, allowing dispatch from multiple cpu&#039;s and controllers - resulting in a higher queue depth - since benchmarks show higher QD results in best performance the idea would be like an engine to reach peak power (QD) and stay there during the entire process. One might say you don&#039;t have any way to sustain that always? but there is the idea of read ahead which can be as simple as read next block if last 2 blocks are sequential as long as the queue depth is decreasing?

8 drives, 2 controllers, 2 sockets, 4 cores each socket - (or 8 controllers?) would allow 8 cpu execution threads on windows ? assuming you stripe the two ssd raids physical raid-0 into 4 logical drives per physical raid then strip them using windows software raid all back into one?

like a short stroke without the stroke - if an ssd can handle multiple reads at once faster than single - then creating logical volumes of smaller blocks would create artificial higher queue depth than a hard drive which mostly cannot read from two places at once (untrue there is a drive with more than one actuator). nested raid could use variable strip size and a smart i/o drive to create logical volumes of 4,16,64,128 and attempt to send &quot;like&quot; requests and use a simple QOS algorithm like a router.</description>
		<content:encoded><![CDATA[<p>Nested RAID-0, by partitioning an SSD multiple times, will increase the number of I/Os, allowing dispatch from multiple CPUs and controllers and resulting in a higher queue depth. Since benchmarks show that a higher QD gives the best performance, the idea would be like an engine: reach peak power (QD) and stay there for the entire process. One might say you have no way to always sustain that, but there is the idea of read-ahead, which can be as simple as reading the next block if the last 2 blocks were sequential, as long as the queue depth is decreasing.</p>
<p>8 drives, 2 controllers, 2 sockets, 4 cores per socket (or 8 controllers?) would allow 8 CPU execution threads on Windows, assuming you stripe the two physical SSD RAID-0s into 4 logical drives per physical RAID and then stripe them all back into one using Windows software RAID.</p>
<p>Like a short stroke without the stroke: if an SSD can handle multiple reads at once faster than a single read, then creating logical volumes of smaller blocks would create an artificially higher queue depth than a hard drive, which mostly cannot read from two places at once (not entirely true; there is a drive with more than one actuator). Nested RAID could use a variable stripe size and a smart I/O driver to create logical volumes of 4, 16, 64, and 128, attempt to batch "like" requests, and use a simple QoS algorithm like a router.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Luke</title>
		<link>http://www.linux-mag.com/id/7928/#comment-198543</link>
		<dc:creator>Luke</dc:creator>
		<pubDate>Tue, 24 Apr 2012 10:03:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-198543</guid>
		<description>Ongoing RAID10 nightmare.

http://www.dslreports.com/front/shutdown.html

https://docs.google.com/document/d/1kll86bDn_MgWoo6Ja7oHo_yvI0SCqggEvNWwPWIcrHY/edit</description>
		<content:encoded><![CDATA[<p>Ongoing RAID10 nightmare.</p>
<p><a href="http://www.dslreports.com/front/shutdown.html" rel="nofollow">http://www.dslreports.com/front/shutdown.html</a></p>
<p><a href="https://docs.google.com/document/d/1kll86bDn_MgWoo6Ja7oHo_yvI0SCqggEvNWwPWIcrHY/edit" rel="nofollow">https://docs.google.com/document/d/1kll86bDn_MgWoo6Ja7oHo_yvI0SCqggEvNWwPWIcrHY/edit</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dog</title>
		<link>http://www.linux-mag.com/id/7928/#comment-8822</link>
		<dc:creator>dog</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-8822</guid>
		<description>&lt;p&gt;I think you&#039;re missing &quot;n&quot; in some of your total capacity calculations. e.g. &lt;/p&gt;
&lt;p&gt;Capacity = 3/5 * capacity of single disk&lt;/p&gt;
&lt;p&gt;should read:&lt;br /&gt;
Capacity = 3n/5 * capacity of single disk&lt;/p&gt;
&lt;p&gt;no?
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>I think you&#8217;re missing &#8220;n&#8221; in some of your total capacity calculations. e.g. </p>
<p>Capacity = 3/5 * capacity of single disk</p>
<p>should read:<br />
Capacity = 3n/5 * capacity of single disk</p>
<p>no?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: howellan</title>
		<link>http://www.linux-mag.com/id/7928/#comment-8823</link>
		<dc:creator>howellan</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-8823</guid>
		<description>&lt;p&gt;I think there&#039;s something wrong.  In the example of a Linux raid configuration with m=2 and f=2, the diagram shows 4 copies of each piece of data.  If this is true, then the capacity would be &quot;(n/4) * capacity of a single disk.&quot;
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>I think there&#8217;s something wrong.  In the example of a Linux raid configuration with m=2 and f=2, the diagram shows 4 copies of each piece of data.  If this is true, then the capacity would be &#8220;(n/4) * capacity of a single disk.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: aldenrw</title>
		<link>http://www.linux-mag.com/id/7928/#comment-8824</link>
		<dc:creator>aldenrw</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-8824</guid>
		<description>&lt;p&gt;RAID 0+1 is a sign of incompetence. Look at the data layout for 0+1 and 1+0: it is identical. The only real difference between the two algorithms is that when a drive fails in a RAID 0+1, the RAID engine automatically shoots all the other drives on the same side of the mirror. That&#039;s just plain stupid. &lt;/p&gt;
&lt;p&gt;How it _should_ work is: if a user tries to create a RAID 0+1 volume, the RAID engine should make a RAID 1+0 volume instead.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>RAID 0+1 is a sign of incompetence. Look at the data layout for 0+1 and 1+0: it is identical. The only real difference between the two algorithms is that when a drive fails in a RAID 0+1, the RAID engine automatically shoots all the other drives on the same side of the mirror. That&#8217;s just plain stupid. </p>
<p>How it _should_ work is: if a user tries to create a RAID 0+1 volume, the RAID engine should make a RAID 1+0 volume instead.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: buggsy2</title>
		<link>http://www.linux-mag.com/id/7928/#comment-8825</link>
		<dc:creator>buggsy2</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-8825</guid>
		<description>&lt;p&gt;Wasn&#039;t RAID5 supposed to do what RAID10 does? That is, improve both speed and reliability. Hope you compare those in a future article.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Wasn&#8217;t RAID5 supposed to do what RAID10 does? That is, improve both speed and reliability. Hope you compare those in a future article.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: perfmonk</title>
		<link>http://www.linux-mag.com/id/7928/#comment-8826</link>
		<dc:creator>perfmonk</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-8826</guid>
		<description>&lt;p&gt;RAID 5 will do 4 I/Os for a write.&lt;br /&gt;
RAID 10 will do 2 I/Os for a write.&lt;/p&gt;
&lt;p&gt;RAID 10 costs more in disk space, since its space efficiency is 1/2.&lt;br /&gt;
RAID 5 is cheaper in disk space; its space efficiency is 1 - 1/n.&lt;/p&gt;
&lt;p&gt;Performance-wise, RAID 10 is the better choice.&lt;br /&gt;
But very often money is management's first criterion ...&lt;/p&gt;
&lt;p&gt;Wikipedia has an excellent explanation of the subtleties between RAID types.&lt;br /&gt;
See http://en.wikipedia.org/wiki/RAID
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>RAID 5 will do 4 I/Os for a write.<br />
RAID 10 will do 2 I/Os for a write.</p>
<p>RAID 10 costs more in disk space, since its space efficiency is 1/2.<br />
RAID 5 is cheaper in disk space; its space efficiency is 1 - 1/n.</p>
<p>Performance-wise, RAID 10 is the better choice.<br />
But very often money is management's first criterion ...</p>
<p>Wikipedia has an excellent explanation of the subtleties between RAID types.<br />
See <a href="http://en.wikipedia.org/wiki/RAID" rel="nofollow">http://en.wikipedia.org/wiki/RAID</a></p>
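<p>To make the trade-off concrete, here is a small Python sketch of usable space for the two levels (my own illustration, not from the article):</p>

```python
def raid5_usable(n, disk_size=1.0):
    # RAID 5 spends one disk's worth of space on parity: (n-1)/n efficiency.
    return (n - 1) * disk_size

def raid10_usable(n, disk_size=1.0):
    # RAID 10 mirrors every block once: half the raw space is usable.
    return n * disk_size / 2

# With 8 disks of 1 TB each:
print(raid5_usable(8))   # -> 7.0 TB usable
print(raid10_usable(8))  # -> 4.0 TB usable
```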
]]></content:encoded>
	</item>
	<item>
		<title>By: davidbrown</title>
		<link>http://www.linux-mag.com/id/7928/#comment-8827</link>
		<dc:creator>davidbrown</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7928/#comment-8827</guid>
		<description>&lt;p&gt;The capacity and redundancy calculations seem to be very mixed-up here.&lt;/p&gt;
&lt;p&gt;For simple RAID-10, the article is correct.  The capacity of RAID-01 (which no one would ever use) and RAID-10 is n/2 in both cases, and both can tolerate the loss of any one disk.  If the array is made from more than 1 set of RAID-1 pairs, you can lose 1 disk from each pair - but not two disks from the same pair.&lt;/p&gt;
&lt;p&gt;But for mdadm RAID10, the article is mostly wrong.  It starts off by omitting the &quot;offset&quot; option as an alternative to &quot;near&quot; and &quot;far&quot; - though much less used than &quot;near&quot; or &quot;far&quot;, it is possibly slightly faster for some loads.&lt;/p&gt;
&lt;p&gt;The efficiency of single-mirror RAID10 is always 50%, or n/2 - regardless of the number of disks.  Think about it - each block of data is written to two disks.  So if you have three disks, the capacity is 3/2 - not 2/3 (or 2n/3).  Regardless of the number of disks, the RAID10 will tolerate the loss of any one disk.  If you have enough disks (4 or more), it /may/ tolerate the loss of other disks, depending on the spread of the data.&lt;/p&gt;
&lt;p&gt;With four-way mirroring (near-2 and far-2), the capacity is n/4 (not n/2), and it will tolerate the loss of any /3/ disks (assuming you have at least four disks).  This is the point of having multiple-copy mirroring - for each mirror, it costs you space for a duplication of the data, but you get extra redundancy.&lt;/p&gt;
&lt;p&gt;It is also possible to use three-way mirroring (such as far-3), for n/3 capacity and two disk redundancy.&lt;/p&gt;
&lt;p&gt;The whole last section of the article (from &quot;Linux is special&quot;) needs a rewrite, using correct information about sizes, redundancy, and speeds.  The maths is so simple for RAID-10, regardless of the number of disks and the types of mirrors, that it is hard to understand how the author got this so muddled.  It is good practice to read the relevant Wikipedia article before writing your own article!
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>The capacity and redundancy calculations seem to be very mixed-up here.</p>
<p>For simple RAID-10, the article is correct.  The capacity of RAID-01 (which no one would ever use) and RAID-10 is n/2 in both cases, and both can tolerate the loss of any one disk.  If the array is made from more than 1 set of RAID-1 pairs, you can lose 1 disk from each pair &#8211; but not two disks from the same pair.</p>
<p>But for mdadm RAID10, the article is mostly wrong.  It starts off by omitting the &#8220;offset&#8221; option as an alternative to &#8220;near&#8221; and &#8220;far&#8221; &#8211; though much less used than &#8220;near&#8221; or &#8220;far&#8221;, it is possibly slightly faster for some loads.</p>
<p>The efficiency of single-mirror RAID10 is always 50%, or n/2 &#8211; regardless of the number of disks.  Think about it &#8211; each block of data is written to two disks.  So if you have three disks, the capacity is 3/2 &#8211; not 2/3 (or 2n/3).  Regardless of the number of disks, the RAID10 will tolerate the loss of any one disk.  If you have enough disks (4 or more), it /may/ tolerate the loss of other disks, depending on the spread of the data.</p>
<p>With four-way mirroring (near-2 and far-2), the capacity is n/4 (not n/2), and it will tolerate the loss of any /3/ disks (assuming you have at least four disks).  This is the point of having multiple-copy mirroring &#8211; for each mirror, it costs you space for a duplication of the data, but you get extra redundancy.</p>
<p>It is also possible to use three-way mirroring (such as far-3), for n/3 capacity and two disk redundancy.</p>
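<p>All of these cases follow one rule, which a short Python sketch (my own notation, just an illustration) makes explicit:</p>

```python
def raid10_usable(num_disks, copies, disk_size=1.0):
    # mdadm raid10 stores `copies` copies of every block (the near, far,
    # and offset counts multiply), so usable space is raw space / copies.
    return num_disks * disk_size / copies

print(raid10_usable(3, 2))  # -> 1.5  (3 disks, 2 copies: 3/2, not 2/3)
print(raid10_usable(4, 4))  # -> 1.0  (near-2 plus far-2: n/4)
print(raid10_usable(6, 3))  # -> 2.0  (far-3: n/3)
```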
<p>The whole last section of the article (from &#8220;Linux is special&#8221;) needs a rewrite, using correct information about sizes, redundancy, and speeds.  The maths is so simple for RAID-10, regardless of the number of disks and the types of mirrors, that it is hard to understand how the author got this so muddled.  It is good practice to read the relevant Wikipedia article before writing your own article!</p>
]]></content:encoded>
	</item>
</channel>
</rss>