<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Introduction to RAID</title>
	<atom:link href="http://www.linux-mag.com/id/7924/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7924/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: abc</title>
		<link>http://www.linux-mag.com/id/7924/#comment-290703</link>
		<dc:creator>abc</dc:creator>
		<pubDate>Thu, 26 Jul 2012 03:58:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-290703</guid>
		<description>good</description>
		<content:encoded><![CDATA[<p>good</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Gregory</title>
		<link>http://www.linux-mag.com/id/7924/#comment-15285</link>
		<dc:creator>Gregory</dc:creator>
		<pubDate>Mon, 31 Oct 2011 06:00:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-15285</guid>
		<description>&lt;i&gt;&quot;In the real-world, RAID-4 is rarely used because RAID-5 (see next sub-section) has replaced it.&quot;&lt;/i&gt;
Note that NetApp storage, which is very popular today, uses RAID-4 and RAID-DP only. RAID-DP is essentially RAID-4 plus one more parity disk.</description>
		<content:encoded><![CDATA[<p><i>&#8220;In the real-world, RAID-4 is rarely used because RAID-5 (see next sub-section) has replaced it.&#8221;</i><br />
Note that NetApp storage, which is very popular today, uses RAID-4 and RAID-DP only. RAID-DP is essentially RAID-4 plus one more parity disk.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: linux.haresh</title>
		<link>http://www.linux-mag.com/id/7924/#comment-9494</link>
		<dc:creator>linux.haresh</dc:creator>
		<pubDate>Thu, 05 May 2011 10:15:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-9494</guid>
		<description>A nice article that clarifies the idea of RAID for newcomers.</description>
		<content:encoded><![CDATA[<p>A nice article that clarifies the idea of RAID for newcomers.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: raji ravi</title>
		<link>http://www.linux-mag.com/id/7924/#comment-9315</link>
		<dc:creator>raji ravi</dc:creator>
		<pubDate>Wed, 30 Mar 2011 21:45:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-9315</guid>
		<description>Good and detailed article</description>
		<content:encoded><![CDATA[<p>Good and detailed article</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: karimbardee</title>
		<link>http://www.linux-mag.com/id/7924/#comment-9103</link>
		<dc:creator>karimbardee</dc:creator>
		<pubDate>Tue, 08 Mar 2011 01:53:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-9103</guid>
		<description>A nice article for someone who does not know anything about RAID (like me) and wants to know the basic definition or the general idea.
Thanks.</description>
		<content:encoded><![CDATA[<p>A nice article for someone who does not know anything about RAID (like me) and wants to know the basic definition or the general idea.<br />
Thanks.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: aotto</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8812</link>
		<dc:creator>aotto</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8812</guid>
		<description>&lt;p&gt;You have typos. Consider:&lt;/p&gt;
&lt;p&gt;$article_text =~ s/RIAD/RAID/g;&lt;/p&gt;
&lt;p&gt;Also, you did not explain how parity works, which is something that confuses RAID newcomers. They need to fathom the concept that the parity is combined with the surrounding data to compute what the original data was so that it can be recreated.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>You have typos. Consider:</p>
<p>$article_text =~ s/RIAD/RAID/g;</p>
<p>Also, you did not explain how parity works, which is something that confuses RAID newcomers. They need to fathom the concept that the parity is combined with the surrounding data to compute what the original data was so that it can be recreated.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: markhahn</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8813</link>
		<dc:creator>markhahn</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8813</guid>
		<description>&lt;p&gt;it&#039;s worth noting that MD provides non-nested raid10: it&#039;s a single raid level that merely provides replicas of blocks (on multiple disks, of course.)  with 2 disks, it&#039;s the same as raid1, but can still provide raid1-level redundancy with 3 or more disks.  more disks give you a raid0-like increase in bandwidth and/or throughput.  it also lets you choose replication of more than 2x.&lt;/p&gt;
&lt;p&gt;but in general, I think people are gradually realizing that block-level raid is eventually going to become obsolete.  there are a lot of advantages to letting a smart filesystem manage redundancy, since that permits file/access-aware choices, and can mitigate some of the issues of block-level raid rebuilds.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>it&#8217;s worth noting that MD provides non-nested raid10: it&#8217;s a single raid level that merely provides replicas of blocks (on multiple disks, of course.)  with 2 disks, it&#8217;s the same as raid1, but can still provide raid1-level redundancy with 3 or more disks.  more disks give you a raid0-like increase in bandwidth and/or throughput.  it also lets you choose replication of more than 2x.</p>
<p>but in general, I think people are gradually realizing that block-level raid is eventually going to become obsolete.  there are a lot of advantages to letting a smart filesystem manage redundancy, since that permits file/access-aware choices, and can mitigate some of the issues of block-level raid rebuilds.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: aslamnet</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8814</link>
		<dc:creator>aslamnet</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8814</guid>
		<description>&lt;p&gt;Thanks for such a well-structured article on RAID.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Thanks for such a well-structured article on RAID.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: roustabout</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8815</link>
		<dc:creator>roustabout</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8815</guid>
		<description>&lt;p&gt;RAID3 and RAID4 are considered to be &quot;two of the most common RAID levels&quot; by whom, exactly?  &lt;/p&gt;
&lt;p&gt;And not a peep about how any of these relate to linux in an article in something calling itself linuxmag?  &lt;/p&gt;
&lt;p&gt;Not even a mention of mdraid?
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>RAID3 and RAID4 are considered to be &#8220;two of the most common RAID levels&#8221; by whom, exactly?  </p>
<p>And not a peep about how any of these relate to linux in an article in something calling itself linuxmag?  </p>
<p>Not even a mention of mdraid?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: seenutn</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8816</link>
		<dc:creator>seenutn</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8816</guid>
		<description>&lt;p&gt;Nice article on RAID, but it would be nice if it covered where the RAID controller exists (in the BIOS, the kernel, or a separate controller), and also where the RAID controller stores its metadata.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Nice article on RAID, but it would be nice if it covered where the RAID controller exists (in the BIOS, the kernel, or a separate controller), and also where the RAID controller stores its metadata.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8817</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8817</guid>
		<description>&lt;p&gt;Thanks everyone for the comments. Just to clarify a bit:&lt;/p&gt;
&lt;p&gt;@roustabout: RAID-3 and RAID-4 were part of the original RAID definition. I don&#039;t see where I called them &quot;two of the most common RAID levels&quot;. If I did, the intent was to point out that they are part of the original RAID definitions, but not commonly _used_.&lt;/p&gt;
&lt;p&gt;For everyone who is concerned that I haven&#039;t talked about mdadm or software RAID, hardware RAID, or &quot;fakeRAID&quot; - that article is coming (as are articles about Nested RAID). This is a whole series of introductory articles on RAID. Talking about specific implementations, particularly for Linux, is coming. You just have to be patient. So @roustabout - you will just have to be patient :)&lt;/p&gt;
&lt;p&gt;@markhahn - great comment and I totally agree with you but I also disagree to some extent. Putting RAID functionality into the file system _should_ allow the file system to do really useful things such as only recover the needed blocks during a disk failure. This avoids having to read all of the blocks for recovery and perhaps coming close to the dreaded URE limit.&lt;/p&gt;
&lt;p&gt;But this means that we (the community) need to rewrite all file systems to do this. With each file system being unique, this means we are going to have different sets of code that do pretty much the same thing. I don&#039;t think existing file systems will do this (too much work and too disruptive), so that means future file systems should incorporate it (such as btrfs). However, it takes a very long time for a file system to mature, so we may be waiting for several years. So in the meantime, I think block-based RAID is here to stay.&lt;/p&gt;
&lt;p&gt;On the other hand, I think the development of object-based file systems that don&#039;t use block-based RAID should be the wave of the future. PanFS from Panasas is an example of this. I think local file systems should adopt this approach (and we&#039;re seeing some of this with ExoFS) because we don&#039;t need to read all the blocks to recover from a disk failure - just the objects that are &quot;missing&quot; or need to be duplicated.&lt;/p&gt;
&lt;p&gt;Thanks for bringing up the topic - it&#039;s always good to think about what we need to do in the next few years.&lt;/p&gt;
&lt;p&gt;Jeff
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Thanks everyone for the comments. Just to clarify a bit:</p>
<p>@roustabout: RAID-3 and RAID-4 were part of the original RAID definition. I don&#8217;t see where I called them &#8220;two of the most common RAID levels&#8221;. If I did, the intent was to point out that they are part of the original RAID definitions, but not commonly _used_.</p>
<p>For everyone who is concerned that I haven&#8217;t talked about mdadm or software RAID, hardware RAID, or &#8220;fakeRAID&#8221; &#8211; that article is coming (as are articles about Nested RAID). This is a whole series of introductory articles on RAID. Talking about specific implementations, particularly for Linux, is coming. You just have to be patient. So @roustabout &#8211; you will just have to be patient :)</p>
<p>@markhahn &#8211; great comment and I totally agree with you but I also disagree to some extent. Putting RAID functionality into the file system _should_ allow the file system to do really useful things such as only recover the needed blocks during a disk failure. This avoids having to read all of the blocks for recovery and perhaps coming close to the dreaded URE limit.</p>
<p>But this means that we (the community) need to rewrite all file systems to do this. With each file system being unique, this means we are going to have different sets of code that do pretty much the same thing. I don&#8217;t think existing file systems will do this (too much work and too disruptive), so that means future file systems should incorporate it (such as btrfs). However, it takes a very long time for a file system to mature, so we may be waiting for several years. So in the meantime, I think block-based RAID is here to stay.</p>
<p>On the other hand, I think the development of object-based file systems that don&#8217;t use block-based RAID should be the wave of the future. PanFS from Panasas is an example of this. I think local file systems should adopt this approach (and we&#8217;re seeing some of this with ExoFS) because we don&#8217;t need to read all the blocks to recover from a disk failure &#8211; just the objects that are &#8220;missing&#8221; or need to be duplicated.</p>
<p>Thanks for bringing up the topic &#8211; it&#8217;s always good to think about what we need to do in the next few years.</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: buggsy2</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8818</link>
		<dc:creator>buggsy2</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8818</guid>
		<description>&lt;p&gt;A great beginning article on RAID, and I look forward to more on this topic. You alluded to current disk drives doing their own parity checking/correction; I&#039;d like to see that explored more: just how much onboard data checking do they do? I&#039;ve heard that modern high-density drives generate huge numbers of errors from the raw disk which must be corrected in the onboard electronics of the drive, but I&#039;ve never seen anything definitive about this topic.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>A great beginning article on RAID, and I look forward to more on this topic. You alluded to current disk drives doing their own parity checking/correction; I&#8217;d like to see that explored more: just how much onboard data checking do they do? I&#8217;ve heard that modern high-density drives generate huge numbers of errors from the raw disk which must be corrected in the onboard electronics of the drive, but I&#8217;ve never seen anything definitive about this topic.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: casiquey2k</title>
		<link>http://www.linux-mag.com/id/7924/#comment-8819</link>
		<dc:creator>casiquey2k</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7924/#comment-8819</guid>
		<description>&lt;p&gt;I just wanted to make an observation: you start the third page by saying &quot;In this layout, data is written in block stripes to the first three disks (disks 0, 1, and 2) while the third drive (disk 3)&quot;, and I think what you meant to say is &quot;...while the fourth drive (disk 3)&quot; since your array starts at disk 0.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>I just wanted to make an observation: you start the third page by saying &#8220;In this layout, data is written in block stripes to the first three disks (disks 0, 1, and 2) while the third drive (disk 3)&#8221;, and I think what you meant to say is &#8220;&#8230;while the fourth drive (disk 3)&#8221; since your array starts at disk 0.</p>
]]></content:encoded>
	</item>
</channel>
</rss>