<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Churning Butter(FS): An Interview with Chris Mason</title>
	<atom:link href="http://www.linux-mag.com/id/7329/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7329/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: ttsiodras</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6434</link>
		<dc:creator>ttsiodras</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6434</guid>
		<description>I appreciate the efforts involved in creating BTRFS - Linux *needs* a copy-on-write FS.&lt;br /&gt;
&lt;br /&gt;
In the company I work for, we are already using ZFS (via OpenSolaris) to create practically unlimited daily backups of our virtual machines (huge VMware server .vmdk files whose data differs by less than 1% from day to day - only a copy-on-write FS could handle this well). The only problem I see with BTRFS is that it will take quite some time before we can trust it as much as ZFS... Filesystems need a lot of time to iron out obscure race conditions and rare usage patterns... I hope BTRFS will catch up quickly... and I feel good about Oracle owning both of them - no danger of patent wars on BTRFS!</description>
		<content:encoded><![CDATA[<p>I appreciate the efforts involved in creating BTRFS &#8211; Linux *needs* a copy-on-write FS.</p>
<p>In the company I work for, we are already using ZFS (via OpenSolaris) to create practically unlimited daily backups of our virtual machines (huge VMware server .vmdk files whose data differs by less than 1% from day to day &#8211; only a copy-on-write FS could handle this well). The only problem I see with BTRFS is that it will take quite some time before we can trust it as much as ZFS&#8230; Filesystems need a lot of time to iron out obscure race conditions and rare usage patterns&#8230; I hope BTRFS will catch up quickly&#8230; and I feel good about Oracle owning both of them &#8211; no danger of patent wars on BTRFS!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dog</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6435</link>
		<dc:creator>dog</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6435</guid>
		<description>How long did you wait to trust ZFS? It&#039;s not that old. And Oracle owns btrfs?</description>
		<content:encoded><![CDATA[<p>How long did you wait to trust ZFS? It&#8217;s not that old. And Oracle owns btrfs?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: neondiet</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6436</link>
		<dc:creator>neondiet</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6436</guid>
		<description>Is it possible with a raid10 filesystem to control which devices contain which halves of the mirror? I&#039;ll give you an example of why I&#039;m asking. I&#039;ve previously built raid10 volumes on HP-UX. HP&#039;s implementation of LVM includes a feature called Physical Volume Groups. Disks can be included in an LVM VG and then bunched into PVGs so that when creating a mirrored LV the mirror is split across PVGs. On systems with dual raid cards, this has allowed me to put all the disks attached to one raid card in one PVG and the disks attached to the other in a second PVG. The result is that I/O to a mirrored LV gets split evenly between both raid cards, doubling the available bandwidth. In addition, it protects my LVs from a complete failure of a single raid controller. To achieve the same result on Linux today, I must first use md to mirror devices across raid cards before adding the md devices to an LVM VG. Then I create LVs as normal and leave the mirroring to md to sort out. It&#039;s a perfectly workable solution, but given that the ultimate goal of btrfs is to make md redundant, will we be able to achieve the same result in btrfs somehow? The wiki (linked in the article) doesn&#039;t hint at this. Thanks.</description>
		<content:encoded><![CDATA[<p>Is it possible with a raid10 filesystem to control which devices contain which halves of the mirror? I&#8217;ll give you an example of why I&#8217;m asking. I&#8217;ve previously built raid10 volumes on HP-UX. HP&#8217;s implementation of LVM includes a feature called Physical Volume Groups. Disks can be included in an LVM VG and then bunched into PVGs so that when creating a mirrored LV the mirror is split across PVGs. On systems with dual raid cards, this has allowed me to put all the disks attached to one raid card in one PVG and the disks attached to the other in a second PVG. The result is that I/O to a mirrored LV gets split evenly between both raid cards, doubling the available bandwidth. In addition, it protects my LVs from a complete failure of a single raid controller. To achieve the same result on Linux today, I must first use md to mirror devices across raid cards before adding the md devices to an LVM VG. Then I create LVs as normal and leave the mirroring to md to sort out. It&#8217;s a perfectly workable solution, but given that the ultimate goal of btrfs is to make md redundant, will we be able to achieve the same result in btrfs somehow? The wiki (linked in the article) doesn&#8217;t hint at this. Thanks.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: bugmenot</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6437</link>
		<dc:creator>bugmenot</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6437</guid>
		<description>ZFS has been in Solaris for almost 3 years now, and it was in OpenSolaris before that.</description>
		<content:encoded><![CDATA[<p>ZFS has been in Solaris for almost 3 years now, and it was in OpenSolaris before that.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: liotier</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6438</link>
		<dc:creator>liotier</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6438</guid>
		<description>&lt;blockquote cite=&quot;Article&quot;&gt;Devices can be mixed in size and speed, and over the long term Btrfs will do the right thing to optimize access&lt;/blockquote&gt;&lt;br /&gt;
&lt;br /&gt;
You mean that the user can throw in a motley mix of whatever he has, including devices of wildly different performance profiles such as hard disks and SSDs, and that Btrfs will allocate data to the right device according to file size, block size, and whatever other parameters may be relevant? Would that be a sort of integrated hierarchical file system, with data moving according to usage patterns? Or simpler heuristics, such as storing small files on low-latency / low-throughput devices such as SSDs and large files on high-latency / high-throughput devices such as hard disks? I find the &quot;do the right thing&quot; quote intriguing.</description>
		<content:encoded><![CDATA[<blockquote cite="Article"><p>Devices can be mixed in size and speed, and over the long term Btrfs will do the right thing to optimize access</p></blockquote>
<p>You mean that the user can throw in a motley mix of whatever he has, including devices of wildly different performance profiles such as hard disks and SSDs, and that Btrfs will allocate data to the right device according to file size, block size, and whatever other parameters may be relevant? Would that be a sort of integrated hierarchical file system, with data moving according to usage patterns? Or simpler heuristics, such as storing small files on low-latency / low-throughput devices such as SSDs and large files on high-latency / high-throughput devices such as hard disks? I find the &#8220;do the right thing&#8221; quote intriguing.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: brmiller0423</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6439</link>
		<dc:creator>brmiller0423</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6439</guid>
		<description>I have been trying to educate myself about RAID but have never actually set up a RAID configuration.&lt;br /&gt;
&lt;br /&gt;
When you speak of RAID 10, are you referring to &quot;traditional&quot; RAID 1+0, or to Linux MD RAID 10? Wikipedia offers a useful explanation: &lt;a href=&quot;http://en.wikipedia.org/wiki/Non-standard_RAID_levels&quot; rel=&quot;nofollow&quot;&gt;Non-standard RAID levels&lt;/a&gt;. The kernel.org wiki which you refer to includes the sentence &quot;Raid10 requires at least 4 devices.&quot;, which implies that you mean RAID 1+0.&lt;br /&gt;
&lt;br /&gt;
Would it not be more beneficial to the community to implement Linux MD RAID 10 before implementing RAID 5 and 6?</description>
		<content:encoded><![CDATA[<p>I have been trying to educate myself about RAID but have never actually set up a RAID configuration.</p>
<p>When you speak of RAID 10, are you referring to &#8220;traditional&#8221; RAID 1+0, or to Linux MD RAID 10? Wikipedia offers a useful explanation: <a href="http://en.wikipedia.org/wiki/Non-standard_RAID_levels" rel="nofollow">Non-standard RAID levels</a>. The kernel.org wiki which you refer to includes the sentence &#8220;Raid10 requires at least 4 devices.&#8221;, which implies that you mean RAID 1+0.</p>
<p>Would it not be more beneficial to the community to implement Linux MD RAID 10 before implementing RAID 5 and 6?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: brmiller0423</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6440</link>
		<dc:creator>brmiller0423</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6440</guid>
		<description>Sorry for snafu in previous post. Here&#039;s wishing that the site administrator implements a preview and/or a delete by author function.&lt;br /&gt;
&lt;br /&gt;
I have been trying to educate myself about RAID but have never actually set up a RAID configuration.&lt;br /&gt;
&lt;br /&gt;
When you speak of RAID 10, are you referring to &quot;traditional&quot; RAID 1+0, or to Linux MD RAID 10? Wikipedia offers a useful explanation: &lt;a href=&quot;http://en.wikipedia.org/wiki/Non-standard_RAID_levels&quot; title=&quot;Non-standard RAID levels&quot; rel=&quot;nofollow&quot;&gt;Non-standard RAID levels&lt;/a&gt;. The kernel.org wiki which you refer to includes the sentence &quot;Raid10 requires at least 4 devices.&quot;, which implies that you mean RAID 1+0.&lt;br /&gt;
&lt;br /&gt;
Would it not be more beneficial to the community to implement Linux MD RAID 10 before implementing RAID 5 and 6?</description>
		<content:encoded><![CDATA[<p>Sorry for snafu in previous post. Here&#8217;s wishing that the site administrator implements a preview and/or a delete by author function.</p>
<p>I have been trying to educate myself about RAID but have never actually set up a RAID configuration.</p>
<p>When you speak of RAID 10, are you referring to &#8220;traditional&#8221; RAID 1+0, or to Linux MD RAID 10? Wikipedia offers a useful explanation: <a href="http://en.wikipedia.org/wiki/Non-standard_RAID_levels" title="Non-standard RAID levels" rel="nofollow">Non-standard RAID levels</a>. The kernel.org wiki which you refer to includes the sentence &#8220;Raid10 requires at least 4 devices.&#8221;, which implies that you mean RAID 1+0.</p>
<p>Would it not be more beneficial to the community to implement Linux MD RAID 10 before implementing RAID 5 and 6?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6441</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6441</guid>
		<description>From what I&#039;ve been reading, ZFS development began some 6-8 years ago. As ttsiodras pointed out, it takes a long time for people to trust a new file system with their data. This was also true for ZFS: it took those 6-8 years for people to trust ZFS enough to start using it.&lt;br /&gt;
&lt;br /&gt;
I think the same will be true for btrfs. It&#039;s only been in development 1-2 years, so it will take some more time before it becomes accepted for critical data. I&#039;m hopeful, however, because not only do we have Oracle behind it but also a great deal of the Linux community, including many of the &quot;heavy hitters&quot;. I may even go out on a limb and say that within 2 years btrfs will become much more accepted on Linux systems (but I will say that my track record on bets such as these isn&#039;t the best).&lt;br /&gt;
&lt;br /&gt;
But overall it&#039;s not a race.</description>
		<content:encoded><![CDATA[<p>From what I&#8217;ve been reading, ZFS development began some 6-8 years ago. As ttsiodras pointed out, it takes a long time for people to trust a new file system with their data. This was also true for ZFS: it took those 6-8 years for people to trust ZFS enough to start using it.</p>
<p>I think the same will be true for btrfs. It&#8217;s only been in development 1-2 years, so it will take some more time before it becomes accepted for critical data. I&#8217;m hopeful, however, because not only do we have Oracle behind it but also a great deal of the Linux community, including many of the &#8220;heavy hitters&#8221;. I may even go out on a limb and say that within 2 years btrfs will become much more accepted on Linux systems (but I will say that my track record on bets such as these isn&#8217;t the best).</p>
<p>But overall it&#8217;s not a race.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7329/#comment-6442</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7329/#comment-6442</guid>
		<description>I&#039;m not sure, to be honest. I think you can do this by using LVM first to create the two PVGs and then building btrfs on top of that.&lt;br /&gt;
&lt;br /&gt;
This might be a good question for the btrfs mailing list. In fact, it&#039;s early enough that you could influence features. :)</description>
		<content:encoded><![CDATA[<p>I&#8217;m not sure, to be honest. I think you can do this by using LVM first to create the two PVGs and then building btrfs on top of that.</p>
<p>This might be a good question for the btrfs mailing list. In fact, it&#8217;s early enough that you could influence features. :)</p>
]]></content:encoded>
	</item>
</channel>
</rss>