<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Metadata Performance of Four Linux File Systems</title>
	<atom:link href="http://www.linux-mag.com/id/7497/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7497/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: typhoidmary</title>
		<link>http://www.linux-mag.com/id/7497/#comment-6921</link>
		<dc:creator>typhoidmary</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7497/#comment-6921</guid>
		<description>&lt;p&gt;I thought this was interesting and useful. I would like to know how much hardware makes a difference: SATA vs. PATA vs. SCSI. Do different chipsets perform differently? And of course different drives with different cache sizes, RPMs, etc...
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>I thought this was interesting and useful. I would like to know how much hardware makes a difference: SATA vs. PATA vs. SCSI. Do different chipsets perform differently? And of course different drives with different cache sizes, RPMs, etc&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mbainter</title>
		<link>http://www.linux-mag.com/id/7497/#comment-6922</link>
		<dc:creator>mbainter</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7497/#comment-6922</guid>
		<description>&lt;p&gt;Definitely interesting, though I would&#039;ve liked to have seen xfs and the reiser3/reiser4 filesystems compared as well. There are some significant differences there that are worth considering.&lt;/p&gt;
&lt;p&gt;You should also include large files. Particularly with the advent of media servers and the like, being able to perform efficiently with large files is important, and that&#039;s not covered here.&lt;/p&gt;
&lt;p&gt;Last but not least, I&#039;d like to see some comparison of storage efficiency for these different types of files. If you can move them fast, that&#039;s great, but if you can&#039;t store a particular type of file efficiently and I&#039;m going to lose, say, 20% of my storage because of it, that&#039;s important to consider when making the choice.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Definitely interesting, though I would&#8217;ve liked to have seen xfs and the reiser3/reiser4 filesystems compared as well. There are some significant differences there that are worth considering.</p>
<p>You should also include large files. Particularly with the advent of media servers and the like, being able to perform efficiently with large files is important, and that&#8217;s not covered here.</p>
<p>Last but not least, I&#8217;d like to see some comparison of storage efficiency for these different types of files. If you can move them fast, that&#8217;s great, but if you can&#8217;t store a particular type of file efficiently and I&#8217;m going to lose, say, 20% of my storage because of it, that&#8217;s important to consider when making the choice.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7497/#comment-6923</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7497/#comment-6923</guid>
		<description>&lt;p&gt;In general I agree with both your comments (typhoidmary and mbainter). But let me comment really quickly on the details.&lt;/p&gt;
&lt;p&gt;@typhoidmary:&lt;br /&gt;
I would love to test different chipsets and different drives. I just need the money to buy it :)&lt;/p&gt;
&lt;p&gt;BTW - thanks for all of your comments. I&#039;ve noticed you read my articles and post comments. That&#039;s always appreciated.&lt;/p&gt;
&lt;p&gt;@mbainter:&lt;br /&gt;
I wanted to test xfs and the reiser filesystems, but I ran out of time and the article was getting a little long. I will try to do a follow-up at some point with those numbers (maybe next week).&lt;/p&gt;
&lt;p&gt;I also didn&#039;t do large files (400 MiB+?) because of time, but I do want to do those runs.&lt;/p&gt;
&lt;p&gt;For both of you - thanks for the comments.&lt;/p&gt;
&lt;p&gt;Jeff
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>In general I agree with both your comments (typhoidmary and mbainter). But let me comment really quickly on the details.</p>
<p>@typhoidmary:<br />
I would love to test different chipsets and different drives. I just need the money to buy it :)</p>
<p>BTW &#8211; thanks for all of your comments. I&#8217;ve noticed you read my articles and post comments. That&#8217;s always appreciated.</p>
<p>@mbainter:<br />
I wanted to test xfs and the reiser filesystems, but I ran out of time and the article was getting a little long. I will try to do a follow-up at some point with those numbers (maybe next week).</p>
<p>I also didn&#8217;t do large files (400 MiB+?) because of time, but I do want to do those runs.</p>
<p>For both of you &#8211; thanks for the comments.</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: chrisjoelly</title>
		<link>http://www.linux-mag.com/id/7497/#comment-6924</link>
		<dc:creator>chrisjoelly</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7497/#comment-6924</guid>
		<description>&lt;p&gt;Thanks for that comparison. &lt;/p&gt;
&lt;p&gt;Is it possible to include some other, less commonly used filesystems as well, e.g. GFS or GFS2, with various storage systems below, like DRBD? And tuning opportunities for filesystems in typical scenarios would be a great article too :-)&lt;/p&gt;
&lt;p&gt;Chris
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Thanks for that comparison. </p>
<p>Is it possible to include some other, less commonly used filesystems as well, e.g. GFS or GFS2, with various storage systems below, like DRBD? And tuning opportunities for filesystems in typical scenarios would be a great article too :-)</p>
<p>Chris</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: mdavid</title>
		<link>http://www.linux-mag.com/id/7497/#comment-6925</link>
		<dc:creator>mdavid</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7497/#comment-6925</guid>
		<description>&lt;p&gt;Hi Jeff,&lt;br /&gt;
I have read the article carefully, and have also read the review article about 9 years of FS and storage benchmarking.&lt;/p&gt;
&lt;p&gt;Let me make some remarks which I hope are constructive criticism.&lt;br /&gt;
I have downloaded the fdtree source.&lt;br /&gt;
First off, it&#039;s single-threaded.&lt;br /&gt;
Your machine has 8 GB of RAM.&lt;/p&gt;
&lt;p&gt;In my opinion, and from experience (I have also done some FS benchmarking), testing with a total file size &lt;= 1.5 x the amount of RAM can go to caches first. Your tests with small sizes amount to around 1.3-1.4 GB,&lt;/p&gt;
&lt;p&gt;while the tests with 4 MB files range between 12-16 GB in total.&lt;/p&gt;
&lt;p&gt;I think the results of &quot;creation&quot; are not so bound by caches, while removal can be cached, and that&#039;s why I think removal of files and dirs has timings which are quite small, to the extent of not being able to draw conclusions in certain cases.&lt;/p&gt;
&lt;p&gt;For metadata, and AFAIK, each file or dir has 4 KB for the inode (at least for ext3); I don&#039;t know for the others. One could imagine testing &quot;pure&quot; metadata with the *nix &quot;touch&quot; command, instead of dd, being careful to make a total of 12 GB / 4 KB = 3 million files+dirs, for example.&lt;/p&gt;
&lt;p&gt;Furthermore, you mention in the beginning why you are benchmarking metadata, but fdtree completely misses one very important operation, which is &quot;stat&quot;, or a read of the inode; there are some workloads where you write once and read many (even if it&#039;s small files).&lt;/p&gt;
&lt;p&gt;This leads to some suggestions:&lt;br /&gt;
I have recently used bonnie++ 1.03e, which also does metadata benchmarking, including the &quot;stat&quot; operation.&lt;br /&gt;
The iozone tarball includes an exec called fileop (though I never tried it).&lt;/p&gt;
&lt;p&gt;Though I imagine that you don&#039;t have a long time to do the testing (as some of us do), just to give you an example: in one of my last tests with the above bonnie++ version, each run could take between 2 to 4 hours depending on the filesystem, and I also ran it 10 times.&lt;/p&gt;
&lt;p&gt;Finally, if in your future tests you include the read/stat operation, just try mounting the FS with and without atime,diratime.&lt;/p&gt;
&lt;p&gt;OK, that&#039;s it. Sorry if I was too obvious in some things, or too strong; as I said, I tried to be constructive. Continue your good work.&lt;/p&gt;
&lt;p&gt;I read your other article at a time when I was starting some benchmarks, and I stopped to read it first.&lt;/p&gt;
&lt;p&gt;Regards,&lt;/p&gt;
&lt;p&gt;Mario David
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Hi Jeff,<br />
I have read the article carefully, and have also read the review article about 9 years of FS and storage benchmarking.</p>
<p>Let me make some remarks which I hope are constructive criticism.<br />
I have downloaded the fdtree source.<br />
First off, it&#8217;s single-threaded.<br />
Your machine has 8 GB of RAM.</p>
<p>In my opinion, and from experience (I have also done some FS benchmarking), testing with a total file size &lt;= 1.5 x the amount of RAM can go to caches first. Your tests with small sizes amount to around 1.3-1.4 GB,</p>
<p>while the tests with 4 MB files range between 12-16 GB in total.</p>
<p>I think the results of &#8220;creation&#8221; are not so bound by caches, while removal can be cached, and that&#8217;s why I think removal of files and dirs has timings which are quite small, to the extent of not being able to draw conclusions in certain cases.</p>
<p>For metadata, and AFAIK, each file or dir has 4 KB for the inode (at least for ext3); I don&#8217;t know for the others. One could imagine testing &#8220;pure&#8221; metadata with the *nix &#8220;touch&#8221; command, instead of dd, being careful to make a total of 12 GB / 4 KB = 3 million files+dirs, for example.</p>
<p>Furthermore, you mention in the beginning why you are benchmarking metadata, but fdtree completely misses one very important operation, which is &#8220;stat&#8221;, or a read of the inode; there are some workloads where you write once and read many (even if it&#8217;s small files).</p>
<p>This leads to some suggestions:<br />
I have recently used bonnie++ 1.03e, which also does metadata benchmarking, including the &#8220;stat&#8221; operation.<br />
The iozone tarball includes an exec called fileop (though I never tried it).</p>
<p>Though I imagine that you don&#8217;t have a long time to do the testing (as some of us do), just to give you an example: in one of my last tests with the above bonnie++ version, each run could take between 2 to 4 hours depending on the filesystem, and I also ran it 10 times.</p>
<p>Finally, if in your future tests you include the read/stat operation, just try mounting the FS with and without atime,diratime.</p>
<p>OK, that&#8217;s it. Sorry if I was too obvious in some things, or too strong; as I said, I tried to be constructive. Continue your good work.</p>
<p>I read your other article at a time when I was starting some benchmarks, and I stopped to read it first.</p>
<p>Regards,</p>
<p>Mario David</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: laytonjb</title>
		<link>http://www.linux-mag.com/id/7497/#comment-6926</link>
		<dc:creator>laytonjb</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7497/#comment-6926</guid>
		<description>&lt;p&gt;@mdavid,&lt;/p&gt;
&lt;p&gt;I think you have some interesting points but let me explain a few things. &lt;/p&gt;
&lt;p&gt;fdtree, while a simple bash benchmark, also uses all of the cores on my test box. While I didn&#039;t show the image, I have a picture of gkrellm while the benchmark is running. All 4 cores are being used. I&#039;m not entirely sure how this works, but I think it&#039;s because of the recursion in the script. But this shows how little I know about bash.&lt;/p&gt;
&lt;p&gt;Second, fdtree is not an all-encompassing benchmark. It only tests file and directory creation and removal in a specific order. I&#039;m hoping to test another benchmark named mdtree, which also stresses other aspects of metadata performance.&lt;/p&gt;
&lt;p&gt;Third, to be honest, I&#039;m not sure about the caching aspect of fdtree. Linux might cache the file operations, but since there are so many, I&#039;m not sure if it does or doesn&#039;t. Perhaps the recursion affects the caching. Something to look into (thanks for pointing that out).&lt;/p&gt;
&lt;p&gt;One thing I didn&#039;t do and should have done was to watch the CPU load during the runs. I sort of watched it using gkrellm, but I didn&#039;t gather any statistics.&lt;/p&gt;
&lt;p&gt;But you do correctly point out that almost no benchmark stresses every aspect that you are interested in. As you note, fdtree doesn&#039;t stress stat. Other benchmarks will stress the file systems in a different manner. For example, as you mention, Bonnie++ does stress metadata operations and is perhaps a reasonable benchmark to test.&lt;/p&gt;
&lt;p&gt;Thanks for your comments. They are really appreciated. Don&#039;t hesitate to post.&lt;/p&gt;
&lt;p&gt;Thanks!&lt;/p&gt;
&lt;p&gt;Jeff
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>@mdavid,</p>
<p>I think you have some interesting points but let me explain a few things. </p>
<p>fdtree, while a simple bash benchmark, also uses all of the cores on my test box. While I didn&#8217;t show the image, I have a picture of gkrellm while the benchmark is running. All 4 cores are being used. I&#8217;m not entirely sure how this works, but I think it&#8217;s because of the recursion in the script. But this shows how little I know about bash.</p>
<p>Second, fdtree is not an all-encompassing benchmark. It only tests file and directory creation and removal in a specific order. I&#8217;m hoping to test another benchmark named mdtree, which also stresses other aspects of metadata performance.</p>
<p>Third, to be honest, I&#8217;m not sure about the caching aspect of fdtree. Linux might cache the file operations, but since there are so many, I&#8217;m not sure if it does or doesn&#8217;t. Perhaps the recursion affects the caching. Something to look into (thanks for pointing that out).</p>
<p>One thing I didn&#8217;t do and should have done was to watch the CPU load during the runs. I sort of watched it using gkrellm, but I didn&#8217;t gather any statistics.</p>
<p>But you do correctly point out that almost no benchmark stresses every aspect that you are interested in. As you note, fdtree doesn&#8217;t stress stat. Other benchmarks will stress the file systems in a different manner. For example, as you mention, Bonnie++ does stress metadata operations and is perhaps a reasonable benchmark to test.</p>
<p>Thanks for your comments. They are really appreciated. Don&#8217;t hesitate to post.</p>
<p>Thanks!</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
</channel>
</rss>