<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: SandForce 1222 SSD Testing, Part 1: Initial Throughput Results</title>
	<atom:link href="http://www.linux-mag.com/id/8477/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/8477/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: venu a</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9479</link>
		<dc:creator>venu a</dc:creator>
		<pubDate>Sat, 30 Apr 2011 03:21:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9479</guid>
		<description>This article has given a very nice heads-up on IOPS &amp; SSD controllers.</description>
		<content:encoded><![CDATA[<p>This article has given a very nice heads-up on IOPS &amp; SSD controllers.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Edward Overton</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9434</link>
		<dc:creator>Edward Overton</dc:creator>
		<pubDate>Wed, 20 Apr 2011 13:12:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9434</guid>
		<description>I was using windose at that time :(. The drive would go off-line and I would get a BSOD, or sometimes it would reboot and halt at &quot;Could not find bootable drive&quot;.  I would power off, wait, then power on, and everything was then OK.  I confirmed mine was an Adata by the small manual the drive came with and by googling.  The issue did not look like it was OS-related.</description>
		<content:encoded><![CDATA[<p>I was using windose at that time :(. The drive would go off-line and I would get a BSOD, or sometimes it would reboot and halt at &#8220;Could not find bootable drive&#8221;.  I would power off, wait, then power on, and everything was then OK.  I confirmed mine was an Adata by the small manual the drive came with and by googling.  The issue did not look like it was OS-related.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jeffrey Layton</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9395</link>
		<dc:creator>Jeffrey Layton</dc:creator>
		<pubDate>Mon, 11 Apr 2011 12:49:33 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9395</guid>
		<description>Was this the same MicroCenter drive that I tested? I was told it was an Adata drive but I haven&#039;t been able to confirm that.

What were the symptoms of the drive going off-line? What distro/kernel were you using?

Thanks!

Jeff</description>
		<content:encoded><![CDATA[<p>Was this the same MicroCenter drive that I tested? I was told it was an Adata drive but I haven&#8217;t been able to confirm that.</p>
<p>What were the symptoms of the drive going off-line? What distro/kernel were you using?</p>
<p>Thanks!</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Edward Overton</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9372</link>
		<dc:creator>Edward Overton</dc:creator>
		<pubDate>Wed, 06 Apr 2011 15:16:34 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9372</guid>
		<description>I had issues with my drive going off-line randomly.  It seems it was a firmware issue.  But which firmware? See http://ssdtechnologyforum.com/threads/835-Sandforce-SSD-Firmware-Version-Confusion.  So I upgraded my drive&#039;s firmware from the Adata site.  The drive does not have the issue anymore.</description>
		<content:encoded><![CDATA[<p>I had issues with my drive going off-line randomly.  It seems it was a firmware issue.  But which firmware? See <a href="http://ssdtechnologyforum.com/threads/835-Sandforce-SSD-Firmware-Version-Confusion" rel="nofollow">http://ssdtechnologyforum.com/threads/835-Sandforce-SSD-Firmware-Version-Confusion</a>.  So I upgraded my drive&#8217;s firmware from the Adata site.  The drive does not have the issue anymore.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: detroitgeek</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9343</link>
		<dc:creator>detroitgeek</dc:creator>
		<pubDate>Sun, 03 Apr 2011 14:46:41 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9343</guid>
		<description>I have been looking at an SSD to put my OS on, and plan on having my home directory on a standard drive.  I worry about the lifetime of the SSD under these conditions because of all of the writing the OS does.  My /var/ directory would also be on a standard drive.  Is my concern realistic?</description>
		<content:encoded><![CDATA[<p>I have been looking at an SSD to put my OS on, and plan on having my home directory on a standard drive.  I worry about the lifetime of the SSD under these conditions because of all of the writing the OS does.  My /var/ directory would also be on a standard drive.  Is my concern realistic?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Christian Storm</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9338</link>
		<dc:creator>Christian Storm</dc:creator>
		<pubDate>Fri, 01 Apr 2011 18:25:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9338</guid>
		<description>I&#039;ve been reading about and testing SSDs for years and am finally leaving my first comment.  I&#039;m doing so because none of the benchmarks I&#039;ve read tests the Achilles heel of SSDs, which happens to be our production workload.

I would suggest doing a mixed random read/write workload on a 64GB file (the full extent of the drive) with a 4k write size that runs for a long time, e.g., a day, to arrive at steady-state behavior.  When I was working with their engineers while beta&#039;ing Fusion-io&#039;s ioDrive, they said this is the most torturous workload they&#039;ve ever seen.  They had to make a number of changes to the driver for us as a result.  Caches get quickly overwhelmed; wear leveling/grooming quickly gets pinned shuffling blocks around and can lead to huge periodic drops in performance unless they are amortized over time (SSDs are over-provisioned under the hood to help with this); block-aligning/elevator algorithms don&#039;t help due to the randomness; the small IO size kills throughput; the mixed nature of the r/w IOPS (especially when done in parallel) can cause havoc with the rewrite algorithm; etc.  The dirty little secret in the industry is to quote inflated random IOPS performance using a file that is 1/4-1/3 the size of the drive.</description>

Another surprise that we&#039;ve found during testing is how drives perform as you increase the number of parallel read/write threads.  With Fusion, for instance, it doesn&#039;t make much of a difference positively or negatively.  Virident&#039;s tachIOn drive, however, tripled in performance!  We were blown away.  FYI this is the best SSD we&#039;ve tested to date.

Ok, that was cathartic :)  Thanks for letting me rant.

Thanks for the great article and I look forward to the rest.</description>
		<content:encoded><![CDATA[<p>I&#8217;ve been reading about and testing SSDs for years and am finally leaving my first comment.  I&#8217;m doing so because none of the benchmarks I&#8217;ve read tests the Achilles heel of SSDs, which happens to be our production workload.</p>
<p>I would suggest doing a mixed random read/write workload on a 64GB file (the full extent of the drive) with a 4k write size that runs for a long time, e.g., a day, to arrive at steady-state behavior.  When I was working with their engineers while beta&#8217;ing Fusion-io&#8217;s ioDrive, they said this is the most torturous workload they&#8217;ve ever seen.  They had to make a number of changes to the driver for us as a result.  Caches get quickly overwhelmed; wear leveling/grooming quickly gets pinned shuffling blocks around and can lead to huge periodic drops in performance unless they are amortized over time (SSDs are over-provisioned under the hood to help with this); block-aligning/elevator algorithms don&#8217;t help due to the randomness; the small IO size kills throughput; the mixed nature of the r/w IOPS (especially when done in parallel) can cause havoc with the rewrite algorithm; etc.  The dirty little secret in the industry is to quote inflated random IOPS performance using a file that is 1/4-1/3 the size of the drive.</p>
<p>Another surprise that we&#8217;ve found during testing is how drives perform as you increase the number of parallel read/write threads.  With Fusion, for instance, it doesn&#8217;t make much of a difference positively or negatively.  Virident&#8217;s tachIOn drive, however, tripled in performance!  We were blown away.  FYI this is the best SSD we&#8217;ve tested to date.</p>
<p>Ok, that was cathartic :)  Thanks for letting me rant.</p>
<p>Thanks for the great article and I look forward to the rest.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: sdanpo</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9334</link>
		<dc:creator>sdanpo</dc:creator>
		<pubDate>Fri, 01 Apr 2011 15:53:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9334</guid>
		<description>Excellent article!

I liked the thoroughness of the test and the great data derived from it.
Looking forward to the coming parts.

Disclaimer: This comment is written by an Anobit employee.
Anobit is an Enterprise SSD vendor with Data-pattern-Agnostic behavior.</description>
		<content:encoded><![CDATA[<p>Excellent article!</p>
<p>I liked the thoroughness of the test and the great data derived form it.<br />
Looking forward to the coming parts.</p>
<p>Disclaimer: This comment is written by an Anobit employee.<br />
Anobit is an Enterprise SSD vendor with Data-pattern-Agnostic behavior.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jeffrey Layton</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9332</link>
		<dc:creator>Jeffrey Layton</dc:creator>
		<pubDate>Fri, 01 Apr 2011 13:44:23 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9332</guid>
		<description>Stay tuned! That is my plan for the last part in this series.

The next part will cover initial IOPS performance. Part 3 will cover a more in-depth throughput study (and comparison to an Intel SSD). Part 4 will do the same more in-depth study and comparison but for IOPS. Then Part 5 will compare the 2.6.32 kernel to the latest kernel (probably 2.6.37 but maybe 2.6.38 if it comes out).

Jeff</description>
		<content:encoded><![CDATA[<p>Stay tuned! That is my plan for the last part in this series.</p>
<p>The next part will cover initial IOPS performance. Part 3 will cover a more in-depth throughput study (and comparison to an Intel SSD). Part 4 will do the same more in-depth study and comparison but for IOPS. Then Part 5 will compare the 2.6.32 kernel to the latest kernel (probably 2.6.37 but maybe 2.6.38 if it comes out).</p>
<p>Jeff</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: PJ Welsh</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9330</link>
		<dc:creator>PJ Welsh</dc:creator>
		<pubDate>Fri, 01 Apr 2011 13:13:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9330</guid>
		<description>Ahh, forget 2.6.37! Add the &lt;a href=&quot;http://elrepo.org/tiki/tiki-index.php?page=kernel-ml&quot; rel=&quot;nofollow&quot;&gt;mainline kernel&lt;/a&gt; tracker repo with version 2.6.38 (currently) from the GREAT folks at &lt;a href=&quot;http://elrepo.org/&quot; rel=&quot;nofollow&quot;&gt;ElRepo&lt;/a&gt;.</description>
		<content:encoded><![CDATA[<p>Ahh, forget 2.6.37! Add the <a href="http://elrepo.org/tiki/tiki-index.php?page=kernel-ml" rel="nofollow">mainline kernel</a> tracker repo with version 2.6.38 (currently) from the GREAT folks at <a href="http://elrepo.org/" rel="nofollow">ElRepo</a>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: solanum</title>
		<link>http://www.linux-mag.com/id/8477/#comment-9329</link>
		<dc:creator>solanum</dc:creator>
		<pubDate>Fri, 01 Apr 2011 12:57:43 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8477#comment-9329</guid>
		<description>Can you rerun the performance tests with a 2.6.37.x kernel? They made a number of changes to the block device layer, and I am wondering how much that impacts the performance. :)</description>
		<content:encoded><![CDATA[<p>Can you rerun the performance tests with a 2.6.37.x kernel? They made a number of changes to the block device layer, and I am wondering how much that impacts the performance. :)</p>
]]></content:encoded>
	</item>
</channel>
</rss>