<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: SandForce 1222 SSD Testing &#8211; Part 2: Initial IOPS Results</title>
	<atom:link href="http://www.linux-mag.com/id/8510/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/8510/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: lliseil</title>
		<link>http://www.linux-mag.com/id/8510/#comment-727487</link>
		<dc:creator>lliseil</dc:creator>
		<pubDate>Fri, 18 Jan 2013 20:29:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8510#comment-727487</guid>
		<description>Thank you Jeff for publishing this rare and detailed test.

&gt; The SSD ... compresses the data before storing it [and, as I understand it, decompresses it before passing it back to the applications]
&gt; If your data is compressible then [as I understand it, a Sandforce-based SSD will give you top-notch R/W performance along with some storage benefit]

Now, how do &quot;compressibility&quot; and &quot;compression ratio&quot; compare between a Sandforce-based SSD controller and e.g. &#039;7z&#039; or &#039;tar gzip&#039;? I couldn&#039;t find the answer yet.

Here is how compressible some of the main directories on my Linux box are (tar gzip):
Path : uncompressed --&gt; tar.gz compressed  (real compressibility in %)
/var/log/journal : 590 MB --&gt; 124 MB  (79%)
So when stored on a Sandforce-based SSD, will these same files take _about_ the same low amount of space?
/var/cache : 563 MB --&gt; 560 MB  (0%)
/var/lib : 56 MB --&gt; 17 MB  (70%)
/usr/bin : 340 MB --&gt; 133 MB  (60%)
/usr/lib : 1571 MB --&gt; 658 MB  (58%)
Or for a single app, e.g. the open-source browser Mozilla Firefox:
/usr/lib/firefox/ : 44 MB --&gt; 23 MB  (48%)
Firefox user&#039;s profile (with quite a few add-ons) : 38 MB --&gt; 13 MB  (66%)
So real average compressibility is always over 50% with tar and gzip (except for the package cache, which isn&#039;t used 10 hours a day anyway). Man, not bad at all!

So let&#039;s say I&#039;m looking for an SSD to make apps and daemons load smoothly, fast and reliably ;) on the main PC I _work_ with. I&#039;ll put only the system and users&#039; configuration files on the SSD (while /tmp will be in RAM, and swap and data on the HDDs). If &quot;Sandforce compression&quot; is roughly comparable to &quot;tar gzip compression&quot;, then I should go for a Sandforce-based, synchronous-memory SSD without a moment&#039;s hesitation. And happily leave non-Sandforce SSD controllers like Crucial&#039;s or Samsung&#039;s for audio/video/games/Windows-based uses.
Or maybe Sandforce&#039;s &quot;compressibility&quot; has a rather different meaning than that of standard file compression; if so, it&#039;d be great to have it explained somewhere, as without that it is very misleading to the average Joe.

Another question, and pardon my ignorance: how does the Sandforce controller deal with many parallel compressions and decompressions of hundreds of files on a real-life multitasking workstation (from writing to the logs to loading various apps&#039; binaries, libs and configuration files, plus backup/synchronization)?</description>
		<content:encoded><![CDATA[<p>Thank you Jeff for publishing this rare and detailed test.</p>
<p>&gt; The SSD &#8230; compresses the data before storing it [and, as I understand it, decompresses it before passing it back to the applications]<br />
&gt; If your data is compressible then [as I understand it, a Sandforce-based SSD will give you top-notch R/W performance along with some storage benefit]</p>
<p>Now, how do &#8220;compressibility&#8221; and &#8220;compression ratio&#8221; compare between a Sandforce-based SSD controller and e.g. &#8216;7z&#8217; or &#8216;tar gzip&#8217;? I couldn&#8217;t find the answer yet.</p>
<p>Here is how compressible some of the main directories on my Linux box are (tar gzip):<br />
Path : uncompressed --&gt; tar.gz compressed  (real compressibility in %)<br />
/var/log/journal : 590 MB --&gt; 124 MB  (79%)<br />
So when stored on a Sandforce-based SSD, will these same files take _about_ the same low amount of space?<br />
/var/cache : 563 MB --&gt; 560 MB  (0%)<br />
/var/lib : 56 MB --&gt; 17 MB  (70%)<br />
/usr/bin : 340 MB --&gt; 133 MB  (60%)<br />
/usr/lib : 1571 MB --&gt; 658 MB  (58%)<br />
Or for a single app, e.g. the open-source browser Mozilla Firefox:<br />
/usr/lib/firefox/ : 44 MB --&gt; 23 MB  (48%)<br />
Firefox user&#8217;s profile (with quite a few add-ons) : 38 MB --&gt; 13 MB  (66%)<br />
So real average compressibility is always over 50% with tar and gzip (except for the package cache, which isn&#8217;t used 10 hours a day anyway). Man, not bad at all!</p>
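<p>The percentages above can be reproduced with a small helper along these lines (a sketch, not from the original measurement run; the <code>compressibility</code> function name is illustrative, and GNU <code>du -sb</code> is assumed, as on a typical Linux box):</p>

```shell
# Sketch: percentage of space saved when a directory is tar'd and gzipped,
# i.e. the "real compressibility in %" column above.
# Assumes GNU coreutils (du -sb = apparent size in bytes) on Linux.
compressibility() {
    raw=$(du -sb "$1" | cut -f1)                            # uncompressed size in bytes
    gz=$(tar -C "$1" -cf - . 2>/dev/null | gzip -c | wc -c) # bytes after tar + gzip
    echo $(( (raw - gz) * 100 / raw ))                      # percent saved
}

# Usage, e.g.: compressibility /usr/lib/firefox
```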
<p>So let&#8217;s say I&#8217;m looking for an SSD to make apps and daemons load smoothly, fast and reliably ;) on the main PC I _work_ with. I&#8217;ll put only the system and users&#8217; configuration files on the SSD (while /tmp will be in RAM, and swap and data on the HDDs). If &#8220;Sandforce compression&#8221; is roughly comparable to &#8220;tar gzip compression&#8221;, then I should go for a Sandforce-based, synchronous-memory SSD without a moment&#8217;s hesitation. And happily leave non-Sandforce SSD controllers like Crucial&#8217;s or Samsung&#8217;s for audio/video/games/Windows-based uses.<br />
Or maybe Sandforce&#8217;s &#8220;compressibility&#8221; has a rather different meaning than that of standard file compression; if so, it&#8217;d be great to have it explained somewhere, as without that it is very misleading to the average Joe.</p>
<p>Another question, and pardon my ignorance: how does the Sandforce controller deal with many parallel compressions and decompressions of hundreds of files on a real-life multitasking workstation (from writing to the logs to loading various apps&#8217; binaries, libs and configuration files, plus backup/synchronization)?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nouog</title>
		<link>http://www.linux-mag.com/id/8510/#comment-27401</link>
		<dc:creator>nouog</dc:creator>
		<pubDate>Tue, 08 Nov 2011 07:08:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8510#comment-27401</guid>
		<description>A nice article with some very good commentary below; even the recursive commentary on the commentary was good. More please!</description>
		<content:encoded><![CDATA[<p>A nice article with some very good commentary below; even the recursive commentary on the commentary was good. More please!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: hydorah</title>
		<link>http://www.linux-mag.com/id/8510/#comment-9475</link>
		<dc:creator>hydorah</dc:creator>
		<pubDate>Fri, 29 Apr 2011 13:31:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/?p=8510#comment-9475</guid>
		<description>IOzone is a great tool; I discovered it while doing a university project involving file system performance testing. Great results, and easy to work with!

Spread the word!</description>
		<content:encoded><![CDATA[<p>IOzone is a great tool; I discovered it while doing a university project involving file system performance testing. Great results, and easy to work with!</p>
<p>Spread the word!</p>
]]></content:encoded>
	</item>
</channel>
</rss>