<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Low Cost/Power HPC</title>
	<atom:link href="http://www.linux-mag.com/id/7799/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7799/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: nir</title>
		<link>http://www.linux-mag.com/id/7799/#comment-8422</link>
		<dc:creator>nir</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7799/#comment-8422</guid>
		<description>&lt;p&gt;The numbers in the comparison are mixed up in places -- the price ratio should be 22:1, the performance ratio 7.7:1, and the TDP 7.3:1
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>The numbers in the comparison are mixed up in places &#8212; the price ratio should be 22:1, the performance ratio 7.7:1, and the TDP 7.3:1.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: febbriaggne</title>
		<link>http://www.linux-mag.com/id/7799/#comment-8423</link>
		<dc:creator>febbriaggne</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7799/#comment-8423</guid>
		<description>&lt;p&gt;Quite surprising that the Atom is more efficient than the Xeon.&lt;br /&gt;
Well, I think a quad-core Xeon server is preferable to 22 mini-ITX nodes.&lt;br /&gt;
One of the problems is the cabling for connecting 22 nodes (if you build them independently): you will need a 24-port Ethernet switch, along with the cables, to achieve the same performance as the Xeon, not to mention other problems like space requirements and maintenance.&lt;br /&gt;
Unless someone starts to think about a multi-processor Atom server (a blade with 16 Atom processors, for example), which would be much neater.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Quite surprising that the Atom is more efficient than the Xeon.<br />
Well, I think a quad-core Xeon server is preferable to 22 mini-ITX nodes.<br />
One of the problems is the cabling for connecting 22 nodes (if you build them independently): you will need a 24-port Ethernet switch, along with the cables, to achieve the same performance as the Xeon, not to mention other problems like space requirements and maintenance.<br />
Unless someone starts to think about a multi-processor Atom server (a blade with 16 Atom processors, for example), which would be much neater.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: deadline</title>
		<link>http://www.linux-mag.com/id/7799/#comment-8424</link>
		<dc:creator>deadline</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7799/#comment-8424</guid>
		<description>&lt;p&gt;Thanks Nir!&lt;/p&gt;
&lt;p&gt;I got my numbers mixed up in my HTML table formatting. I fixed&lt;br /&gt;
the table and now it is correct. I also re-checked all the numbers (assuming I hit the right keys on my trusty old HP 15C). The big difference is that with POV the Xeon is 7.7 times faster, not 22 as I first reported. While this does not change some of the cost issues I mentioned, it certainly does put a handful of Atoms within striking distance of the Xeon.&lt;/p&gt;
&lt;p&gt;--Doug
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Thanks Nir!</p>
<p>I got my numbers mixed up in my HTML table formatting. I fixed<br />
the table and now it is correct. I also re-checked all the numbers (assuming I hit the right keys on my trusty old HP 15C). The big difference is that with POV the Xeon is 7.7 times faster, not 22 as I first reported. While this does not change some of the cost issues I mentioned, it certainly does put a handful of Atoms within striking distance of the Xeon.</p>
<p>&#8211;Doug</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: geoffwattles</title>
		<link>http://www.linux-mag.com/id/7799/#comment-8425</link>
		<dc:creator>geoffwattles</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7799/#comment-8425</guid>
		<description>&lt;p&gt;Doug,&lt;/p&gt;
&lt;p&gt;Useful to anyone wishing to pursue this further is to examine the work over the past decade. Significant &lt;a&gt;papers&lt;/a&gt; by W. Feng at Los Alamos justify small clusters. Also, though not realized as yet, was the proposed Atom cluster &lt;a&gt;&quot;Molecule&quot;&lt;/a&gt; at SGI.&lt;/p&gt;
&lt;p&gt;Thanks for covering this. I think you are on to an important subject -- I remember the Cray 1 hiding its refrigeration, 10x its size, in the floor and the benches surrounding it.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Doug,</p>
<p>Useful to anyone wishing to pursue this further is to examine the work over the past decade. Significant <a>papers</a> by W. Feng at Los Alamos justify small clusters. Also, though not realized as yet, was the proposed Atom cluster <a>&#8220;Molecule&#8221;</a> at SGI.</p>
<p>Thanks for covering this. I think you are on to an important subject &#8212; I remember the Cray 1 hiding its refrigeration, 10x its size, in the floor and the benches surrounding it.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: linxmax</title>
		<link>http://www.linux-mag.com/id/7799/#comment-8426</link>
		<dc:creator>linxmax</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7799/#comment-8426</guid>
		<description>&lt;p&gt;Well, I too was wondering whether the idea of disposable HPCs would work with &lt;a href=&quot;http://www.coop-systems.com/&quot;&gt;bcm software&lt;/a&gt;, as you said in your previous post. The main advantage of it, from what I see, is that it can run really cheaply, as it is based on nodes which are pretty cheap anyway. Still, before we conclude that it would work as we expect, we need to take some factors into account. Since the nodes are low cost, chances are they would often get damaged, and the need for constant replacement is going to be a problem.
&lt;/p&gt;
</description>
		<content:encoded><![CDATA[<p>Well, I too was wondering whether the idea of disposable HPCs would work with <a href="http://www.coop-systems.com/">bcm software</a>, as you said in your previous post. The main advantage of it, from what I see, is that it can run really cheaply, as it is based on nodes which are pretty cheap anyway. Still, before we conclude that it would work as we expect, we need to take some factors into account. Since the nodes are low cost, chances are they would often get damaged, and the need for constant replacement is going to be a problem.</p>
]]></content:encoded>
	</item>
</channel>
</rss>