<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Beowulf Is Dead?</title>
	<atom:link href="http://www.linux-mag.com/id/7263/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7263/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: Anil kumar</title>
		<link>http://www.linux-mag.com/id/7263/#comment-173597</link>
		<dc:creator>Anil kumar</dc:creator>
		<pubDate>Tue, 27 Mar 2012 17:35:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-173597</guid>
		<description>Mr. Utsumi, people like you have made our lives easy and happy. Thanks for your great work. You leave a mark on humanity through your work :).</description>
		<content:encoded><![CDATA[<p>Mr. Utsumi, people like you have made our lives easy and happy. Thanks for your great work. You leave a mark on humanity through your work :).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: marklapierre</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6229</link>
		<dc:creator>marklapierre</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6229</guid>
		<description>All this sounds so good.  When are we going to see a good series of articles detailing the construction and provisioning of a &quot;commodity&quot; cluster so that the rest of us can build one too?&lt;br /&gt;
&lt;br /&gt;
There was this nice article a couple of years back on the &quot;Limulus&quot; project http://limulus.basement-supercomputing.com/ that gave some details on the construction of a commodity cluster.  The only mention of software in the article pretty much said that we would have to wait until they figured it out.  We never heard any more about it and the web site doesn&#039;t give us any clue either.</description>
		<content:encoded><![CDATA[<p>All this sounds so good.  When are we going to see a good series of articles detailing the construction and provisioning of a &#8220;commodity&#8221; cluster so that the rest of us can build one too?</p>
<p>There was this nice article a couple of years back on the &#8220;Limulus&#8221; project <a href="http://limulus.basement-supercomputing.com/" rel="nofollow">http://limulus.basement-supercomputing.com/</a> that gave some details on the construction of a commodity cluster.  The only mention of software in the article pretty much said that we would have to wait until they figured it out.  We never heard any more about it and the web site doesn&#8217;t give us any clue either.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: marcopl</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6230</link>
		<dc:creator>marcopl</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6230</guid>
		<description>An updated howto would be nice.</description>
		<content:encoded><![CDATA[<p>An updated howto would be nice.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: jorro</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6231</link>
		<dc:creator>jorro</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6231</guid>
		<description>Since 2001 we have installed approximately 16 HPC Linux clusters in our company, built in-house but with professional server hardware. Some use InfiniBand DDR, but most use standard GigE. We are looking at 10GigE, but it seems IB is still the best choice for latency-sensitive applications.&lt;br /&gt;
&lt;br /&gt;
We use Panasas DirectFlow on all clients in the large HPC environment. We could not survive without it.&lt;br /&gt;
&lt;br /&gt;
HPC Linux clusters are the most important tool for our core business, depth imaging and the like. We could not have done it without HPC. Thanks to Linux and the Beowulf concept, we are more than capable of doing the work.&lt;br /&gt;
&lt;br /&gt;
The biggest issues now are power, cooling and space.&lt;br /&gt;
&lt;br /&gt;
Beowulf is absolutely alive and kicking. I totally agree with the article&#039;s author on that.</description>
		<content:encoded><![CDATA[<p>Since 2001 we have installed approximately 16 HPC Linux clusters in our company, built in-house but with professional server hardware. Some use InfiniBand DDR, but most use standard GigE. We are looking at 10GigE, but it seems IB is still the best choice for latency-sensitive applications.</p>
<p>We use Panasas DirectFlow on all clients in the large HPC environment. We could not survive without it.</p>
<p>HPC Linux clusters are the most important tool for our core business, depth imaging and the like. We could not have done it without HPC. Thanks to Linux and the Beowulf concept, we are more than capable of doing the work.</p>
<p>The biggest issues now are power, cooling and space.</p>
<p>Beowulf is absolutely alive and kicking. I totally agree with the article&#8217;s author on that.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: oouc</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6232</link>
		<dc:creator>oouc</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6232</guid>
		<description>One good howto is worth a thousand general stories.</description>
		<content:encoded><![CDATA[<p>One good howto is worth a thousand general stories.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: highwaytoserfdom</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6233</link>
		<dc:creator>highwaytoserfdom</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6233</guid>
		<description>With four GPGPUs you can get 1 TFLOPS (double precision) or 4 TFLOPS (single precision) of scientific computing on a PC with four graphics cards for well under 10K. Or I suppose you could cluster PlayStation 3s (Cell) for about a quarter of the FLOPS at, what, about 1200 bucks?</description>
		<content:encoded><![CDATA[<p>With four GPGPUs you can get 1 TFLOPS (double precision) or 4 TFLOPS (single precision) of scientific computing on a PC with four graphics cards for well under 10K. Or I suppose you could cluster PlayStation 3s (Cell) for about a quarter of the FLOPS at, what, about 1200 bucks?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: kmarsh</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6234</link>
		<dc:creator>kmarsh</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6234</guid>
		<description>Bait-and-switch headlines betray the reader&#039;s confidence, cause their eyes to glaze over, and send them quickly off to another webzine.</description>
		<content:encoded><![CDATA[<p>Bait-and-switch headlines betray the reader&#8217;s confidence, cause their eyes to glaze over, and send them quickly off to another webzine.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: deadline</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6235</link>
		<dc:creator>deadline</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6235</guid>
		<description>The Limulus project is still moving forward, albeit slowly. Since the article was written, there has been a lot of work on packaging. Some of those involved, including me, believe that there needs to be a low-cost, high-performance, and power-efficient &quot;personal HPC workstation&quot; reference hardware platform before we can develop a portable software platform. The good news is we have a working solution that will provide 8-16 cores using four motherboards, use a single power supply, actively manage power usage, and run quietly next to a desk. Once the hardware is in place, the software will soon follow. There should be an announcement in the May/June time frame. One thing to remember: this is not a &quot;cluster solution&quot; but rather a high-performance workstation.</description>
		<content:encoded><![CDATA[<p>The Limulus project is still moving forward, albeit slowly. Since the article was written, there has been a lot of work on packaging. Some of those involved, including me, believe that there needs to be a low-cost, high-performance, and power-efficient &#8220;personal HPC workstation&#8221; reference hardware platform before we can develop a portable software platform. The good news is we have a working solution that will provide 8-16 cores using four motherboards, use a single power supply, actively manage power usage, and run quietly next to a desk. Once the hardware is in place, the software will soon follow. There should be an announcement in the May/June time frame. One thing to remember: this is not a &#8220;cluster solution&#8221; but rather a high-performance workstation.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: deadline</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6236</link>
		<dc:creator>deadline</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6236</guid>
		<description>Indeed. The problem, however, is that the array of hardware and software has become so vast and fluid that trying to capture it all would require a good-sized book. Hence the need for a hardware reference (see above) where basic concepts can be understood and enhanced.</description>
		<content:encoded><![CDATA[<p>Indeed. The problem, however, is that the array of hardware and software has become so vast and fluid that trying to capture it all would require a good-sized book. Hence the need for a hardware reference (see above) where basic concepts can be understood and enhanced.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: deadline</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6237</link>
		<dc:creator>deadline</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6237</guid>
		<description>Darn. You got me.</description>
		<content:encoded><![CDATA[<p>Darn. You got me.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: reacharavindh</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6238</link>
		<dc:creator>reacharavindh</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6238</guid>
		<description>Now that&#039;s interesting... a cluster including the Cell processor (PS3). The one thing that strikes me immediately is: how can one classify the workloads and process them accordingly? Say, route the more graphics-intensive work to the Cell processor...&lt;br /&gt;
&lt;br /&gt;
Note: I am very new to this field, so please correct me if I am thinking about it the wrong way.</description>
		<content:encoded><![CDATA[<p>Now that&#8217;s interesting&#8230; a cluster including the Cell processor (PS3). The one thing that strikes me immediately is: how can one classify the workloads and process them accordingly? Say, route the more graphics-intensive work to the Cell processor&#8230;</p>
<p>Note: I am very new to this field, so please correct me if I am thinking about it the wrong way.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: utsumi</title>
		<link>http://www.linux-mag.com/id/7263/#comment-6239</link>
		<dc:creator>utsumi</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7263/#comment-6239</guid>
		<description>I recently noticed this blog.&lt;br /&gt;
&lt;br /&gt;
I think that the very first Beowulf-style mini-supercomputer was built by Max Gilliland of Denelcor of Denver: the Heterogeneous Element Processor (HEP), with 50 PDP-11 CPUs stacked up in a metal box. This was built in the early 1970s. (I mention this fact though I recognize Tom Sterling&#039;s naming of the Beowulf computer in 1994.)&lt;br /&gt;
&lt;br /&gt;
BTW, Max designed the world&#039;s largest hybrid computer at Beckman Instruments a decade before this. It was used by Boeing engineers to design the space shuttle, by an M.I.T. scientist to simulate Armstrong&#039;s lunar landing, and by me at Mobil Oil for the extraction of shale oil from shale rock out of the Rocky Mountains.&lt;br /&gt;
&lt;br /&gt;
I introduced this technology to NEC, which then produced the Earth Simulator for US$350 million, housed in an air-conditioned room the size of four tennis courts. It was built to simulate the environment of the entire Earth using real-life climate data from satellites and ocean buoys. Japanese scientists had already completed a forecast of global ocean temperatures for the next 50 years, and a full set of climate predictions was ready by the end of 2002. &quot;Soon, instead of speculating about the possible environmental impact of, say, the Kyoto accord, policymakers will be able to plug its parameters into the virtual Earth, then skip ahead 1,000 years to get a handle on what effect those policies might have. That kind of concrete data could revolutionize environmental science. By digitally cloning the Earth, we might just be able to save it.&quot; (TIME.com, &quot;Best Inventions, 2002&quot;) The Earth Simulator was once the world&#039;s fastest supercomputer, as TIME magazine hailed.&lt;br /&gt;
&lt;br /&gt;
Alas, a recent Japanese newspaper reported that NEC has decided to terminate its supercomputer business. Therefore, I was very glad to read your article saying that the Beowulf mini-supercomputer now has an 82% market share. This in turn may mean that the &quot;supercomputer in a single metal box is DEAD!!&quot;&lt;br /&gt;
&lt;br /&gt;
When the HEP concept was initiated in the early 1970s, I had another idea: spreading CPUs around the world and interconnecting them via a data telecom network (i.e., ARPANET at that time, which would be the Internet nowadays) -- see  and .&lt;br /&gt;
&lt;br /&gt;
Best, Tak Utsumi</description>
		<content:encoded><![CDATA[<p>I recently noticed this blog.</p>
<p>I think that the very first Beowulf-style mini-supercomputer was built by Max Gilliland of Denelcor of Denver: the Heterogeneous Element Processor (HEP), with 50 PDP-11 CPUs stacked up in a metal box. This was built in the early 1970s. (I mention this fact though I recognize Tom Sterling&#8217;s naming of the Beowulf computer in 1994.)</p>
<p>BTW, Max designed the world&#8217;s largest hybrid computer at Beckman Instruments a decade before this. It was used by Boeing engineers to design the space shuttle, by an M.I.T. scientist to simulate Armstrong&#8217;s lunar landing, and by me at Mobil Oil for the extraction of shale oil from shale rock out of the Rocky Mountains.</p>
<p>I introduced this technology to NEC, which then produced the Earth Simulator for US$350 million, housed in an air-conditioned room the size of four tennis courts. It was built to simulate the environment of the entire Earth using real-life climate data from satellites and ocean buoys. Japanese scientists had already completed a forecast of global ocean temperatures for the next 50 years, and a full set of climate predictions was ready by the end of 2002. &#8220;Soon, instead of speculating about the possible environmental impact of, say, the Kyoto accord, policymakers will be able to plug its parameters into the virtual Earth, then skip ahead 1,000 years to get a handle on what effect those policies might have. That kind of concrete data could revolutionize environmental science. By digitally cloning the Earth, we might just be able to save it.&#8221; (TIME.com, &#8220;Best Inventions, 2002&#8221;) The Earth Simulator was once the world&#8217;s fastest supercomputer, as TIME magazine hailed.</p>
<p>Alas, a recent Japanese newspaper reported that NEC has decided to terminate its supercomputer business. Therefore, I was very glad to read your article saying that the Beowulf mini-supercomputer now has an 82% market share. This in turn may mean that the &#8220;supercomputer in a single metal box is DEAD!!&#8221;</p>
<p>When the HEP concept was initiated in the early 1970s, I had another idea: spreading CPUs around the world and interconnecting them via a data telecom network (i.e., ARPANET at that time, which would be the Internet nowadays) &#8212; see  and .</p>
<p>Best, Tak Utsumi</p>
]]></content:encoded>
	</item>
</channel>
</rss>