<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Breakthroughs of the Pedestrian Nature</title>
	<atom:link href="http://www.linux-mag.com/id/7168/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7168/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: dmpase</title>
		<link>http://www.linux-mag.com/id/7168/#comment-5840</link>
		<dc:creator>dmpase</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7168/#comment-5840</guid>
		<description>Maybe this is where you are going with this topic, but while 1U cases have an element of flexibility, as you point out, they are thermally quite inefficient. The problem, also as you alluded to, is forcing air across the critical components. Energy is proportional to the square of the velocity, and the smaller cases require air to flow at much higher velocities. By ganging cases together you can use larger, more efficient blowers or fans and save a significant amount of energy. This is one of the major effects exploited by various blade designs and IBM&#039;s iDataPlex.</description>
		<content:encoded><![CDATA[<p>Maybe this is where you are going with this topic, but while 1U cases have an element of flexibility, as you point out, they are thermally quite inefficient. The problem, also as you alluded to, is forcing air across the critical components. Energy is proportional to the square of the velocity, and the smaller cases require air to flow at much higher velocities. By ganging cases together you can use larger, more efficient blowers or fans and save a significant amount of energy. This is one of the major effects exploited by various blade designs and IBM&#8217;s iDataPlex.</p>
]]></content:encoded>
	</item>
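	<!--
	A minimal sketch (not part of the original feed) of the scaling point in the comment above:
	for a fixed volumetric flow of cooling air, the kinetic energy the fans must impart grows with
	the square of the air velocity, so pushing the same flow through a smaller 1U cross-section
	costs disproportionately more energy. The density, flow, and duct areas below are assumed
	round numbers chosen only for illustration.

RHO_AIR = 1.2  # kg/m^3, approximate density of air at room temperature

def kinetic_power(flow_m3_s: float, area_m2: float) -> float:
    """Rate of kinetic energy imparted to the airstream (watts) when a given
    volumetric flow is pushed through a duct with the given free cross-section."""
    velocity = flow_m3_s / area_m2            # m/s
    mass_flow = RHO_AIR * flow_m3_s           # kg/s
    return 0.5 * mass_flow * velocity ** 2    # W, i.e. 1/2 * m_dot * v^2

flow = 0.05                                   # m^3/s of cooling air, held constant
for area in (0.02, 0.01, 0.005):              # progressively smaller free cross-sections
    watts = kinetic_power(flow, area)
    print(f"free area {area:.3f} m^2 gives {watts:.2f} W of kinetic power")

# Halving the free area doubles the velocity and quadruples the kinetic power,
# which is the energy penalty the comment attributes to cramped 1U cases.
	-->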
	<item>
		<title>By: jc2it</title>
		<link>http://www.linux-mag.com/id/7168/#comment-5841</link>
		<dc:creator>jc2it</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7168/#comment-5841</guid>
		<description>I think that air-cooled PCs are fine, but inefficient. In a rack situation, if you could use industry-standard push-to-connect fittings and control the liquid in such a way as not to get everything all messy, I think you could set up a quiet and cool rack-mount unit. &lt;br /&gt;
&lt;br /&gt;
If the servers were all hooked to a central liquid cooling unit with a common collection and delivery system, it would be easier to use that Stirling technology. You would have a hot collection side and a cool delivery side, with the servers at one end and the cooling/refrigeration source at the other. &lt;br /&gt;
&lt;br /&gt;
I think that, with a bit of engineering, a liquid cooling standard could be developed and used in many server rooms/data centers/server closets.</description>
		<content:encoded><![CDATA[<p>I think that air-cooled PCs are fine, but inefficient. In a rack situation, if you could use industry-standard push-to-connect fittings and control the liquid in such a way as not to get everything all messy, I think you could set up a quiet and cool rack-mount unit. </p>
<p>If the servers were all hooked to a central liquid cooling unit with a common collection and delivery system, it would be easier to use that Stirling technology. You would have a hot collection side and a cool delivery side, with the servers at one end and the cooling/refrigeration source at the other. </p>
<p>I think that, with a bit of engineering, a liquid cooling standard could be developed and used in many server rooms/data centers/server closets.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: gcreager</title>
		<link>http://www.linux-mag.com/id/7168/#comment-5842</link>
		<dc:creator>gcreager</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7168/#comment-5842</guid>
		<description>First things first: Back when the Earth was young and recreation was watching the crust cool, the designation I recall was &quot;1-RU&quot; or even &quot;One Rack Unit&quot;.  After a while we all got lazy and went to &quot;1-unit&quot;, then even lazier, and started calling &#039;em &quot;1u&quot;.  Less to say, and less to write/type.  A couple of thoughts come to mind.  &lt;br /&gt;
&lt;br /&gt;
When designing a small satellite some time ago, I found that heat dissipation was a considerable issue.  In microgravity, one must accomplish cooling solely via conductive transfer and what comes close to black-body radiative dissipation.  The satellite design was an external aluminum cube formed by layers, each of which held the electronics boards.  Each board was multi-layer, but the first layer deposited was heavy-gauge copper used as a combination common bus and cooling bus, to which all active devices were thermally coupled.  Note that this took some careful work, as some active devices had to be selected so that their heat-transfer element &lt;em&gt;could&lt;/em&gt; be bonded to a common electrical connection.  This thermal transfer bus was thermally connected to the spaceframe structure, transferring heat to its greater thermal mass.&lt;br /&gt;
&lt;br /&gt;
Taking Doug&#039;s thought here a bit further, thermally bonding the components to the top plate of the case and becoming a little creative in its subsequent cooling (liquid or airflow) could result in better thermal management.  However, this isn&#039;t a trivial design exercise (although it might be accomplished by the interested student).  &lt;br /&gt;
&lt;br /&gt;
I would like to see a &quot;standard&quot; case that could accept some number (4? 8? 16?) of motherboards and use large fans plus some form of liquid cooling to promote a thermal-neutral airflow.  It could get interesting to achieve 1u density, as things like power cabling would have to be engineered so one could make all the connections... I suspect this could be accomplished more readily than the pizza box design: I agree that the more you restrict flow by decreasing the space in a case, the more air you have to move across the components to accomplish adequate cooling.  That said, Microsoft and Google had some very interesting results allowing clusters to reside at ambient temperatures, but I couldn&#039;t do that in Texas most of the time.</description>
		<content:encoded><![CDATA[<p>First things first: Back when the Earth was young and recreation was watching the crust cool, the designation I recall was &#8220;1-RU&#8221; or even &#8220;One Rack Unit&#8221;.  After a while we all got lazy and went to &#8220;1-unit&#8221;, then even lazier, and started calling &#8216;em &#8220;1u&#8221;.  Less to say, and less to write/type.  A couple of thoughts come to mind.  </p>
<p>When designing a small satellite some time ago, I found that heat dissipation was a considerable issue.  In microgravity, one must accomplish cooling solely via conductive transfer and what comes close to black-body radiative dissipation.  The satellite design was an external aluminum cube formed by layers, each of which held the electronics boards.  Each board was multi-layer, but the first layer deposited was heavy-gauge copper used as a combination common bus and cooling bus, to which all active devices were thermally coupled.  Note that this took some careful work, as some active devices had to be selected so that their heat-transfer element <em>could</em> be bonded to a common electrical connection.  This thermal transfer bus was thermally connected to the spaceframe structure, transferring heat to its greater thermal mass.</p>
<p>Taking Doug&#8217;s thought here a bit further, thermally bonding the components to the top plate of the case and becoming a little creative in its subsequent cooling (liquid or airflow) could result in better thermal management.  However, this isn&#8217;t a trivial design exercise (although it might be accomplished by the interested student).  </p>
<p>I would like to see a &#8220;standard&#8221; case that could accept some number (4? 8? 16?) of motherboards and use large fans plus some form of liquid cooling to promote a thermal-neutral airflow.  It could get interesting to achieve 1u density, as things like power cabling would have to be engineered so one could make all the connections&#8230; I suspect this could be accomplished more readily than the pizza box design: I agree that the more you restrict flow by decreasing the space in a case, the more air you have to move across the components to accomplish adequate cooling.  That said, Microsoft and Google had some very interesting results allowing clusters to reside at ambient temperatures, but I couldn&#8217;t do that in Texas most of the time.</p>
]]></content:encoded>
	</item>
</channel>
</rss>