<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Storage Convergence: Fibre Channel over Ethernet</title>
	<atom:link href="http://www.linux-mag.com/id/7234/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.linux-mag.com/id/7234/</link>
	<description>Open Source, Open Standards</description>
	<lastBuildDate>Sat, 05 Oct 2013 13:48:18 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1</generator>
	<item>
		<title>By: kensandars</title>
		<link>http://www.linux-mag.com/id/7234/#comment-6135</link>
		<dc:creator>kensandars</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7234/#comment-6135</guid>
		<description>Beware the vendor of a legacy technology which starts spouting catch phrases such as &quot;based on established technology&quot;, &quot;derived from mature software&quot;, and &quot;uniquely positioned to enable heterogeneous deployments&quot;. This is marketing blurb and needs to be recognized as such.&lt;br /&gt;
&lt;br /&gt;
With 10GbE you can roll out iSCSI/TCP/IP now and have convergence without buying funky unproven switching technology. Alternatively if your company is heading down the Infiniband path then consider iSCSI/iSER/IB as your storage protocol (or SRP/IB if you prefer).&lt;br /&gt;
&lt;br /&gt;
One of the Fibre Channel requirements is a reliable and predictable physical network. This was the whole reason to invent FC in the first place. Does ethernet technology (when augmented by the NEW features in the article) now provide this? What happens when limits are reached in these new switches which must be deployed in order to allow the FCoE traffic to flow? How does this &quot;mature software&quot; react to the different characteristics of the ethernet fabric? How is your company&#039;s financial officer going to react when you tell her these bleeding edge switches must be purchased (at what cost?) rather than submitting a request for more of the same switches that were purchased last time?</description>
		<content:encoded><![CDATA[<p>Beware the vendor of a legacy technology which starts spouting catch phrases such as &#8220;based on established technology&#8221;, &#8220;derived from mature software&#8221;, and &#8220;uniquely positioned to enable heterogeneous deployments&#8221;. This is marketing blurb and needs to be recognized as such.</p>
<p>With 10GbE you can roll out iSCSI/TCP/IP now and have convergence without buying funky unproven switching technology. Alternatively if your company is heading down the Infiniband path then consider iSCSI/iSER/IB as your storage protocol (or SRP/IB if you prefer).</p>
<p>One of the Fibre Channel requirements is a reliable and predictable physical network. This was the whole reason to invent FC in the first place. Does ethernet technology (when augmented by the NEW features in the article) now provide this? What happens when limits are reached in these new switches which must be deployed in order to allow the FCoE traffic to flow? How does this &#8220;mature software&#8221; react to the different characteristics of the ethernet fabric? How is your company&#8217;s financial officer going to react when you tell her these bleeding edge switches must be purchased (at what cost?) rather than submitting a request for more of the same switches that were purchased last time?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: ibh</title>
		<link>http://www.linux-mag.com/id/7234/#comment-6136</link>
		<dc:creator>ibh</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7234/#comment-6136</guid>
		<description>Another protocol, ATAoE, was very interesting.  Simple enough to be implemented in hardware, it was an inventive solution that leveraged the strengths of every technology it involved.  FCoE, in comparison, seems like it was levered (eg, with a prybar) to fit the involved technologies.  What&#039;s next, IB over WiFi?  Really, I&#039;m at a loss to understand what problem this is destined to solve.</description>
		<content:encoded><![CDATA[<p>Another protocol, ATAoE, was very interesting.  Simple enough to be implemented in hardware, it was an inventive solution that leveraged the strengths of every technology it involved.  FCoE, in comparison, seems like it was levered (eg, with a prybar) to fit the involved technologies.  What&#8217;s next, IB over WiFi?  Really, I&#8217;m at a loss to understand what problem this is destined to solve.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: robaronson</title>
		<link>http://www.linux-mag.com/id/7234/#comment-6137</link>
		<dc:creator>robaronson</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7234/#comment-6137</guid>
		<description>If you have worked in the storage arena before you&#039;d know there are many challenges to be overcome when building out a new storage environment. Many of today&#039;s protocols have limitations such as security and reliability in iSCSI. If you are running a mission critical database you can&#039;t afford to lose a packet due to congestion or latency in your switch fabric. Fibre Channel prevents these problems from disturbing data flow, but at a significant cost in both hardware and expertise. As an installer of SAN equipment I can attest to the difficulty people have in configuring FC fabrics. &lt;br /&gt;
&lt;br /&gt;
On the other hand, when customers build iSCSI storage systems they are often too casual with the deployments and give up a significant level of performance and reliability by using generic ethernet adapters and shared switches that are usually busy doing other tasks.&lt;br /&gt;
&lt;br /&gt;
With the advent of the new FCoE and CEE protocols we can reap the benefits of using a well understood infrastructure like ethernet while removing some of the limitations of existing protocols like iSCSI.</description>
		<content:encoded><![CDATA[<p>If you have worked in the storage arena before you&#8217;d know there are many challenges to be overcome when building out a new storage environment. Many of today&#8217;s protocols have limitations such as security and reliability in iSCSI. If you are running a mission critical database you can&#8217;t afford to lose a packet due to congestion or latency in your switch fabric. Fibre Channel prevents these problems from disturbing data flow, but at a significant cost in both hardware and expertise. As an installer of SAN equipment I can attest to the difficulty people have in configuring FC fabrics. </p>
<p>On the other hand, when customers build iSCSI storage systems they are often too casual with the deployments and give up a significant level of performance and reliability by using generic ethernet adapters and shared switches that are usually busy doing other tasks.</p>
<p>With the advent of the new FCoE and CEE protocols we can reap the benefits of using a well understood infrastructure like ethernet while removing some of the limitations of existing protocols like iSCSI.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: kensandars</title>
		<link>http://www.linux-mag.com/id/7234/#comment-6138</link>
		<dc:creator>kensandars</dc:creator>
		<pubDate>Wed, 30 Nov -0001 00:00:00 +0000</pubDate>
		<guid isPermaLink="false">http://www.linux-mag.com/id/7234/#comment-6138</guid>
		<description>Quote: &quot;... limitations such as security and reliability in iSCSI&quot;.&lt;br /&gt;
Do you intend to support this ambit claim with specific examples? Are you concerned about the protocol or certain implementations?&lt;br /&gt;
&lt;br /&gt;
Quote: &quot;... I can attest to the difficulty people have in configuring FC fabrics.&quot;&lt;br /&gt;
How does FCoE address these difficulties?&lt;br /&gt;
&lt;br /&gt;
Quote: &quot;... often too casual with the deployments ...&quot;&lt;br /&gt;
Why are customers going to be any less casual with deploying FCoE?&lt;br /&gt;
&lt;br /&gt;
Quote: &quot;If you are running a mission critical database you can&#039;t afford to lose a packet due to congestion or latency in your switch fabric&quot;.&lt;br /&gt;
That&#039;s plain fear-mongering. Packet loss (which happens significantly less frequently than some people try to suggest) is handled by different layers in different protocols. The reality is that unless there is catastrophic system failure the data transfer is delayed (either queued or retransmitted). Customers pay large money for HA solutions to protect data and to ensure continuity of services in the case of system failure.&lt;br /&gt;
&lt;br /&gt;
Quote: &quot;well understood infrastructure like ethernet&quot;&lt;br /&gt;
Now your argument implies that ethernet is good enough to transport mission critical data. ;-) But it&#039;s not ethernet; it&#039;s Converged Enhanced Ethernet. It&#039;s a new hack to an existing protocol layer which requires new network hardware to support it. How will non-CEE switches handle the accidental presence of the new (hence unrecognised) frame type? What will customers have to learn to configure this new infrastructure? How much will they have to pay?</description>
		<content:encoded><![CDATA[<p>Quote: &#8220;&#8230; limitations such as security and reliability in iSCSI&#8221;.<br />
Do you intend to support this ambit claim with specific examples? Are you concerned about the protocol or certain implementations?</p>
<p>Quote: &#8220;&#8230; I can attest to the difficulty people have in configuring FC fabrics.&#8221;<br />
How does FCoE address these difficulties?</p>
<p>Quote: &#8220;&#8230; often too casual with the deployments &#8230;&#8221;<br />
Why are customers going to be any less casual with deploying FCoE?</p>
<p>Quote: &#8220;If you are running a mission critical database you can&#8217;t afford to lose a packet due to congestion or latency in your switch fabric&#8221;.<br />
That&#8217;s plain fear-mongering. Packet loss (which happens significantly less frequently than some people try to suggest) is handled by different layers in different protocols. The reality is that unless there is catastrophic system failure the data transfer is delayed (either queued or retransmitted). Customers pay large money for HA solutions to protect data and to ensure continuity of services in the case of system failure.</p>
<p>Quote: &#8220;well understood infrastructure like ethernet&#8221;<br />
Now your argument implies that ethernet is good enough to transport mission critical data. ;-) But it&#8217;s not ethernet; it&#8217;s Converged Enhanced Ethernet. It&#8217;s a new hack to an existing protocol layer which requires new network hardware to support it. How will non-CEE switches handle the accidental presence of the new (hence unrecognised) frame type? What will customers have to learn to configure this new infrastructure? How much will they have to pay?</p>
]]></content:encoded>
	</item>
</channel>
</rss>