Storage Convergence: Fibre Channel over Ethernet

Data centers are found in markets spanning multiple industries, including finance, government, education, manufacturing, automotive, health care, biotechnology, and high tech. These data centers host compute, storage, communication, and infrastructure equipment for the digital storage, retrieval, communication, and management of information. The ever-increasing demand for storage, network, and compute resources, driven by the need for instant access to data and the computational requirements that come with it, has in turn led to continued growth in the variety of networks found in the data center. HPC is no exception to this trend. Indeed, HPC often pushes the envelope for storage technologies.

The resulting network proliferation has increased IT costs, since the servers at the edge of the network need multiple adapters and separate cables to reach the various networks. As networks proliferate, demand is satisfied by deploying more servers and adapters, and that growth directly increases the power and cooling load placed on the data center. Power and cooling are among the largest data center costs; over the life of the equipment they can exceed the capital equipment expenditure for many data centers.

The three most common types of networks found in the data center are:


  • Storage Area Networks (SAN)

  • Local Area Networks (LAN)

  • Inter-processor Communication (IPC) Networks

As shown in Figure One, a single server may have some or all of these three networks.

Figure One: Common Server Interconnects

Each network has evolved to meet a specific requirement, delivering specific characteristics for its unique traffic type. The SAN uses Fibre Channel technology to provide a guaranteed in-order, lossless transport for data sent to and from storage devices. The LAN provides the traditional TCP/IP-based Ethernet network for best-effort data communications. The IPC network is typically used in High Performance Computing (HPC) clustered environments, where multiple servers communicate with each other using low-latency messaging.

One of the biggest challenges for the IT administrator is reducing data center costs while maintaining the Service Level Agreements (SLAs) and performance expected of the network. Many technologies that have attempted to solve this problem have either been relegated to niche markets or have proven inadequate and been discarded.

A promising solution is emerging. The relevant standards bodies (IEEE, ANSI, and IETF) are each defining, within their respective domains, pieces of a solution that is expected to gain industry-wide adoption. A protocol and frame format called Fibre Channel over Ethernet (FCoE) is being defined by the ANSI T11 committee. FCoE must be carried over an improved, or enhanced, Ethernet transport referred to as Converged Enhanced Ethernet (CEE); the Data Center Bridging (DCB) task group in the IEEE is developing a number of specifications for this enhanced transport.

With this new protocol comes a new class of adapter, the Converged Network Adapter (CNA). The basic premise of the CNA is to allow multiple networks to reach a server over the same physical cable using a single adapter. A single physical connection from the CNA carrying SAN, LAN, and IPC traffic can lower the Total Cost of Ownership by consolidating onto a single network infrastructure from the server edge to the access layer, managed through common hardware and software platforms. Achieving such a converged I/O medium requires a widely adopted transport protocol.
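
As a purely illustrative sketch (the priority values below are common conventions and assumptions for this example, not something this article or the standards mandate), the three traffic classes sharing one converged link could be described like this in Python:

    # Illustrative only: one CNA link partitioned by traffic class.
    # Priority 3 for FCoE is a widely used convention, not a requirement;
    # the other values are assumptions made for this sketch.
    converged_link = {
        "LAN (TCP/IP)":      {"priority": 0, "lossless": False},
        "SAN (FCoE)":        {"priority": 3, "lossless": True},
        "IPC (low latency)": {"priority": 4, "lossless": True},
    }

    for traffic, cls in converged_link.items():
        print(f"{traffic:20} priority={cls['priority']} lossless={cls['lossless']}")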

Fibre Channel over Ethernet Introduction

Fibre Channel over Ethernet (FCoE) is, in essence, a mechanism for transporting Fibre Channel over a DCB-enabled Ethernet infrastructure. A DCB-enabled Ethernet network differs from the Ethernet networks widely deployed today for transporting TCP/IP. FCoE avoids the complications of prior technologies that were designed to collapse infrastructures and become the single ubiquitous network technology: it does not require “fork-lifting” the network.

To transport an FCoE frame, a complete Fibre Channel frame is built exactly as it would be for a traditional Fibre Channel network. The FCoE layer encapsulates this fully formed Fibre Channel frame and prepends a defined frame header before the frame is sent on the CEE link. Figure Two and Figure Three illustrate the FCoE protocol stack and the FCoE frame format, respectively.

Figure Two: FCoE Protocol Stack. FC-2, FC-3, FC-4 are various Fibre Channel services.

FCoE encodes the Start of Frame (SOF) delimiter and carries it in the FCoE header prepended to the standard Fibre Channel frame; the encoded End of Frame (EOF) delimiter is placed at the end of the Fibre Channel frame before the frame is handed to the CEE layer. An Ethernet header is added and a Cyclic Redundancy Check (CRC) is computed before transmission, while the Fibre Channel frame itself is carried unmodified as data. Whether a frame is FCoE or TCP/IP is identified by the Ethernet type field.

Figure Three: FCoE Frame Format
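
As a rough sketch of the encapsulation just described, the following Python assembles an FCoE frame around an already-built Fibre Channel frame. The 14-byte FCoE header layout and the particular SOF/EOF code values follow the T11 frame format but should be treated as illustrative assumptions here, and the placeholder Fibre Channel frame and CRC are stand-ins rather than real storage traffic:

    import struct
    import zlib  # CRC-32 serves only as a stand-in for the Fibre Channel CRC

    FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

    def build_fcoe_frame(dst_mac: bytes, src_mac: bytes,
                         fc_frame: bytes, sof: int, eof: int) -> bytes:
        # Ethernet header: destination MAC, source MAC, FCoE EtherType
        eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        # FCoE header: version plus reserved bits (all zero here), then the
        # encoded Start of Frame byte
        fcoe_hdr = bytes(13) + struct.pack("!B", sof)
        # The Fibre Channel frame travels unmodified as data, followed by the
        # encoded End of Frame byte and reserved padding
        trailer = struct.pack("!B", eof) + bytes(3)
        return eth_hdr + fcoe_hdr + fc_frame + trailer

    # Placeholder: a 24-byte FC header plus payload would normally come from
    # the Fibre Channel stack, with the FC CRC computed over the whole frame.
    payload = b"example payload"
    fc_frame = bytes(24) + payload + struct.pack("!I", zlib.crc32(payload))

    frame = build_fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",   # example destination MAC
                             b"\x02\x00\x00\x00\x00\x01",   # example source MAC
                             fc_frame, sof=0x2E, eof=0x41)  # SOF/EOF codes illustrative
    print(len(frame), "bytes before the Ethernet FCS, which the NIC appends")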

An FCoE switch exposes both CEE and Fibre Channel ports and makes forwarding decisions based on the Fibre Channel headers carried inside the CEE frame, much as a basic Fibre Channel switch forwards Fibre Channel frames. The difference is that an FCoE switch supports communication between any of its ports, Fibre Channel or CEE. In addition, because its CEE ports connect to hosts, an FCoE switch can also receive and process ordinary TCP/IP traffic, acting as a standard IEEE 802.1Q bridge between all CEE ports in this Layer 2 mode. The operation is very similar to a multi-layer Internet Protocol (IP) switch performing basic Ethernet switching at Layer 2 (L2) as well as IP routing at Layer 3 between its ports: an FCoE switch treats Fibre Channel traffic as a Layer 3 network protocol, and no LAN traffic is forwarded to Fibre Channel ports in the L2 mode.
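
The forwarding decision itself can be sketched in a few lines, continuing in Python with the same assumed header offsets as the frame example above. The table structures are made up for illustration, and a real FCoE switch does considerably more (fabric login handling, zoning, and so on):

    FCOE_ETHERTYPE = 0x8906

    def ethertype(frame: bytes) -> int:
        return int.from_bytes(frame[12:14], "big")

    def fc_d_id(frame: bytes) -> int:
        # The FC destination ID (D_ID) occupies bytes 1-3 of the FC header,
        # which here follows a 14-byte Ethernet header and a 14-byte FCoE header
        fc_header = frame[28:52]
        return int.from_bytes(fc_header[1:4], "big")

    def forward(frame: bytes, fc_table: dict, mac_table: dict):
        if ethertype(frame) == FCOE_ETHERTYPE:
            # Storage traffic: forward on the Fibre Channel destination ID,
            # just as a native Fibre Channel switch would; the egress port
            # may be a CEE port or a native Fibre Channel port
            return fc_table.get(fc_d_id(frame))
        # Anything else: ordinary IEEE 802.1Q Layer 2 bridging, restricted to
        # the CEE ports; LAN traffic is never forwarded to Fibre Channel ports
        return mac_table.get(frame[0:6])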

An FCoE switch can be connected to other FCoE switches via its CEE ports, and it can also be connected to any existing Fibre Channel switch via its Fibre Channel ports.

Comments on "Storage Convergence: Fibre Channel over Ethernet"

kensandars

Beware the vendor of a legacy technology which starts spouting catch phrases such as “based on established technology”, “derived from mature software”, and “uniquely positioned to enable heterogeneous deployments”. This is marketing blurb and needs to be recognized as such.

With 10GbE you can roll out iSCSI/TCP/IP now and have convergence without buying funky unproven switching technology. Alternatively if your company is heading down the Infiniband path then consider iSCSI/iSER/IB as your storage protocol (or SRP/IB if you prefer).

One of the Fibre Channel requirements is a reliable and predictable physical network. This was the whole reason to invent FC in the first place. Does ethernet technology (when augmented by the NEW features in the article) now provide this? What happens when limits are reached in these new switches which must be deployed in order to allow the FCoE traffic to flow? How does this “mature software” react to the different characteristics of the ethernet fabric? How is your company’s financial officer going to react when you tell her these bleeding edge switches must be purchased (at what cost?) rather than submitting a request for more of the same switches that were purchased last time?

ibh

Another protocol, ATAoE, was very interesting. Simple enough to be implemented in hardware, it was an inventive solution that leveraged the strengths of every technology it involved. FCoE, in comparison, seems like it was levered (e.g., with a prybar) to fit the involved technologies. What’s next, IB over WiFi? Really, I’m at a loss to understand what problem this is destined to solve.

robaronson

If you have worked in the storage arena before, you’d know there are many challenges to be overcome when building out a new storage environment. Many of today’s protocols have limitations such as security and reliability in iSCSI. If you are running a mission critical data base you can’t afford to lose a packet due to congestion or latency in your switch fabric. Fibre Channel prevents these problems from disturbing data flow, but at a significant cost in both hardware and expertise. As an installer of SAN equipment, I can attest to the difficulty people have in configuring FC fabrics.

On the other hand, when customers build iSCSI storage systems they are often too casual with the deployments and give up a significant level of performance and reliability by using generic ethernet adapters and shared switches that are usually busy doing other tasks.

With the advent of the new FCoE and CEE protocols, we can reap the benefits of using a well understood infrastructure like ethernet while removing some of the limitations of existing protocols like iSCSI.

kensandars

Quote: “… limitations such as security and reliability in iSCSI”.
Do you intend to support this ambit claim with specific examples? Are you concerned about the protocol or certain implementations?

Quote: “… I can attest to the difficulty people have in configuring FC fabrics.”
How does FCoE address these difficulties?

Quote: “… often too casual with the deployments …”
Why are customers going to be any less casual with deploying FCoE?

Quote: “If you are running a mission critical data base you can’t afford to lose a packet due to congestion or latency in your switch fabric”.
That’s plain fear-mongering. Packet loss (which happens significantly less frequently than some people try to suggest) is handled by different layers in different protocols. The reality is that unless there is catastrophic system failure the data transfer is delayed (either queued or retransmitted). Customers pay large money for HA solutions to protect data and to ensure continuity of services in the case of system failure.

Quote: “well understood infrastructure like ethernet”
Now your argument implies that ethernet is good enough to transport mission critical data. ;-) But it’s not ethernet; it’s Converged Enhanced Ethernet. It’s a new hack to an existing protocol layer which requires new network hardware to support it. How will non-CEE switches handle the accidental presence of the new (hence unrecognised) frame type? What will customers have to learn to configure this new infrastructure? How much will they have to pay?
