Questions on the 2017 Ethernet Roadmap for Networked Storage

Fred Zhang

Jan 9, 2017


Last month, experts from Dell EMC, Intel, Mellanox and Microsoft convened to take a look ahead at what’s in store for Ethernet Networked Storage this year. It was a fascinating discussion of anticipated updates. If you missed the webcast, “2017 Ethernet Roadmap for Networked Storage,” it’s now available on-demand. We had a lot of great questions during the live event and we ran out of time to address them all, so here are answers from our speakers.

Q. What’s the future of twisted pair cable? What is the new speed being developed with twisted pair cable?

A. By twisted pair I assume you mean UTP Cat 5, 6, 7, etc. The problem going forward with high-speed signaling is that the U in UTP stands for un-shielded, and the signal radiates off the wire very quickly. At 25G and 50G this is a real problem and forces the line-card end to have a big, power-consuming and costly chip to dig the signal out of the noise. Anything can be done, but at what cost? 25GBASE-T is being developed, but the reach is somewhere around 30 meters. Cost, size and power consumption are all going up and reach is going down – all opposite to the trends in modern high-speed data centers. BASE-T will always have a place for those applications that don’t need the faster rates.

Q. What do you think of RCx standards and cables?

A. So far, Amphenol, JAE and Volex are the suppliers who are members of the MSA. Very few companies have announced or discussed RCx. In addition to a smaller connector, not having an EEPROM eliminates steps in cable assembly manufacturing, which helps lower cost compared to traditional DAC cabling. The biggest advantage of RCx is that it can help eliminate bulky breakout cables within a rack, since a single RCx4 receptacle can accept a number of combinations of single-lane, 2-lane or 4-lane cable with the same connector on the host. RCx ports can be connected to existing QSFP/SFP infrastructure with appropriate cabling. It remains to be seen, however, whether it becomes a standard, popular product or remains a custom solution.

Q. How long does AOC normally reach, 3m or 30m?  

A. AOCs pick up where DACs drop off, at about 3m. The most popular reaches are 3, 5 and 10m, and volume drops rapidly beyond 15, 20, 30, 50 and 100m. We are seeing Ethernet-connected HDDs at 2.5GbE x 2 ports, and Ceph touting this solution. This seems to play well into the 25/50/100GbE standards with the massive parallelism possible.

Q. How do we scale PCIe lanes to support NVMe drives to scale, and to replace the capacity we see with storage arrays populated completely with HDDs?

A. With the advent of PCIe Gen 4, the per-lane rate of PCIe is going from 8 GT/s to 16GT/s. Scaling of PCIe is already happening.
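
As a rough illustration of what those per-lane rates mean, here is a sketch (not from the webcast) that assumes the 128b/130b encoding used by PCIe Gen 3 and Gen 4 and ignores protocol overhead:

```python
# Back-of-the-envelope PCIe lane bandwidth, assuming 128b/130b encoding
# (used by PCIe Gen 3 and Gen 4). Illustrative numbers only.

def pcie_lane_gbps(gt_per_s: float) -> float:
    """Approximate usable bandwidth per lane, per direction, in Gb/s."""
    return gt_per_s * (128 / 130)

for gen, rate in {"Gen 3": 8.0, "Gen 4": 16.0}.items():
    per_lane = pcie_lane_gbps(rate)      # Gb/s per lane, per direction
    x4_slot = per_lane * 4 / 8           # GB/s for an x4 slot (typical NVMe drive)
    print(f"PCIe {gen}: ~{per_lane:.1f} Gb/s per lane, ~{x4_slot:.1f} GB/s per x4 slot")
```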

Q. How many NVMe drives does it take to saturate 100GbE?

A. 3 or 4 depending on individual drives.
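
A quick back-of-the-envelope check of that answer, assuming an illustrative per-drive sequential read rate of about 3.2 GB/s (a typical PCIe Gen 3 x4 NVMe SSD figure; your drives will differ):

```python
import math

# Rough check of the "3 or 4 drives" answer. The per-drive throughput is an
# assumption, not a measurement from the webcast.
link_gbps = 100                      # 100GbE line rate
link_gbytes = link_gbps / 8          # = 12.5 GB/s of raw link bandwidth
drive_gbytes = 3.2                   # assumed per-drive sequential read, GB/s

drives = math.ceil(link_gbytes / drive_gbytes)
print(f"~{drives} drives to fill a 100GbE link")   # -> ~4
```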

Q. How about the reliability of Ethernet? A lot of people think Fibre Channel has better reliability than Ethernet.

A. It’s true that Fibre Channel is a lossless protocol. Ethernet frames are sometimes dropped by the switch; however, network storage using TCP has a built-in error-recovery facility. TCP was designed at a time when networks were less robust than today, and Ethernet networks these days are far more reliable.

Q. Do the 2.5GbE and 5GbE refer to the client side Ethernet port or the server Ethernet port?

A. It can exist on both the client side and the server side Ethernet port.

Q. Are there any 25GbE or 50GbE NICs available on the market?

A. Yes, there are many that are on the market from a number of vendors, including Dell, Mellanox, Intel, and a number of others.

Q. Commonly used Ethernet speeds are either 10GbE or 40GbE. Do the new 25GbE and 50GbE require new switches?

A. Yes, you need new switches to support 25GbE and 50GbE. This is, in part, because the SerDes rate per lane at 25 and 50GbE is 25Gb/s, which is not supported by 10GbE and 40GbE switches, whose maximum SerDes rate is 10Gb/s.

Q. With a certain number of SerDes lanes coming off the switch ASIC, which would you prefer to use, 100G or 40G, assuming both are at the same cost?

A. Certainly 100G. You get 2.5X the bandwidth for the same cost under the assumptions made in the question.
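
A minimal sketch of the lane math behind that 2.5X figure, assuming four SerDes lanes per port in both cases:

```python
# Same four SerDes lanes, different per-lane rates (illustrative).
lanes = 4
per_lane_rate = {"40GbE": 10, "100GbE": 25}   # Gb/s per SerDes lane

bandwidth = {name: lanes * rate for name, rate in per_lane_rate.items()}
print(bandwidth)                                          # {'40GbE': 40, '100GbE': 100}
print(bandwidth["100GbE"] / bandwidth["40GbE"], "x the bandwidth per ASIC port")  # 2.5
```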

Q. Are there any 100G/200G/400G switches and modulation available now?

A. There are many 100G Ethernet switches available on the market today, including Dell’s Z9100 and S6100, Mellanox’s SN2700, and a number of others. The 200G and 400G IEEE standards are not yet complete. I’m sure all switch vendors will come out with switches supporting those rates in the future.

Q. What does lambda mean?

A. Lambda (λ) is the symbol for wavelength.

Q. Is the 50GbE standard ratified now?

A. IEEE 802.3 just recently started development of a 50GbE standard based upon a single-lane 50 Gb/s physical layer interface. That standard is probably about 2 years away from ratification. The 25G Ethernet Consortium has a ratified specification for 50GbE based upon a dual-lane 25 Gb/s physical layer interface.

Q. Are there any parallel options for using 2 or 4 lanes like in 128GFCp?

A. Many Ethernet specifications are based upon parallel options. 10GBASE-T is based upon 4 twisted-pairs of copper cabling. 100GBASE-SR4 is based upon 4 lanes (8 fibers) of multimode fiber. Even the industry MSA for 100G over CWDM4 is based upon four wavelengths on a duplex single-mode fiber. In some instances, the parallel option is based upon the additional medium (extra wires or fibers) but with fiber optics, parallel can be created by using different wavelengths that don’t interfere with each other.

 

 


SNIA Storage Developer Conference-The Knowledge Continues

khauser

Oct 13, 2016

SNIA's 18th Storage Developer Conference is officially a success, with 124 general and breakout sessions; Cloud Interoperability, Kinetic Storage, and SMB3 plugfests; ten Birds-of-a-Feather sessions; and amazing networking among 450+ attendees. Sessions on NVMe over Fabrics won the title of most attended, but Persistent Memory, Object Storage, and Performance were right behind. Many thanks to the SDC 2016 Sponsors, who engaged attendees in exciting technology discussions.

For those not familiar with SDC, this technical industry event is designed for a variety of storage technologists at various levels, from developers to architects to product managers and more. And, true to SNIA's commitment to educating the industry on current and future disruptive technologies, SDC content is now available to all - whether you attended or not - for download and viewing. You'll want to stream keynotes from Citigroup, Toshiba, DSSD, Los Alamos National Labs, Broadcom, Microsemi, and Intel - they're available now on demand on SNIA's YouTube channel, SNIAVideo. All SDC presentations are now available for download, and over the next few months you can continue to download SDC podcasts, which combine audio and slides. The first podcast from SDC 2016 - on hyperscalers - as well as all 2015 SDC podcasts are available here, and more will be available in the coming weeks.

SNIA thanks all its members and colleagues who contributed to make SDC a success! A special thanks goes out to the SNIA Technical Council, a select group of acknowledged industry experts who work to guide SNIA technical efforts. In addition to driving the agenda and content for SDC, the Technical Council oversees and manages SNIA Technical Work Groups, reviews architectures submitted by Work Groups, and is the SNIA's technical liaison to standards organizations. Learn more about these visionary leaders at http://www.snia.org/about/organization/tech_council.

And finally, don't forget to mark your calendars now for SDC 2017 - September 11-14, 2017, again at the Hyatt Regency Santa Clara. Watch for the Call for Presentations to open in February 2017.


Securing Fibre Channel Storage

khauser

Jul 21, 2016

by Eric Hibbard, SNIA Storage Security TWG Chair, and SNIA Storage Security TWG team members

Fibre Channel is often viewed as a specialized form of networking that lives within data centers and that neither has, nor requires, special security protections. Neither of these assumptions is true, but finding the appropriate details to secure Fibre Channel infrastructure can be challenging.

The ISO/IEC 27040:2015 Information technology - Security techniques - Storage security standard provides detailed technical guidance for securing storage systems and ecosystems. However, while the coverage of this standard is quite broad, it lacks details for certain important topics. ISO/IEC 27040:2015 addresses storage security risks and threats at a high level. This blog is written in the context of Fibre Channel. The following list is a summary of the major threats that may confront Fibre Channel implementations and deployments.
  1. Storage Theft: Theft of storage media or storage devices can be used to access data as well as to deny legitimate use of the data.
  2. Sniffing Storage Traffic: Storage traffic on dedicated storage networks or shared networks can be sniffed via passive network taps or traffic monitoring, revealing data, metadata, and storage protocol signaling. If the sniffed traffic includes authentication details, it may be possible for the attacker to replay (retransmit) this information in an attempt to escalate the attack.
  3. Network Disruption: Regardless of the underlying network technology, any software or congestion disruption to the network between the user and the storage system can degrade or disable storage.
  4. WWN Spoofing: An attacker gains access to a storage system in order to access/modify/deny data or metadata.
  5. Storage Masquerading: An attacker inserts a rogue storage device in order to access/modify/deny data or metadata supplied by a host.
  6. Corruption of Data: Accidental or intentional corruption of data can occur when the wrong hosts gain access to storage.
  7. Rogue Switch: An attacker inserts a rogue switch in order to perform reconnaissance on the fabric (e.g., configurations, policies, security parameters, etc.) or facilitate other attacks.
  8. Denial of Service (DoS): An attacker can disrupt, block or slow down access to data in a variety of ways by flooding storage networks with error messages or other approaches in an attempt to overload specific systems within the network.
A core element of Fibre Channel security is the ANSI INCITS 496-2012, Information Technology - Fibre Channel - Security Protocols - 2 (FC-SP-2) standard, which defines protocols to authenticate Fibre Channel entities, set up session encryption keys, negotiate parameters to ensure frame-by-frame integrity and confidentiality, and define and distribute policies across a Fibre Channel fabric. It is also worth noting that FC-SP-2 includes compliance elements, which is somewhat unique for FC standards. Fibre Channel fabrics may be deployed across multiple, distantly separated sites, which make it critical that security services be available to assure consistent configurations and proper access controls.

A new whitepaper, one in a series from SNIA that addresses various elements of storage security, is intended to leverage the guidance in the ISO/IEC 27040 standard and enhance it with a specific focus on Fibre Channel (FC) security.   To learn more about security and Fibre Channel, please visit www.snia.org/security and download the Storage Security: Fibre Channel Security whitepaper.

And mark your calendar for presentations and discussions on this important topic at the upcoming SNIA Data Storage Security Summit, September 22, 2016, at the Hyatt Regency Santa Clara, CA. Registration is complimentary - go to http://www.snia.org/dss-summit for details on how you can attend and get involved in the conversation.


Q&A on Exactly How iSCSI has Evolved

David Fair

Jun 3, 2016


Our recent SNIA ESF Webcast, “The Evolution of iSCSI” drew a big and diverse group of attendees. From beginners looking for iSCSI basics, to experts with a lot of iSCSI deployment experience, there were plenty of good questions. Our presenters, Andy Banta and Fred Knight, did a great job answering as many as they could during the live event, but we didn’t have time to get to them all. So here are answers to them all. And by the way, if you missed the Webcast, it’s now available on-demand.

Q. What are the top 3 reasons to choose iSCSI over FC SAN?

A. 1. Use of commodity equipment and protocols. It means that you don’t have to set up a completely separate network. It means you don’t have to buy separate HBAs. 2. Inherent networking capability. Built on top of TCP/IP, it benefits from any networking technology to come along. These include routing, tunneling, authentication, encryption, etc. 3. Ease of automation and configuration (see the sketch below). In its simplest form, an iSCSI host only needs to know the IP address of the target system. In more complex systems, hosts and storage provide APIs to allow automation through scripting or management tools.
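
Here is a minimal sketch of what point 3 can look like in practice. The REST endpoint, field names, and payload layout are hypothetical; real arrays and hosts each expose their own APIs or CLIs. The point is only that a LUN can be provisioned and mapped with a few calls, and the host then needs nothing more than the returned portal IP and target IQN to log in:

```python
# Hypothetical example of iSCSI provisioning automation. The ARRAY_API URL,
# endpoints, and JSON fields are illustrative, not a real product API.
import requests  # third-party HTTP library

ARRAY_API = "https://array.example.com/api/v1"   # hypothetical management address

def provision_iscsi_lun(name: str, size_gb: int, initiator_iqn: str) -> dict:
    """Create a LUN, map it to an initiator, and return what the host needs to log in."""
    lun = requests.post(f"{ARRAY_API}/luns",
                        json={"name": name, "size_gb": size_gb}).json()
    requests.post(f"{ARRAY_API}/luns/{lun['id']}/mappings",
                  json={"initiator_iqn": initiator_iqn})
    # The host-side login only needs this portal IP and target IQN.
    return {"portal": lun["portal_ip"], "target_iqn": lun["target_iqn"]}
```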

Q. Please comment on why SCSI went from being a widely used protocol for all sorts of devices to being focused as only essentially a storage protocol?

A. SCSI was originally designed as both a protocol and a bus (original Parallel SCSI). Because there were no other busses, the SCSI bus did it all; disks, tapes, scanners, printers, Optical (CDs), media changers, etc. As other busses came onto the market (think USB), many of those devices moved to the new bus (CDs, printers, scanners, etc.) Commodity devices used commodity busses (IDE, SATA, USB), and enterprise devices used enterprise busses (FC, SAS); and so, disks, tapes, and media changers mostly stayed on SCSI.

The name SCSI can be confusing for some, as the term originally was used for both the SCSI protocol and the SCSI bus. The term for the SCSI protocol is all that remains today; the SCSI bus (the old SCSI parallel bus) is no longer in wide use. Today, the FC bus, or the SAS bus, or the SoP bus, or the SRP bus are used to carry the SCSI protocol. The SCSI Architecture Model (SAM) describes a very distinct separation between the device layer (the SCSI protocol) and the transport layer (the bus).

And, the SCSI command set has become the basis for many subsequent command sets. The JEDEC group used the SCSI command set as a model (JEDEC devices are in your cell phone), the ATAPI devices used SCSI commands, and many SCSI commands and SATA commands have a common heritage. The Mt. Fuji group (a standards group in Japan) also uses SCSI as the basis for new DVD and Blu-ray devices. So, while not widely known, the SCSI command family has grown well beyond what is managed by the ANSI/INCITS T10 committee that originally defined SCSI, into a broad set of capabilities that are used across the industry by a broad group of organizations. But, that all said, scanners and printers are still on USB, and SCSI is almost all about storage in one form or another.

Q. How does iSCSI support software-defined storage?

A. Answered during the talk. SDS provides more automation and knobs on the storage capabilities. But SDS still needs a way to transport the storage and iSCSI works perfectly fine for that. They are complementary technologies, not competing.

Q. With 40Gb and faster coming soon to a server near you, what kind of impact will that have on CPU utilization? Will smaller servers be able to push that much traffic?

A. More throughput simply requires more CPU. With good multithreaded drivers available, this can mean simply adding cores to keep the pipe as full as possible. As we mentioned near the end, using iSCSI with RDMA lightens the load on the CPU even more, so you’ll probably be seeing more of that.

Q. Is IPSec commonly supported on iSCSI targets?  

A. Yes, IPsec is required to be implemented on an iSCSI target for it to be a compliant device. However, it is not commonly enabled by customers, and if providing IPsec is the measure of compliance, there are a lot of non-compliant initiators and targets on the market.

Q. I’m told direct connect with iSCSI is discouraged, that there should be a switch in place to handle the buffering, latency, acknowledgement etc….. Is this true or a best practice to make sure switches are part of the design?

A. If you have no need to connect to multiple targets or multiple initiators, there’s no harm in direct connections.

Q. Ethernet was not designed to support storage traffic. The TCP/IP protocol suite was not designed to support storage traffic. SCSI was not designed to be encapsulated. So TCP/IP FTW? I think not. The reason iSCSI exists is [perceived] cost savings. I get fed up with people constantly looking for ways to squeeze another penny out of something. To me it illustrates that they’re not very creative. Fibre Channel is a stupid name, but it is a purpose-built protocol that works as designed.

A. Ethernet is a general purpose network. It is capable of handling lots of different traffic (including storage). By putting iSCSI onto an existing Ethernet infrastructure, it can (as you point out) create a substantial cost savings over installing a FC network (although that infrastructure savings comes with other costs – such as the impact of a shared wire). However, installing a dedicated Ethernet network provides many of the advantages of a dedicated FC network, but at an added cost over that of a shared Ethernet infrastructure. While most consider FC a purpose-built storage network, it is worth pointing out that some also consider it a general purpose network (for example FC-Avionics is built into Fighter Jets, and it’s not for storage). And while not designed to be encapsulated, (it was designed for a parallel bus), SCSI today is encapsulated on every transport that carries it (yes, that includes FCP and SAS).
There are many kinds of storage at different price points: USB storage, SATA devices, rotating media (at different RPMs), SSD devices, SAS devices, FC devices, single spindles, arrays, cloud, drop boxes, etc., all with the corresponding transport wires. iSCSI is one of those wires. Each protocol and wire offers specific advantages and disadvantages. There can be a lot of confusion about which to use, but just as everyone does not drive the same type of car (a FORD FUSION for example), everyone does not need the same type of storage (FC devices/arrays). Yes, I drive a FORD FUSION, and I like FC storage, but I use a USB stick on my laptop, and I pray my bank never puts my financial records out in the cloud. Selecting the right storage (and wire) for the job at hand can be one of a system administrator’s most interesting problems to solve.

As for the name – that is often what happens in committees…

Q. As a best practice for Windows servers, disable hardware acceleration features in NICs (TOE etc.)? Are any NIC features valuable given modern multicore CPUs?

A. Yes. Typically the only reason to disable TOE is that multiple or virtual TCP/IP stacks are going to be using the same NIC. TSO, LRO and jumbo frames will benefit any OS that can take advantage of them.
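
For readers who want to check which of these offloads their Linux NICs currently have enabled, here is a small sketch using ethtool -k (the exact output format varies by driver, so treat the parsing as illustrative):

```python
# Quick check of which stateless offloads a Linux NIC has enabled, by running
# "ethtool -k" and parsing its "feature: on/off" lines. Sketch only.
import subprocess

def offload_status(iface: str = "eth0") -> dict:
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    wanted = ("tcp-segmentation-offload",      # TSO
              "large-receive-offload",         # LRO
              "generic-receive-offload")       # GRO
    status = {}
    for line in out.splitlines():
        key, _, value = line.strip().partition(":")
        if key in wanted:
            status[key] = value.strip().startswith("on")
    return status

print(offload_status("eth0"))
```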

Q. What is the advantage of iSCSI when compared with NVMe?

A. NVMe and iSCSI are very different protocols. NVMe started life as a direct attach protocol to communicate to native PCIe devices (not even outside the box). iSCSI was a network protocol from day one. iSCSI has to deal with the potential for long network induced delays, and complex out of order error recovery issues. NVMe operates over an interlocked bus, and as such, does not have those issues.

But, NVMe is now being extended over fabrics. NVMe over a RoCE V1 transport will be a data center network (since there is no IP routing). NVMe over a RoCE V2 transport or an iWARP transport will have the same routing capabilities that iSCSI has.   When it comes to the raw command set, they are very similar (but there are some differences). SCSI is a more full featured command set than NVMe – it has been developed over a span of over 25 years, and has developed solutions for all the problems that have been discovered during that time span. NVMe has a more limited (or more focused) command set (for example, there are no tape commands in the NVMe command set). iSCSI is available today, as is direct attach NVMe, but NVMe over Fabrics is still in the development phases (the specification is expected to be available the first week of June, 2016). NVMe products will take some time to mature and to develop solutions for the problems they have not discovered yet. Another example of this is the ability to support shared storage – it existed on day one in iSCSI, but did not exist in the first NVMe specification. To support shared storage in NVMe over Fabrics, that capability has since been added, and it was done using a SCSI compatible method (to make it easier for host S/W that already performs this function).

There is a large community working to develop NVMe over Fabrics. As memory-based storage devices get cheaper and the solution space matures, NVMe will become more attractive.

Q. How often do iSCSI installations provide encryption of data in flight? How: IPsec, IKEv2-SCSI + ESP-SCSI, etc.?

A. Rarely. More often than not, if in-flight data security is needed, it will be run on an isolated network. Well under 100% of installations are 100% compliant.  VMware never qualified IPsec with iSCSI and didn’t have any obvious switch to turn it on. Side note: We standards guys can be overly picky about words.  Since the question is “provide” the answer is – 100% of compliant installations PROVIDE encryption (IPsec V2 – see above), however, in practice, installations that require that type of security typically run on isolated networks, rather than turn on encryption.

Q. How do multiple independent applications inside the same initiator map to iSCSI sessions to the same target? E.g., iSCSI session one-to-one with application?

A. There is no relationship between applications and sessions. When an iSCSI initiator discovers a target, the initiator logs in and establishes a session. If iSCSI MCS (multi connection session) is being used, multiple TCP connections may be established and used in parallel to process operations for that session.

Applications send reads and writes to the operating system. Those IO requests make their way through the file system and caching layers into the device driver. The device driver issues the IO request to the device (over the iSCSI session) and retains information about that IO. When a completion is received from a device (the WRITE command or READ command completed), it is matched up with the request. That completion status (success or error) is passed back through the operating system (file system, etc.) to the application. So it is the responsibility of the device driver to mux/demux the requests from all the applications out over the iSCSI session and track the responses as the operations are completed.
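
A toy illustration of that mux/demux, in which the driver tags each outstanding I/O, sends everything down one session, and matches completions back to the originating application by tag. This is a conceptual sketch only, not how an actual driver is written:

```python
# Conceptual mux/demux over a single iSCSI session: tag requests on the way
# out, match completions back to their requests (and applications) by tag.
import itertools

class ToyInitiator:
    def __init__(self):
        self._tags = itertools.count(1)
        self._outstanding = {}              # tag -> (application, request)

    def submit(self, application: str, request: str) -> int:
        tag = next(self._tags)
        self._outstanding[tag] = (application, request)
        # ... a real driver would build a SCSI CDB here and send it on the session
        return tag

    def complete(self, tag: int, status: str) -> None:
        application, request = self._outstanding.pop(tag)
        print(f"{application}: {request} -> {status}")

init = ToyInitiator()
t1 = init.submit("database", "WRITE lba=100")
t2 = init.submit("backup", "READ lba=2048")
init.complete(t2, "GOOD")    # completions can arrive in any order
init.complete(t1, "GOOD")
```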

When an operating system is using MPIO (multi-pathing), the device driver may create multiple sessions between the initiator and the target. This is where operating system MPIO policies such as round-robin, shortest queue, LRU, etc. come into play. In this case, the MPIO driver will send an IO operation to the device using what it considers to be the most appropriate path (based on the selected policy). But again, there is no relationship between the application and the path used for IO (any application can have its IO sent via any path).

Today, MPIO is used more commonly than MCS.

Q. Will Microsoft iSCSI implement iSER?

A. This is a question for Microsoft or for an iSER-capable NIC vendor that provides Microsoft drivers.

Q. Zadara has some iSER deployments using Linux and VMware clients going to the Zadara cloud storage.

A. There’s an answer, all by itself.

Q. In the case of iWARP, the TCP layer takes care of out-of-order IP packet receptions. What layer does the out-of-order management of packets in RoCE?

A. RoCE headers contain a 24 bit “Packet Sequence Number” that is used to validate the required ordering and detect lost packets. As such, ordering still occurs, just in a different way.
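
A toy sketch of how a sequence-number check like that can work, using a 24-bit counter with wraparound. Real RoCE NICs do this in hardware; this is only to illustrate the idea:

```python
# Conceptual receiver-side check using a 24-bit packet sequence number (PSN).
PSN_MOD = 1 << 24

def classify(expected_psn: int, received_psn: int) -> str:
    if received_psn == expected_psn:
        return "in order"
    # Distance modulo 2^24 distinguishes "ahead" (a gap) from duplicates/stale packets.
    if (received_psn - expected_psn) % PSN_MOD < PSN_MOD // 2:
        return "gap detected (earlier packet lost or reordered)"
    return "duplicate or stale packet"

print(classify(100, 100))   # in order
print(classify(100, 103))   # gap detected
print(classify(100, 99))    # duplicate or stale
```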

Q. Correction: RoCE is over Ethernet packets and is not routable. RoCEv2 is the one over UDP/IP and *is* routable.

A. You are correct. RoCE is not routable by IP. RoCE transmits raw Ethernet frames with just Ethernet MAC headers and no IP headers, and as such, it is not routable by IP. RoCE V2 puts the information into UDP packets (with appropriate IP headers), and therefore it is routable by IP.

Q. How prevalent is iSER today in deployment? And what are some of the typical applications that leverage iSER?

A. Not terribly prevalent today, but higher speed Ethernet might drive more adoption, due to the CPU savings demonstrated.

 

 

 


Use Cases for iSCSI and FCoE – Your Questions Answered

Jeff Asher

Mar 11, 2014


We had a tremendous response to our recent Webcast “Use Cases for iSCSI and FCoE – Where Each Makes Sense.” We had a lot of questions that we didn’t have time to address, so here are answers to them all. If you think of additional questions, please feel free to comment on this blog.

Q. You stated that FCoE requires End to End DCB connectivity.  That is not entirely true if you have native Fibre Channel storage. 

Once native FC is added, it is a hybrid FCoE/native FC network, not a simple FCoE network.  To be clearer I could’ve stated that for FCoE all Ethernet links traversed must be DCB enabled.

Q. Any impact on the protocol choice if you bring SDN solutions with overlay networks using VXLAN or NVGRE within virtual switching in hypervisors into the picture?

An excellent question, but complicated enough that it probably deserves a discussion on its own. Overlay networks encapsulate Ethernet frames into routable packets. With strict adherence to layering, that means L2 constructs like Data Center Bridging become “invisible” until decapsulation. You lose the lossless, low-latency behavior that FCoE expects and that iSCSI may be taking advantage of, depending on your implementation. That doesn’t really favor one protocol over the other, but FCoE may lose the advantages it has over iSCSI when confined to a single L2 subnet. But, unfortunately, the real answer to your question requires that you investigate in detail how the system software you are using handles encapsulated storage packets for both block storage protocols. Microsoft’s Hyper-V is different from VMware’s vSphere, and each flavor of SDN could be different as well. Proceed with caution.
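
One concrete, easily quantified effect is the MTU hit from the outer headers: either the physical network MTU goes up or the storage path's effective payload goes down. The header sizes below are the commonly cited IPv4 figures and are meant only as an illustration:

```python
# Overlay encapsulation overhead (illustrative, commonly cited IPv4 header sizes).
OVERHEAD_BYTES = {"VXLAN": 14 + 20 + 8 + 8,   # outer Ethernet + IP + UDP + VXLAN
                  "NVGRE": 14 + 20 + 8}       # outer Ethernet + IP + GRE

for name, added in OVERHEAD_BYTES.items():
    for mtu in (1500, 9000):
        print(f"{name}: inner MTU {mtu - added} on a {mtu}-byte physical MTU")
```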

Q. Have you heard of any enterprise customers who are interested in NIC Partitioning to separate iSCSI, FCoE, and typical network traffic?  If so, can you provide information about those customers’ use cases?

We have not come across many customers that are interested in large-scale deployments yet.

Q. What are the use cases for using standalone FCoE switches in SAN keeping aside Cisco UCS and Blade Servers?

There are two ways to look at this:

1) To use FCoE as an end-to-end (initiating server to target storage array) solution instead of, or to replace, Fibre Channel. Although not very prevalent to date, the reason this option is chosen is to create a single converged LAN/SAN network that essentially retains the native FC constructs. The potential benefit would be a reduction in the amount of equipment required and in the resources needed to deploy and administer two separate networks. This can be done in a phased approach that uses multiprotocol switches, with every port able to be used as Ethernet, FC or both. This will provide future-proofing, reduced qualification costs, and lower OPEX by no longer requiring the purchase of multiple switches of different protocols.

2) To continue the use of FC for connectivity from the Top of Rack switch to the storage arrays, but use FCoE connectivity for server access. This is much more prevalent, and even when deployed outside of the Cisco UCS blade servers, is used to increase flexibility in highly virtualized server environments or multi-tenancy, where workloads/VMs from the same physical servers need to connect to different storage types.

Q. How do iSCSI and FCoE switches handle redundancy?  With FC, it is a best practice to implement dual fabrics with each storage system and server with paths down each.

Physical topology can be identical.  A storage system has one set of targets (either IP addresses or FCoE targets) on one switch and other targets on the other switch.  The initiators are configured to see any targets available on that leg.

To prevent Ethernet broadcast storms, technologies like per VLAN Spanning Tree and link aggregation are used.  TRILL can also be used.  For more details, I recommend reading this blog post by J Metz of Cisco.  http://blogs.cisco.com/datacenter/understanding-fcoe-and-trill-the-easy-way/

Q. Doesn’t increasing CPU mean software processing for FCoE and iSCSI at both endpoints can reduce costs considerably (i.e. no full HBA functionality needed at the endpoints)?

Absolutely.  If you have CPU cycles to spare at both endpoints, there is no reason to take on the extra cost of offload.  However, remember the principle behind Moore’s law also works on things like network adapters and HBAs.  It isn’t unreasonable to think that full offload capabilities will be included by default in a few years as technology progresses.  And even if they aren’t, the actual application of Moore’s law will push the difference in CPU utilization to be trivial.

Q. How do large data centers configure and manage iSCSI?  Is it by configuring the initiators and targets? My understanding is that most installations don’t use iSNS.  Is this true?

It is true that most implementations of iSCSI don’t use iSNS.  iSCSI initiators are simply configured with the target address by the administrator.  In the FC world, SNS is simply there, but the iSCSI equivalent, iSNS, has always been optional.  (SNS stands for Simple Name Service.  It is a service that helps initiators find targets.)

Q. I have been doing a lot of testing to compare iSCSI to FC and noticed that as we move from traditional storage to SSD-based storage the IOPS increase faster for FCoE. For example, 18K+ for FCoE vs. 12K for iSCSI. Have you seen similar results?

I have seen some similar results. However, I’ve also seen some that don’t necessarily line up with that.  I haven’t had the time to research this topic.  Sounds like a good topic for a future post.

Q. Do you have any information about the number of customers who use FCoE Boot and iSCSI Boot?

Unfortunately I don’t. I do have anecdotal evidence that customers using full-offload are more likely to boot from SAN. Since more full-offload FCoE adapters are in use than full-offload iSCSI adapters today, it makes sense that more are booting over FCoE than iSCSI, but again, I don’t have any evidence to support that.

Q. What about iSCSI over RoCE?

There are three network/fabric technologies that use RDMA: InfiniBand, iWARP, and RoCE. You can run iSCSI over any of these using the open-source iSER code supported by the Open Fabrics Alliance (https://www.openfabrics.org). iSER has been written to OFA’s “verbs” for RDMA (rather than to the more familiar “sockets”). However, note that of these three underlying transports, only iWARP is truly routable in general. So technically you could implement iSER on InfiniBand or RoCE, but it may not do for you what you expect iSCSI to do for you, i.e., go anywhere the internet goes.

Q. How does FCIP compare with iSCSI for long distance requirements?

FC networks rely on guaranteed packet delivery to provide low-latency, predictable performance. IP networks are best-effort networks, allowing for dropped packets with transmission retries. Given the possibility of latency from packet loss, FCIP has experienced limited adoption; it is useful where required, but typically not a core part of the infrastructure. If cost is a concern and long distance is required as part of the solution, then iSCSI is the better choice, as it is designed to allow for lossy networks.

Q. Slide 22 – Was that hardware based iSCSI or software based iSCSI?

What was shown in the chart was software-based iSCSI, however you would see similar results with hardware-based iSCSI.

Q. What about FC vs FCoE performance? Any numbers?

Both Fibre Channel and FCoE can achieve line rate.  Here’s an example of testing Yahoo! did on an 8Gb FC HBA and a 10 GbE CNA that showed exactly that result: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-yahoo-case-study.html .  So as Fibre Channel moves to 16 Gbps, it will outperform a 10GbE CNA, at least for peak performance.  However, the tables turn with a 40 GbE CNA, several of which are in production now.

Q. Do you see SR-IOV used currently or in the future to separate FCoE or iSCSI from standard LAN traffic?

So far we have seen that with the exception of a few operating systems (e.g., AIX), SR-IOV support today is network only.  Additionally, most customers want guaranteed bandwidth for storage and they wouldn’t be willing to run it on the same port as heavy NIC traffic.

Q. Are you aware of any FCoE targets for Windows?

I’m not aware of any right now.

Q. What is the max IOPS (at 4K) you can push thru 10G FCoE and iSCSI? Max latency (at 512 bytes)?

Latency is not determined by the pipe.
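
The wire itself only bounds throughput, not latency, but a rough ceiling on 4 KiB IOPS through a 10 Gb/s pipe is easy to compute (ignoring all Ethernet/IP/TCP or FCoE framing overhead, which lowers the real number):

```python
# Theoretical upper bound on 4 KiB IOPS through a 10 Gb/s link, protocol
# overhead ignored. Real numbers depend on the endpoints, not the pipe.
link_bps = 10 * 10**9
io_bytes = 4 * 1024

max_iops = link_bps / 8 / io_bytes
print(f"~{max_iops:,.0f} IOPS theoretical ceiling")   # roughly 305,000
```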

Q. Does FCoE really require a CNA? What about software only FCoE drivers?

Open FCoE does exist, but most FCoE implementations today use CNAs.  I do expect the adoption of FCoE software solutions to increase fairly substantially.  A lot of it comes down to the choice of booting via FCoE or another method.

Q. Do you think that the difference in FCoE/iSCSI usage for different App tiers can be related to the performance of the protocols?

Objectively, no.  Either protocol implemented can be configured to hit or exceed a performance number.  In my opinion, market perception of the protocols has more to do with the tier assignment than anything technical.

Q. Doesn’t 32 GbFC make it competitive with 40GbE FCoE?

From a purely technical perspective it helps, but FCoE is often deployed to reduce costs by simplifying cabling and switching by converging IP and storage onto the same fabric.  32Gb FC is slower than 40Gb and does nothing to reduce costs.  Unless 32Gb FC is significantly less expensive than 40 Gb Ethernet on a per port basis, market forces are going to push towards Ethernet.  There are still plenty of cases where organizations may deploy 32Gb FC instead of FCoE, but again, those criteria will mostly be non-technical.

Thanks to all my SNIA-ESF colleagues and Dell’Oro Group for helping me with these answers. If you missed the original Webcast, you can watch it on-demand here. You can also download a copy of the slides.


Why the FCoE – iSCSI Debate Continues

Jeff Asher

Feb 11, 2014



This is my first blog post for SNIA-ESF.  As a Principal Storage Architect, I have been doing extensive research on the factors that are driving FCoE vs. iSCSI choices over the last several years. The more I dive into the topic, the more intriguing the debate becomes. In fact, this blog is a preview of an upcoming white paper I’m writing and a Webcast SNIA is hosting on February 18th. If you agree this debate is interesting, I encourage you to attend. Details on the Webcast are at the end of this post.

A Look Back at FCoE and iSCSI History

There are two entrenched standards for block storage protocols over Ethernet networks. FCoE was ratified in 2009, while iSCSI was ratified in 2004. Of course, various vendors and early adopters supported these protocols before ratification, so the history of each protocol is a couple of years longer than it looks. While iSCSI simply encapsulates the SCSI protocol in IP, FCoE operates lower in the network stack, and doing so required many enhancements to Ethernet. While iSCSI runs on any IP network (mostly Ethernet these days), FCoE requires Data Center Bridging and Converged Network Adapters, all running at 10 Gbps or faster.

All of the Data Center Bridging enhancements that make FCoE possible, like lossless Ethernet, benefit all of the protocols using Ethernet as the transport protocol.  DCB doesn’t just make FCoE possible, but it improves iSCSI at the same time  (see the SNIA-ESF blog, How DCB Makes iSCSI Better). So given that modern servers, networks, and storage may all be connected by hardware capable of running FCoE, that same network is also able to run iSCSI, as well as other network traffic.  Nothing precludes them from running simultaneously on the same network either.  The leading storage vendors that offer both FCoE and iSCSI target systems allow administrators to present the same LUN over either protocol with little effort, so a transition from one protocol to the other is not difficult.

Strengths and Weaknesses

So which network protocol is the right choice?

Each protocol has strengths and weaknesses when judged relative to the other. FCoE has higher throughput at lower host CPU utilization than iSCSI, and FCoE doesn’t have to process the TCP/IP stack as iSCSI does. iSCSI is relatively simple to set up and troubleshoot when compared to FCoE because zoning is not a factor and IP connectivity (although not optimized for storage traffic) is likely in place already. Also, while FCoE has a comprehensive set of existing tools available to ease troubleshooting, there aren’t as many qualified people to use them in most enterprises. Ease of use, plus the ability to use low cost NICs and switches, gives iSCSI a cost advantage. (However, if you check out our SNIA-ESF webcast, “How VN2VN Will Help Accelerate Adoption of FCoE,” you’ll hear about new technologies that reduce the costs of deploying FCoE.) FC, and by extension FCoE, are perceived to be enterprise-grade, suitable for all workloads; and while iSCSI is being widely adopted at the enterprise level, it is still perceived by some not to be ready for Tier-1 applications.

The graph below is excerpted from the report “Intel 10GbE Adapter Performance Evaluation,” prepared by Demartek for Intel in September 2010. This data is consistent with the rest of the report findings and is only intended to be representative of the results from comparative iSCSI and FCoE testing. The report is interesting reading and I recommend looking at it for more information. This graph shows IOPS and CPU utilization for JetStress tests running against NetApp storage over multi-path iSCSI and FCoE. Note that latencies were all similar, and running the tests against EMC storage showed similar results.

[Graph: JetStress IOPS and CPU utilization, multi-path iSCSI vs. FCoE]

Many other factors must be considered, but according to industry pundits – as well as my own personal experience – in the majority of cases either protocol is adequate for the task at hand, and that is to effectively transfer block data across an Ethernet network.

Maximizing Throughput

The reality is, most servers, applications, and storage arrays simply won’t take advantage of FCoE’s superior performance, or of any storage protocol running over 10GbE. iSCSI and NAS protocols are very fast and are typically sufficient to meet most application requirements. But this is not meant to be a SAN vs. NAS post – years of history, thousands of happy end users, and billions in continued investment show that both work well enough to meet most business needs.

The commonly deployed storage systems and hosts are simply not configured with enough hardware to saturate multiple 10 gigabit network links. While this is rare today, it is going to become more common to see systems capable of saturating 10GbE pipes in the near future, especially as flash memory, either in all-flash arrays or tiered storage systems, finds more application. (Hear more on the impact of flash in our SNIA-ESF webcast, “Flash – Plan for the Disruption”). At least as it relates to spinning-media disk systems, network bandwidth increases faster than storage system throughput can keep up. So consider the storage system to be the bottleneck or limiting factor when evaluating storage network performance. After all, in most data center environments the ratio of servers and applications to storage systems is high, so it’s reasonable to expect the storage system to be the bottleneck. The absolute throughput of FCoE and iSCSI, when pushing a storage system to its limits, is not sufficient on its own to be the sole basis for the decision between the two protocols, except for a few edge cases. Bottom line: whether the storage system is the bottleneck or the network is the bottleneck, the performance relationship between FCoE and iSCSI does not change.

These edge cases tend to be extremely IO-intensive database workloads and big data applications, such as Hadoop. Citing the graph above, FCoE is about 15-20% faster on identical hardware than iSCSI. Granted this is a single graph of a single test, but the data is consistent across tests performed by IBM using Emulex network interfaces. If absolute throughput and efficiency (both network and CPU) are the only criteria when deciding between block protocols, FCoE looks like the choice. Since these cases are rare – because complexity, supportability, and even politics are almost always considered – the decision is not so obvious. Again, though beyond the scope of this article, NAS protocols should also be considered when determining the proper protocol for an application.

Is There a Clear Winner?

While FCoE can claim technical superiority, iSCSI has the edge in cost and supportability.  The number and range of systems supporting iSCSI connectivity is greater, particularly at the entry level.  What’s more, the availability of people that can troubleshoot end-to-end connectivity for iSCSI is also much greater.  (The “ping” command diagnoses most iSCSI connectivity problems.)  Also, do a resume search on Monster or LinkedIn and the number of people that can configure VLANs dwarfs the number that can properly zone a Fibre Channel network.  Greater familiarity reduces the support and operating cost of iSCSI.

IDC predicts that FCoE revenue will ramp very quickly through 2016. (If available to you, see the IDC Worldwide Enterprise Storage Systems 2012-2016 Forecast Update.)  As customers decide to transition existing Fibre Channel networks to an Ethernet infrastructure, deploying FCoE would be a comfortable choice due to existing IT expertise and functional expectations of the Fibre Channel protocol.

Both iSCSI and FCoE are capable storage protocols, and choosing one over the other will likely be dependent upon budget, IT skill set, and application requirements.

Don’t forget to join us on Feb. 18th

Again, I encourage you to attend our February 18th Webcast, “Use Cases for iSCSI and FCoE –Where Each Makes Sense.”  Analysts from Dell’Oro Group will share their latest market research on this topic and I’ll dive into use cases for both iSCSI and FCoE. It’s a live event, so please come with your toughest questions. I hope you’ll join us!

 

 


2013 in Review and the Outlook for 2014 – A SNIA ESF Perspective

Jason Blosil

Jan 28, 2014


Technology continues to advance rapidly. Making sense of it all can be a challenge. At the SNIA Ethernet Storage Forum, we focus on storage technologies and solutions enabled by and associated with Ethernet networks. Last year, we modified the charters of our two Special Interest Groups (SIGs) to address topics about file protocols and storage over Ethernet. The File Protocols SIG includes the prior focus on Network File System (NFS) related topics and adds discussions around Server Message Block (SMB/CIFS). We had our first webcast last November on the topic of SMB 3.0, and it was our best attended webcast ever. The Storage over Ethernet SIG focuses on general Ethernet storage topics as well as more information about technologies like FCoE, iSCSI, Data Center Bridging, and virtual networking for storage. I encourage you to check out other articles on these hot topics in this SNIA-ESF blog to hear from our member experts as well as guest posts from leading analysts.

2013 was a busy year and we are already kickin’ it in 2014. This should be an exciting year in IT. Data storage continues to be a hot sector especially in the areas of All-Flash and Hybrid arrays. This year, we will expect to see new standards coming out of the T11 committee for Fibre Channel and possibly FCoE as well as progress in high speed Ethernet networks. Lower cost network interconnects will facilitate adoption of high speed networks in the small to midsize business segment. And a new conversation around “Software Defined…” should push a lot of ink in trade rags and other news sources. Oh, and don’t forget about the “Internet of Things”, mobile solutions, and all things Cloud.

The ESF will be addressing the impact on Ethernet storage solutions from these hot technologies. Next month, on February 18th, experts from the ESF, along with industry analysts from Dell’Oro Group will speak to the benefits and best practices of deploying FCoE and iSCSI storage protocols. This presentation “Use Cases for iSCSI and Fibre Channel: Where Each Makes Sense” will be part of an upcoming BrightTalk Summit on Storage Networking. I encourage you to register for this session. Additionally, we will be publishing a couple of white papers on file-based storage and a review of FCoE and iSCSI in storage applications.

Finally, SNIA will be kicking off its first year of the new user conference, Data Storage Innovation Conference. This will be one of the few storage focused user conferences in the market and should be quite interesting.

We’re excited about our growing membership and our plans for 2014. Our goal is to advance application of innovative technologies and we encourage you to send us mail or comment below with topics that are of interest to you.

Here’s to an exciting 2014!


Fibre Channel over Ethernet (FCoE): Hype vs. Reality


It’s been a bit of a bumpy ride for FCoE, which started out with more promise than it was able to deliver. In theory, the benefits of a single converged LAN/SAN network are fairly easy to see. The problem was, as is often the case with new technology, that most of the theoretical benefit was not available on the initial product release. The idea that storage traffic was no longer confined to expensive SANs, but instead could run on more commoditized and easier-to-administer IP equipment, was intriguing. However, the new 10 Gbps Enhanced Ethernet switches were not exactly inexpensive, few products supported FCoE initially, and those that did, did not play nicely with products from other vendors.

Keeping FCoE “On the Single-Hop”?

The adoption of FCoE to date has been almost exclusively “single-hop,” meaning that FCoE is being deployed to provide connectivity between the server and the Top of Rack switch. Consequently, traffic continues to be broken out one way for IP, and another way for FC. Breaking out the traffic makes sense: by consolidating network adapters and cables, it adds value on the server access side.

A significant portion of FCoE switch ports come from Cisco’s UCS platform, which runs FCoE inside the chassis. In terms of a complete end-to-end FCoE solution, there continues to be very little multi-hop FCoE happening, or ports shipping on storage arrays.

In addition, FCoE connections are more prevalent on blade servers than on stand-alone servers for various reasons.

  • First, blades are used more in a virtualized environment where different types of traffic can travel on the same link.
  • Second, the migration to 10 Gbps has been very slow so far on stand-alone servers; about 80% of these servers are actually still connected with 1 Gbps, which cannot support FCoE.

What portion of FCoE-enabled server ports are actually running storage traffic?

FCoE-enabled ports comprise about a third of total 10 Gbps controller and adapter ports shipped on servers. However, we would like to bring to readers’ attention the wide difference between the portion of 10 Gbps ports that is FCoE-enabled and the portion that is actually running storage traffic. We currently believe less than a third of the FCoE-enabled ports are being used to carry storage traffic. That’s because the FCoE port, in many cases, is provided by default with the server. That’s the case with HP blade servers as well as Cisco’s UCS servers, which together are responsible for around 80% of the FCoE-enabled ports. We believe, however, that in the event that users buy separate adapters, they will most likely use those adapters to run storage traffic – but they will need to pay an additional premium, about 50% to 100%, for the FCoE license.

The Outlook

That said, whether FCoE-enabled ports are used to carry storage traffic or not, we believe they are being introduced at the expense of some FC adapters. If users deploy a server with an FCoE-enabled port, they most likely will not buy a FC adapter to carry storage traffic. Additionally, as Ethernet speeds reach 40 Gbps, the differential over FC will be too great and FC will be less likely to keep pace.

About the Authors

Casey Quillin is a Senior Analyst, Storage Area Network & Data Center Appliance Market Research with the Dell’Oro Group

Sameh Boujelbene is a Senior Analyst, Server and Controller & Adapter Market Research with the Dell’Oro Group
