Q&A Summary from the SNIA-ESF Webcast – “How VN2VN Will Help Accelerate Adoption of FCoE”

David Fair

Aug 29, 2013


Our VN2VN Webcast last week was extremely well received. The audience was big and highly engaged. Here is a summary of the questions attendees asked and answers from my colleague, Joe White, and me. If you missed the Webcast, it’s now available on demand.

Question #1: We are an extremely large FC shop with well over 50K native FC ports. We are looking to bridge this to the FCoE environment for the future. What does VN2VN buy the larger company? Seems like SMB is a much better target for this.

Answer #1: It's true that VN2VN is not the best choice for large port-count SAN deployments, but the split is not strictly along SMB/large-enterprise lines. Many enterprises have multiple smaller special-purpose SANs, or satellite sites with small SANs, and VN2VN can be a good choice for those parts of a large enterprise. Also, VN2VN can be used in conjunction with VN2VF to provide high-performance local storage, as we described in the Webcast.

Question #2: Are there products available today that incorporate VN2VN in switches and storage targets?

Answer #2: Yes. A major storage vendor announced support for VN2VN at Interop Las Vegas 2013. As for switches, any switch supporting Data Center Bridging (DCB) will work, and most, if not all, new datacenter switches support DCB today. We also recommend switch support for FIP Snooping, which is likewise available today.

Question #3: If we have an iSNS kind of service for VN2VN, do you think VN2VN can scale beyond the current anticipated limit?

Answer #3: That is certainly possible. This sort of central service does not exist today for VN2VN and is not part of the T11 specifications so we are talking in principle here. If you follow SDN (Software Defined Networking) ideas and thinking then having each endpoint configured through interaction with a central service would allow for very large potential scaling. Now the size and bandwidth of the L2 (local Ethernet) domain may restrict you, but fabric and distributed switch implementations with large flat L2 can remove that limitation as well.
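
Purely to illustrate the "in principle" idea above (no such central service is defined in the T11 specifications today), here is a minimal Python sketch of a registry that VN2VN endpoints could consult instead of relying solely on L2 broadcast discovery. All class, field, and function names are hypothetical.

```python
# Hypothetical VN2VN endpoint registry, illustrating the "iSNS-like" idea only.
# Nothing here is defined by T11 FC-BB-6; it is a thought experiment in code.

from dataclasses import dataclass

@dataclass(frozen=True)
class Endpoint:
    wwpn: str        # world-wide port name of the VN_Port
    mac: str         # Ethernet MAC the VN_Port answers on
    vlan: int        # FCoE VLAN the endpoint uses
    role: str        # "initiator" or "target"

class VN2VNRegistry:
    """Central directory: endpoints register themselves and query for peers."""

    def __init__(self):
        self._endpoints: dict[str, Endpoint] = {}

    def register(self, ep: Endpoint) -> None:
        self._endpoints[ep.wwpn] = ep

    def deregister(self, wwpn: str) -> None:
        self._endpoints.pop(wwpn, None)

    def discover_targets(self, vlan: int) -> list[Endpoint]:
        # An initiator asks the service for targets instead of broadcasting probes.
        return [ep for ep in self._endpoints.values()
                if ep.role == "target" and ep.vlan == vlan]

# Usage sketch: one target registers, an initiator looks it up by VLAN.
registry = VN2VNRegistry()
registry.register(Endpoint("20:00:00:25:b5:00:00:01", "02:00:00:aa:bb:01", 100, "target"))
print(registry.discover_targets(vlan=100))
```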

Question #4: Since VN2VN uses different FIP messages to do login, a unique FSB implementation must be provided to install ACLs. Have any switch vendors announced support for a VN2VN FSB?

Answer #4: Yes, VN2VN FIP Snooping bridges will exist. It only requires a small addition to the filter/ACL rules on the FSB Ethernet switch to cover VN2VN. Small software changes are needed to cover the slightly different information, but the same logic and interfaces within the switch can be used, and the way the ACLs are programmed is the same.
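
As a rough illustration of the "small addition to the filter/ACL rules" described above, the sketch below shows the kind of logic an FSB applies: FCoE frames (EtherType 0x8906) are admitted from a port and source MAC only after a successful FIP exchange has been snooped there, and for VN2VN the snooped FIP operations differ slightly from VN2VF. The EtherType values are real; the class and the FIP operation labels are simplified assumptions, not any vendor's implementation.

```python
# Illustrative-only FIP Snooping Bridge (FSB) logic for VN2VN.
# EtherTypes are real (FIP = 0x8914, FCoE = 0x8906); everything else is a sketch.

FIP_ETHERTYPE = 0x8914
FCOE_ETHERTYPE = 0x8906

class FipSnoopingBridge:
    def __init__(self):
        # (port, vn_port_mac) pairs that have completed VN2VN FIP discovery/login
        self.permitted: set[tuple[int, str]] = set()

    def on_fip_frame(self, port: int, src_mac: str, fip_op: str) -> None:
        # In VN2VN the snooped operations differ from VN2VF (probes/claims rather
        # than FLOGI to an FCF), but the ACL action is the same: learn which
        # VN_Port MAC may source FCoE frames on which port.
        if fip_op in ("vn2vn_claim_ok", "vn2vn_login_ok"):   # hypothetical labels
            self.permitted.add((port, src_mac))
        elif fip_op in ("logout", "clear_link"):
            self.permitted.discard((port, src_mac))

    def admit(self, port: int, src_mac: str, ethertype: int) -> bool:
        if ethertype != FCOE_ETHERTYPE:
            return True                      # non-FCoE traffic: normal L2 rules apply
        return (port, src_mac) in self.permitted
```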

Question #5: Broadcasts are a classic limiter in Layer 2 Ethernet scalability. VN2VN control is very broadcast intensive, on the default or control plane VLAN. What is the scale of a data center (or at least data center fault containment domain) in which VN2VN would be reliably usable, even assuming an arbitrarily large number of data plane VLANs? Is there a way to isolate the control plane broadcast traffic on a hierarchy of VLANs as well?

Answer #5: VLANs are an integral part of VN2VN within the T11 FC-BB-6 specification. You can configure the endpoints (servers and storage) to do all discovery on a particular VLAN or set of VLANs. You can use VLAN discovery for some endpoints (mostly envisioned as servers) to learn the VLANs on which to do discovery from other endpoints (mostly envisioned as storage). The use of VLANs in this manner will contain the FIP broadcasts to the FCoE dedicated VLANs. VN2VN is envisioned initially as enabling small to medium SANs of about a couple hundred ports although in principle the addressing combined with login controls allows for much larger scaling.

Question #6: Please explain the difference between VN2VN and VN2VF.

Answer #6: The currently deployed version of FCoE, T11 FC-BB-5, requires that every endpoint, or Enode in FC-speak, connect with the “fabric,” a Fibre Channel Forwarder (FCF) more specifically. That’s VN2VF. What FC-BB-6 adds is the capability for an endpoint to connect directly to other endpoints without an FCF between them. That’s VN2VN.

Question #7: In the context of VN2VN, do you think it places a stronger demand for QCN to be implemented by storage devices now that they are directly (logically) connected end-to-end?

Answer #7: The QCN story is the same for VN2VN, VN2VF, I/O consolidation using an NPIV FCoE-FC gateway, and even high-rate iSCSI. Once discovery completes and sessions (FLOGI + PLOGI/PRLI) are set up, we are dealing with the inherent traffic pattern of the applications and storage.

Question #8: Your analogy that VN2VN is like private loop is interesting. But it does make VN2VN sound like a backward step – people stopped deploying AL tech years ago (for good reasons of scalability etc.). So isn’t this just a way for vendors to save development effort on writing a full FCF for FCoE switches?

Answer #8: This is a logical private loop with a lossless packet switched network for connectivity. The biggest issue in the past with private or public loop was sharing a single fiber across many devices. The bandwidth demands and latency demands were just too intense for loop to keep up. The idea of many devices addressed in a local manner was actually fairly attractive to some deployments.

Question #9: What is the sweet spot for VN2VN deployment, considering iSCSI allows direct initiator and target connections, and most networks are IP-enabled?

Answer #9: The sweet spot of VN2VN FCoE is SMB or dedicated SAN deployments where FC-like flow control and data flow are needed for up to a couple hundred ports. You could implement this using iSCSI with PFC flow control, but if TCP/IP is not needed because of the PFC lossless priorities, why pay the TCP/IP processing overhead? In addition, the FC encapsulation/serialization and the FC exchange protocols and models are preserved, if that is important or useful to the applications. The configuration and operation of a local SAN using the two models is comparable.

Question #10: Has iSCSI become irrelevant?

Answer #10: Not at all. iSCSI serves a slightly different purpose from FCoE (including VN2VN). iSCSI allows connection across any IP network, and thanks to TCP you get reliable, in-order, end-to-end delivery of data. The drawback is that under high loss rates, burst drops, or heavy congestion, TCP/IP performance will suffer due to congestion avoidance and retransmission timeouts ("slow starts"). So the choice really depends on the data-flow characteristics you are looking for, and there is no one-size-fits-all answer.

Question #11: Where can I watch this Webcast?

Answer #11: The Webcast is available on demand on the SNIA website here.

Question #12: Can I get a copy of these slides?

Answer #12: Yes, the slides are available on the SNIA website here.

Ethernet is the right fit for the Software Defined Data Center

Jason Blosil

Aug 12, 2013

"Software Defined" is a  label being used to define advances in  network and storage virtualization and promises to greatly improve infrastructure management and accelerate business agility. Network virtualization itself isn't a new concept and has been around in various forms for some time (think vLANs). But, the commercialization of server virtualization seems to have paved the path to extend virtualization throughout the data center infrastructure, making the data center an IT environment delivering dynamic and even self-deployed services. The networking stack has been getting most of the recent buzz and I'll focus on that portion of the infrastructure here. VirtualizationChangesWhat is driving this trend in data networking? As I mentioned, server virtualization has a lot to do with the new trend. Virtualizing applications makes a lot of things better, and makes some things more complicated. Server virtualization enables you to achieve much higher application density in your data center. Instead of a one-to-one relationship between the application and server, you can host tens of applications on the same physical  server. This is great news for data centers that run into space limitations or for businesses looking for greater efficiency out of their existing hardware. YesteryearThe challenge, however, is that these applications aren't stationary. They can move from one physical server to another. And this mobility can add complications for the networking guys. Networks must be aware of virtual machines in ways that they don't have to be aware of physical servers. For network admins of yesteryear, their domain was a black box of "innies" and "outies". Gone are the days of "set it and forget it" in terms of networking devices. Or is it? Software defined networks (aka SDN) promise to greatly simplify the network environment. By decoupling the control plane from the data plane, SDN allows administrators to treat a collection of networking devices as a single entity and can then use policies to configure and deploy networking resources more dynamically. Additionally, moving to a software defined infrastructure means that you can move control and management of physical devices to different applications within the infrastructure, which give you flexibility to launch and deploy virtual infrastructures in a more agile way. network virtualizationSoftware defined networks aren't limited to a specific physical transport. The theory, and I believe implementation, will be universal in concept. However, the more that hardware can be deployed in a consistent manner, the greater flexibility for the enterprise. As server virtualization becomes the norm, servers hosting applications with mixed protocol needs (block and file) will be more common. In this scenario, Ethernet networks offer advantages, especially as software defined networks come to play. Following is a list of some of the benefits of Ethernet in a software defined network environment. Ubiquitous Ethernet is a very familiar technology and is present in almost every compute and mobile device in an enterprise. From IP telephony to mobile devices, Ethernet is a networking standard commonly deployed and as a result, is very cost effective. The number of devices and engineering resources focused on Ethernet drives the economics in favor of Ethernet. Compatibility Ethernet has been around for so long and has proven to "just work." Interoperability is really a non-issue and this extends to inter-vendor interoperability. 
Some other networking technologies require same vendor components throughout the data path. Not the case with Ethernet. With the rare exception, you can mix and match switch and adapter devices within the same infrastructure. Obviously, best practices would suggest that at least a single vendor within the switch infrastructure would simplify the environment with a common set of management tools, features,  and support plans. But, that might change with advances in SDN. Highly Scalable Ethernet is massively scalable. The use of routing technology allows for broad geographic networks. The recent adoption of IPv6 extends IP addressing way beyond what is conceivable at this point in time. As we enter the "internet of things" period in IT history, we will not lack for network scale. At least, in theory. Overlay Networks Overlay Networksallow you to extend L2 networks beyond traditional geographic boundaries. Two proposed standards are under review  by the Internet Engineering Task Force (IETF). These include Virtual eXtensible Local Area Networks (VXLAN) from VMware and Network Virtualization using Generic Routing Encapsulation (NVGRE) from Microsoft. Overlay networks combine L2  and L3 technologies to extend the L2 network beyond traditional geographic boundaries, as with hybrid clouds. You can think of overlay networks as essentially a generalization of a vLAN. Unlike with routing, overlay networks permit you to retain visibility and accessibility of your L2 network across larger geographies. Unified Protocol Access Ethernet has the ability to support mixed storage protocols, including iSCSI, FCoE, NFS, and CIFS/SMB. Support for mixed or unified environments can be more efficiently deployed using 10 Gigabit Ethernet (10GbE) and Data Center Bridging (required for FCoE traffic) as IP and FCoE traffic can share the same ports. 10GbE simplifies network deployment as the data center can be wired once and protocols can be reconfigured with software, rather than hardware changes. Virtualization Ethernet does very well in virtualized environments. IP address can easily be abstracted from physical ports to facilitate port mobility. As a result, networks built on an Ethernet infrastructure leveraging network virtualization can benefit from increased flexibility and uptime as hardware can be serviced or upgraded while applications are online. Roadmap For years, Ethernet has increased performance, but the transition from Gigabit Ethernet to 10 Gigabit Ethernet was a slow one. Delays in connector standards complicated matters. But, those days are over and the roadmap remains robust and product advances are accelerating. We are starting to see 40GbE devices on the market today, and will see 100GbE devices in the near future. As more and more data traffic is consolidated onto a shared infrastructure, these performance increases will provide the headroom for more efficient infrastructure deployments. Some of the benefits listed above can be found with other networking technologies. But, Ethernet technology offers a unique combination of technology and economic value across a broad ecosystem of vendors that make it an ideal infrastructure for next generation data centers. And as these data centers are designed more and more around application services, software will be the lead conversation. To enable the vision of a software defined infrastructure, there is no better network technology than Ethernet.
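
To make the overlay-network idea above a little more concrete, here is a minimal Python sketch of VXLAN framing as defined in RFC 7348: the original L2 frame is prepended with an 8-byte VXLAN header carrying a 24-bit VNI, and the result is then carried in a UDP datagram (destination port 4789) between tunnel endpoints. This shows the encapsulation arithmetic only, not a full datapath.

```python
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned destination port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout (RFC 7348): 1 byte flags, 3 bytes reserved,
    3 bytes VNI, 1 byte reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    header = struct.pack("!B3s3sB",
                         VXLAN_FLAG_VNI_VALID,
                         b"\x00\x00\x00",
                         vni.to_bytes(3, "big"),
                         0)
    # In a real deployment this payload is then carried in a UDP datagram
    # (dst port 4789) inside an outer IP packet between VXLAN tunnel endpoints.
    return header + inner_frame

# Example: wrap a dummy 60-byte inner frame on virtual network 5000
encapsulated = vxlan_encapsulate(b"\x00" * 60, vni=5000)
assert len(encapsulated) == 68
```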

Resolving the Confusion around DCB (I Hope)

Simon Gordon

Apr 15, 2013

Storage traffic running over Ethernet-based networks has been around for as long as we have had Ethernet-based networks. Of course, sometimes it is technically not accurate to think of the protocols as fundamentally Ethernet protocols: whilst FCoE, by definition, only runs on Ethernet, iSCSI, SMB, and NFS are, in reality, IP-based storage protocols and, whilst most commonly run on Ethernet, could run on any network that supports IP. That notwithstanding, it is increasingly important to understand the real nature of Ethernet and, in particular, the nature of the new enhancements that we put under the umbrella of Data Center Bridging (DCB).

Although there is a great deal of information around DCB, there is also a lot of confusion, and even the best articles miss describing a number of its elements. As such, with 10GbE ramping, now is a good time to try to clarify the reality of what DCB does and does not do.

Perhaps the first and most important point is that DCB is, in reality, a task group in IEEE responsible for the development of enhancements to the 802.1 bridge specifications that apply specifically to Ethernet switching (or, as IEEE says, bridging) in the datacenter. As such, DCB is not in itself a standard, nor is the DCB group solely involved in those standards that apply to I/O and network convergence. The most recent work of this task group falls into two distinct areas, both of which apply to the datacenter: one is the now completed set of standards around network and I/O convergence (802.1Qau, Qaz, Qbb), and the other is the set of standards that address the impact of server virtualization technology (802.1Qbg, BR, and the now withdrawn Qbh).

(Figure: protocol tree)

Also critical to understand, so that we overstate neither the limitations of traditional Ethernet nor the advantages of the new standards around I/O and network convergence, is that these new standards build on top of many well-understood, well-used, mature capabilities that already exist within the IEEE Ethernet standards set. Indeed, IMHO, the most important element of this is that the DCB convergence standards build on top of the 802.1p capability to specify eight different classes of service through a 3-bit PCP field in the 802.1Q header, the VLAN header. Or, to say that in English, Ethernet has for some considerable time had the ability to separate traffic into eight categories to ensure that those different categories get different treatment. More bluntly, the fundamentals of I/O and network convergence are nothing new to Ethernet. Not only that, but the VLAN identification itself can be used to apply QoS to different sets of traffic, as can the Ethertype or IP socket that usually identifies different traffic types.

So what is all the fuss about? As much as there were some good convergence capabilities, it was recognized that these could be further enhanced.

802.1Qbb, or Priority-based Flow Control (PFC), far from adding a non-existent lossless capability to Ethernet, simply takes the existing capability for lossless operation, 802.3x, and enhances it. 802.3x, when deployed with both RX and TX pause, can give lossless Ethernet, as recognized both by many in the iSCSI community and by the FCoE specifications. However, the pause mechanisms apply at the port level, which means that giving one traffic class lossless behavior causes blocking of other traffic classes. All 802.1Qbb does, along with 802.3bd, is allow the pause mechanisms to be applied individually to specific priorities or traffic classes; in other words, pause FCoE or iSCSI or RoCE whilst allowing other traffic to flow.

(Figure: PFC)

802.1Qaz, or ETS (let's ignore that DCBX is also part of this document and discussed in another SNIA-ESF blog), is not bandwidth allocation to your individual priorities; rather, it is the ability to create a group of priorities and apply bandwidth rules to that group. In English, it adds a new tier to your QoS schedulers, so you can now apply bandwidth rules to port, priority or class group, individual priority, and VLAN. The standard suggests a practice of at least three groups: one for best-effort traffic classes, one for PFC-lossless classes, and one for strict priority, though it does allow more groups.

(Figure: ETS logical view)

Last but not least, 802.1Qau, or QCN, is not a mechanism to provide lossless capabilities. Where pause and PFC are point-to-point flow control mechanisms, QCN allows flow control to be applied by a message from the congestion point all the way back to the source. Being an Ethernet-level mechanism, it works across multiple hops within a layer 2 domain and so cannot cross either IP routing or FCF FCoE-based forwarding. If QCN is applied to a non-PFC priority, it would most likely reduce drops by telling the source device to slow down, rather than having frames dropped and relying on the TCP congestion window to trigger slowing down at the TCP level. If QCN is applied to a PFC priority, it could reduce back propagation of PFC pause and so reduce congestion propagation within that priority.

(Figure: QCN)

Although not part of the standards for DCB-based convergence, but mentioned in the standards, devices that implement DCB typically have some form of buffer carving or partitioning, such that the different traffic classes are not just on different priorities or classes as they flow through the network, but are queued in, and utilize, separate buffer queues. This is important, as the separated queuing and buffer allocation is another aspect of how fate sharing is limited or avoided between the different traffic classes. It also makes conversations around microbursts, burst absorption, and latency bubbles far more complex than before, when there was less or no buffer separation.

It is important to remember that what we are describing here are the layer 2 Ethernet mechanisms around I/O and network convergence, QoS, and flow control. These are not the only tools available (or in operation), and any datacenter design needs to fully consider what is happening at every level of the network and server stack, including, but not limited to, the TCP/IP layer, the SCSI layer, and indeed the application layer. The interactions between the layers are often very interesting, but that is perhaps the subject for another blog.

In summary, with the set of enhanced convergence protocols now fully standardized and fairly commonly available on many platforms, along with the many capabilities that already exist within Ethernet and the increasing deployment of networks at 10GbE or above, more organizations are benefiting from convergence. But to do so, they quickly find that they need to learn about aspects of Ethernet that in the past were perhaps of less interest in a non-converged world.
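
As a small illustration of the 802.1p/802.1Q point above (eight classes of service carried in a 3-bit PCP field alongside the 12-bit VLAN ID), here is a minimal Python sketch that builds a VLAN tag. It shows only the frame-format arithmetic, not a DCB implementation, and the priority and VLAN values in the example are common conventions rather than requirements.

```python
import struct

TPID_8021Q = 0x8100   # EtherType value identifying an 802.1Q VLAN tag

def build_vlan_tag(pcp: int, dei: int, vid: int) -> bytes:
    """Return the 4-byte 802.1Q tag: TPID followed by the TCI (PCP | DEI | VID)."""
    if not 0 <= pcp <= 7:
        raise ValueError("PCP is 3 bits: priorities 0-7")
    if dei not in (0, 1):
        raise ValueError("DEI is a single bit")
    if not 0 <= vid <= 0xFFF:
        raise ValueError("VLAN ID is 12 bits")
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID_8021Q, tci)

# Example: FCoE traffic is commonly carried on its own VLAN at priority 3
tag = build_vlan_tag(pcp=3, dei=0, vid=1002)
assert tag == b"\x81\x00\x63\xea"
```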

How DCB Makes iSCSI Better

Allen Ordoubadian

Mar 5, 2013

A challenge with traditional iSCSI deployments is the non-deterministic nature of Ethernet networks. When Ethernet networks only carried non-storage traffic, lost data packets were not a big issue, as they would get retransmitted. However, as we layered storage traffic over Ethernet, lost data packets became a "no no," because storage traffic is not as forgiving as non-storage traffic, and data retransmissions introduced I/O delays that are unacceptable to storage traffic. In addition, traditional Ethernet had no mechanism to assign priorities to classes of I/O. Therefore a new solution was needed. Short of creating a separate Ethernet network to handle iSCSI storage traffic, Data Center Bridging (DCB) was that solution.

The DCB standard is a key enabler of effectively deploying iSCSI over Ethernet infrastructure. The standard provides the framework for high-performance iSCSI deployments with key capabilities that include:

- Priority Flow Control (PFC): enables "lossless Ethernet," a consistent stream of data between servers and storage arrays. It basically prevents dropped frames and maximizes network efficiency. PFC also helps to optimize SCSI communication and minimizes the effects of TCP, making iSCSI flows more reliable.
- Quality of Service (QoS) and Enhanced Transmission Selection (ETS): support protocol priorities and allocation of bandwidth for iSCSI and IP traffic.
- Data Center Bridging Capabilities eXchange (DCBX): enables automatic network-based configuration of key network and iSCSI parameters.

With DCB, iSCSI traffic is better balanced over high-bandwidth 10GbE links. From an investment protection perspective, the ability to support iSCSI and LAN IP traffic over a common network makes it possible to consolidate iSCSI storage area networks with traditional IP LAN traffic networks.

There is also another key component needed for iSCSI over DCB. This component is part of the DCBX standard, and it's called the TCP Application Type-Length-Value, or simply "TLV." The TLV allows the DCB infrastructure to apply unique ETS and PFC settings to specific sub-segments of the TCP/IP traffic. This is done through switches, which can identify the sub-segments based on their TCP socket or port identifier, which is included in the TCP/IP frame. In short, the TLV directs servers to place iSCSI traffic on available PFC queues, which separates storage traffic from other IP traffic. PFC also eliminates data retransmission and supports a consistent data flow with low latency. IT administrators can leverage QoS and ETS to assign bandwidth and priority to iSCSI storage traffic, which is crucial to supporting critical applications.

Therefore, depending on your overall datacenter environment, running iSCSI over DCB can improve:

- Performance, by ensuring a consistent stream of data, resulting in "deterministic performance" and the elimination of packet loss that can cause high latency
- Quality of service, through allocation of bandwidth per protocol for better control of service levels within a converged network
- Network convergence

For more information on this topic or the technologies discussed in this blog, please visit some of our other blog articles, "What Up with DCBX" and "iSCSI over DCB: Reliability and predictable performance," or check out the IEEE website on DCB.
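
To make the TLV-driven classification above a little more concrete, here is a minimal Python sketch of the kind of mapping a DCB-capable device could apply: iSCSI is identified by its registered TCP port (3260), mapped to an 802.1p priority, and that priority in turn belongs to an ETS group with a bandwidth share. The specific priority and bandwidth numbers are illustrative assumptions, not recommended settings.

```python
# Illustrative mapping from application traffic to DCB priority and ETS group.
# Port 3260 is the registered iSCSI port; the priority/bandwidth numbers are
# arbitrary examples of the kind of policy DCBX can advertise.

ISCSI_TCP_PORT = 3260

APP_PRIORITY = {            # "application TLV" style mapping: TCP port -> 802.1p priority
    ISCSI_TCP_PORT: 4,
}

ETS_GROUP_OF_PRIORITY = {0: 0, 1: 0, 2: 0, 3: 1, 4: 2, 5: 0, 6: 0, 7: 0}
ETS_GROUP_BANDWIDTH = {0: 40, 1: 30, 2: 30}     # percent per group; must sum to 100

def classify(tcp_dst_port: int) -> tuple[int, int, int]:
    """Return (priority, ets_group, bandwidth_share_percent) for a TCP flow."""
    priority = APP_PRIORITY.get(tcp_dst_port, 0)        # default: best effort
    group = ETS_GROUP_OF_PRIORITY[priority]
    return priority, group, ETS_GROUP_BANDWIDTH[group]

print(classify(3260))   # (4, 2, 30): iSCSI gets its own priority and ETS group
print(classify(80))     # (0, 0, 40): ordinary IP traffic stays best effort
```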

How is 10GBASE-T Being Adopted and Deployed?

David Fair

Jan 8, 2013

For nearly a decade, the primary deployment of 10 Gigabit Ethernet (10GbE) has been via network interface cards (NICs) supporting enhanced Small Form-Factor Pluggable (SFP+) transceivers. The predominant transceivers for 10GbE are Direct Attach (DA) copper, short-range optical (10GBASE-SR), and long-range optical (10GBASE-LR). The Direct Attach copper option is the least expensive of the three. However, its adoption has been hampered by two key limitations:

- DA's range is limited to 7m, and
- because of the SFP+ connector, it is not backward-compatible with existing 1GbE infrastructure using RJ-45 connectors and twisted-pair cabling.

10GBASE-T addresses both of these limitations. 10GBASE-T delivers 10GbE over Category 6, 6A, or 7 cabling terminated with RJ-45 jacks. It is backward-compatible with 1GbE and even 100 Megabit Ethernet. Cat 6A and 7 cables will support up to 100m. The advantages for deployment in an existing data center are obvious. Most existing data centers have already installed twisted-pair cabling at Cat 6 rating or better. 10GBASE-T can be added incrementally to these data centers, either in new servers or via NIC upgrades "without forklifts." New 10GBASE-T ports will operate with all the existing Ethernet infrastructure in place. As switches get upgraded to 10GBASE-T at whatever pace, the only impact will be dramatically improved network bandwidth.

Market adoption of 10GBASE-T accelerated sharply when the first single-chip 10GBASE-T controllers hit production. This integration became possible because of Moore's Law advances in semiconductor technology, which also enabled the rise of dense commercial switches supporting 10GBASE-T. Integrating the PHY and MAC on a single piece of silicon significantly reduced power consumption. This lower power consumption made fan-less 10GBASE-T NICs possible for the first time. Also, switches supporting 10GBASE-T are now available from Cisco, Dell, Arista, Extreme Networks, and others, with more to come. You can see the early market impact single-chip 10GBASE-T had by mid-year 2012 in this analysis of shipments (in numbers of server ports) from Crehan Research:

(Figure: Server-class Adapter & LOM 10GBASE-T Shipments)

Note, Crehan believes that by 2015, over 40% of all 10GbE adapters and controllers sold that year will be 10GBASE-T.

Early concerns about the reliability and robustness of 10GBASE-T technology have all been addressed in the most recent silicon designs. 10GBASE-T meets all the bit-error rate (BER) requirements of the Ethernet and storage-over-Ethernet specifications. As I addressed in an earlier SNIA-ESF blog, the storage networking market is a particularly conservative one. But there appear to be no technical reasons why 10GBASE-T cannot support NFS, iSCSI, and even FCoE. Today, Cisco is in production with a switch, the Nexus 5596T, and a fabric extender, the 2232TM-E, that support "FCoE-ready" 10GBASE-T. It's coming, with all the cost-of-deployment benefits of 10GBASE-T.

Live Webcast: 10GbE – Key Trends, Drivers and Predictions

Jason Blosil

Jul 12, 2012

The SNIA Ethernet Storage Forum (ESF) will be presenting a live Webcast on 10GbE on Thursday, July 19th. Together with my SNIA colleagues, David Fair and Gary Gumanow, we'll be discussing the technical and economic justifications that will likely make 2012 the "breakout year" for 10GbE. We'll cover the disruptive technologies moving this protocol forward and highlight the real-world benefits early adopters are seeing. I hope you will join us!

The Webcast will begin at 8:00 a.m. PT/11:00 a.m. ET. Register Now: http://www.brighttalk.com/webcast/663/50385

This event is live, so please come armed with your questions. We'll answer as many as we can on the spot and include the full Q&A here in a SNIA ESF blog post. We look forward to seeing you on the 19th!

Impressions from Cisco Live 2012

Jason Blosil

Jul 9, 2012

I attended Cisco Live in San Diego last week and wanted to share some of my impressions of the show. First of all, the weather was a disappointment. I'm a native Californian (the northern state, of course) and I was looking forward to some sweet weather instead of the cool, overcast climate. It's been so nice in Boston, I have been spoiled.

Attendance was huge. I heard something north of 17,000 attendees. I don't know if that was actual attendees or registrations. But it was a significant number, and I had several engaging conversations with attendees about data center trends and applications, as well as general storage inquiries.

Presenting at the Intel Booth

My buddies at Intel asked me to make a couple of presentations at their booth, and I spoke on the current status of 10GbE adoption and the value it offers. My two presentations were in the morning of the first two full days of the show. Things didn't look good when only a few attendees were seated at the time we were about to start. My first impression on seeing the empty seats in the theater was, "the Intel employees better make a great audience." Fortunately, the 20 or so seats filled just as I started, with more visitors standing in the back and on the side. The number of attendees doubled the second day, so maybe I built a reputation. Yeah, right.

Anyway, let me share just a couple of the ideas from my presentation here:

1) 10GbE is an ideal network infrastructure that offers great flexibility and performance, with the ability to support a variety of workloads and applications. For storage, both block- and file-based protocols are supported, which is ideal for today's highly virtualized infrastructures.

2) The ability to consolidate data traffic over a shared network promises significant capital and operational benefits for organizations currently supporting data centers with mixed network technologies. These benefits include fewer ports, cables, and components, which mean less equipment to purchase, manage, power, and cool. Goodness all around.

3) There are a couple of applications in particular that are making 10GbE particularly useful.
  1. Virtualization – high VM density drives increased bandwidth requirements from server to storage
  2. Flash / SSD – flash memory drives increased performance at both the server and storage which requires increased bandwidth
After the presentation, I asked for questions and was pleased with the number and quality of questions. Sure, we were giving away swag (Intel t-shirts). But the relevance of the questions was particularly interesting. Many customers were considering deploying converged networks or just moving to Ethernet from Fibre Channel infrastructures. Some of the questions included: Where would you position iSCSI vs. FCoE? What are the ideal use cases for each? When do you expect to see 40GbE or 100GbE, and for what applications? What about other network technologies, such as InfiniBand? Interestingly, very few if any were planning to move to 16Gb Fibre Channel.

Now, this was a Cisco show, so I would expect attendees to be there because they favor Cisco's message and technology or are in the process of evaluating it. So, given Cisco's strength and investment in 10GbE, it shouldn't be a surprise that most attendees at the show, or at least at my presentation, were leaning that direction. But I didn't expect it to be so one-sided.

Conclusion

Interest in vendor technology shows is clearly surpassing other industry events, and Cisco Live is no exception. And each Cisco Live event continues to reflect greater interest from customers in 10GbE in the datacenter.

Two Storage Trails on the 10GbE Convergence Path

Steve Abbott

Aug 8, 2011

As the migration to 10Gb Ethernet moves forward, many data centers are looking to converge network and storage I/O to fully utilize a ten-fold increase in bandwidth. Industry discussions continue regarding the merits of 10GbE iSCSI and FCoE. Some of the key benefits of both protocols were presented in an iSCSI SIG webcast that included Maziar Tamadon and Jason Blosil on July 19th: Two Storage Trails on the 10Gb Convergence Path. It's a win-win situation, as both technologies offer significant performance improvements and cost savings. The discussion is sure to continue.

Since there wasn't enough time to respond to all of the questions during the webcast, we have consolidated answers to all of them in this blog post from the presentation team. Feel free to comment and provide your input.

Question: How is multipathing changed or affected with FCoE? One of the benefits of FCoE is that it uses Fibre Channel in the upper layers of the software stack, where multipathing is implemented. As a result, multipathing is the same for Fibre Channel and FCoE.

Question: Is the use of CNAs with FCoE offload getting any traction? Are these economically viable? The adoption of FCoE has been slower than expected, but it is gaining momentum. Fibre Channel is typically used for mission-critical applications, so data centers have been cautious about moving to new technologies. FCoE and network convergence provide significant cost savings, so FCoE is economically viable.

Question: If you run the software FCoE solution, would this not prevent boot from SAN? Boot from SAN is not currently supported when using FCoE with a software initiator and NIC. Today, boot from SAN is only supported using FCoE with a hardware converged network adapter (CNA).

Question: How do you assign priority for FCoE vs. other network traffic? Doesn't it still make sense to have a dedicated network for data-intensive network use? The Data Center Bridging (DCB) standards that enable FCoE allow priority and bandwidth to be assigned to each priority queue or link. Each link may support one or more data traffic types. Support for this functionality is required between two end points in the fabric, such as between an initiator at the host and the first network connection at the top-of-rack switch, as an example. The DCBX standard facilitates negotiation between devices to enable the supported DCB capabilities at each end of the wire.

Question: Category 6A uses more power than twin-ax or OM3 cable infrastructures, which in large build-outs is significant. Category 6A does use more power than twin-ax or OM3 cables. That is one of the trade-offs data centers should consider when evaluating 10GbE network options.

Question: Don't most enterprise storage arrays support both iSCSI and FC/FCoE ports? That seems to make the "either/or" approach to measuring uptake moot. Many storage arrays today support either the iSCSI or the FC storage network protocol. Some arrays support both at the same time. Very few support FCoE. And some others support a mixture of file and block storage protocols, often called Unified Storage. But concurrent support for FC/FCoE and iSCSI on the same array is not universal. Regardless, storage administrators will typically favor a specific storage protocol based upon their acquired skill sets and application requirements. This is especially true with block storage protocols, since the underlying hardware is unique (FC, Ethernet, or even InfiniBand). With the introduction of Data Center Bridging and FCoE, storage administrators can deploy a single physical infrastructure to support the variety of application requirements of their organization. Protocol attach rates will likely prove less interesting as more vendors begin to offer solutions supporting full network convergence.

Question: I am wondering what the sample size of your poll results is; how many people voted? We had over 60 live viewers of the webcast, and over 50% of them participated in the online questions. So the sample size was about 30+ individuals.

Question: Tape? Isn't tape dead? Tape as a backup methodology is definitely further down the downward slope of its life than it was 5 or 10 years ago, but it still has a pulse. Expectations are that disk-based backup, DR, and archive solutions will be common practice in the near future. But many companies still use tape for archival storage. Like most declining technologies, tape will likely have a long tail as companies continue to modify their IT infrastructure and business practices to take advantage of newer methods of data retention.

Question: Do you not think 10Gbps will fall off after 2015, as the adoption of 40Gbps to blade enclosures starts to take off in 2012? 10GbE was expected to ramp much faster than what we have witnessed. Early applications of 10GbE in storage were introduced as early as 2006. Yet we are only now beginning to see broader adoption of 10GbE. The use of LOM and 10GBASE-T will accelerate the use of 10GbE. Early server adoption of 40GbE will likely be with blades. However, recognize that rack servers still outsell blades by a pretty large margin. As a result, 10GbE will continue to grow in adoption through 2015 and perhaps 2016. 40GbE will become very useful for reducing port count, especially at bandwidth aggregation points such as inter-switch links. 40Gb ports may also be used to save on port count with the use of fanout cables (4x10Gb). However, server performance must continue to increase in order to be able to drive 40Gb pipes.

Question: Will you be making these slides available for download? These slides are available for download at www.snia.org/?

Question: What is your impression of how convergence will change data center expertise? That is, who manages the converged network? Your storage experts, your network experts, someone new? Network convergence will indeed bring multiple teams together across the IT organization: the server team, the network team, and the storage team, to name a few. There is no preset answer, and the outcome will be decided on a case-by-case basis, but ultimately IT organizations will need to figure out how a common, shared resource (the network/fabric) ought to be managed and where the new ownership boundaries need to be drawn.

Question: Will there be, or is there currently, an NDMP equivalent for iSCSI or 10GbE? There is no equivalent to NDMP for iSCSI. NDMP is a management protocol used to back up server data to network storage devices using NFS or CIFS. SNIA oversees the development of this protocol today.

Question: How does the presenter justify the statement of "no need for specialized" knowledge or tools? Given how iSCSI uses new protocols and concepts not found in a traditional LAN, how could he say that? While it's true that iSCSI comes with its own concepts and subtleties, the point being made centered on how pervasive and widespread the underlying Ethernet know-how and expertise is.

Question: FC vs. IP storage: what does IDC count if the array has both FC and IP storage, and which group does it go in? If a customer buys an array but does not use one of the two protocols, will that show up in the IDC numbers? This info conflicts with SNIA's numbers. We can't speak to the exact methods used to generate the analyst data. Each analyst firm has its own method for collecting and analyzing industry data. The reason for including the data was to discuss the overall industry trends.

Question: I noticed in the high-level overview that FCoE appeared not to be a 'mesh' network. How will this deal with multipathing and/or failover? The diagrams only showed a single path for FCoE to simplify the discussion on network convergence. In a real-world, best-practices deployment there would be multiple paths with failover. FCoE uses the same multipathing and failover capabilities that are available for Fibre Channel.

Question: Why are you including FCoE in IP-based storage? The graph should indeed have read "Ethernet storage" rather than "IP storage." This was fixed after the webinar and before the presentation was posted on SNIA's website.
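
As a loose illustration of how DCB lets bandwidth be assigned per priority group (see the priority/bandwidth answer above), here is a toy Python sketch of deficit-weighted round-robin sharing between two traffic-class groups. Real ETS hardware schedulers are far more sophisticated; this only demonstrates the proportional-sharing idea, and all names and numbers are illustrative.

```python
from collections import deque

class WeightedScheduler:
    """Toy deficit-weighted round robin across traffic-class groups."""

    def __init__(self, weights, quantum=1500):
        self.weights = weights                        # e.g. {"fcoe": 60, "lan": 40} in percent
        self.queues = {name: deque() for name in weights}
        self.deficit = {name: 0 for name in weights}
        self.quantum = quantum                        # bytes of credit per round at 100%

    def enqueue(self, group, frame_len):
        self.queues[group].append(frame_len)

    def round(self):
        """One scheduling round: yield (group, frame_len) roughly in proportion to weights."""
        for group, weight in self.weights.items():
            self.deficit[group] += weight * self.quantum // 100
            q = self.queues[group]
            while q and q[0] <= self.deficit[group]:
                frame_len = q.popleft()
                self.deficit[group] -= frame_len
                yield group, frame_len
            if not q:
                self.deficit[group] = 0               # an idle group does not hoard credit

# Example: a 60/40 split between a storage group and a LAN group
sched = WeightedScheduler({"fcoe": 60, "lan": 40})
for _ in range(10):
    sched.enqueue("fcoe", 1000)                       # ten 1000-byte frames per group
    sched.enqueue("lan", 1000)

sent = []
for _ in range(8):                                    # run several scheduling rounds
    sent.extend(sched.round())

# Over these rounds the groups drain roughly 60/40 (7 storage frames vs. 4 LAN frames here).
print(sent.count(("fcoe", 1000)), sent.count(("lan", 1000)))
```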

Everything You Need to Know About iSCSI

Jason Blosil

Apr 11, 2011

Are you considering deploying an iSCSI storage network, and would like to learn some of the best practices of configuring the environment, from host to storage? Well, now you can learn from an expert. The SNIA Ethernet Storage Forum will be sponsoring a live webcast with our guest speaker, Dennis Martin from Demartek. Dennis will share first-hand expertise and actionable best practices to effectively deploy iSCSI storage networks. A live Q&A will also be included. It doesn't matter if you have a large, medium or small environment, Dennis will provide application specific recommendations that you won't want to miss. When: April 21st Time: 8:00 am PT / 11:00 am ET Free registration: http://www.brighttalk.com/webcast/26785 The SNIA ESF has several other web events planned for the rest of this calendar year.   Let us know what topics are important to you. We want to make these events highly educational.

Ethernet Storage Market Momentum Continues

David Dale

Sep 24, 2010


The inexorable growth of the market for Ethernet storage continued in the first half of 2010 - in fact we're getting very close to Ethernet storage being the majority of networked storage in the Enterprise.

According to IDC's recent Q2 2010 Worldwide Storage Systems Hardware Tracker, Ethernet Storage (NAS plus iSCSI) revenue market share climbed to 45%, up from 39% in 2009, 32% in 2008 and 28% in 2007, as shown below.

Revenue share    2007    2008    2009    Q2 2010
FC SAN            72%     68%     61%     55%
iSCSI SAN          6%     10%     13%     15%
NAS               22%     22%     26%     30%

In terms of capacity market share, we have already seen the crossover point, with Ethernet Storage at 52% of the total PB shipped, up from 47% in 2009, 42% in 2008 and 37% in 2007, as shown in the following table.

Capacity share   2007    2008    2009    Q2 2010
FC SAN            62%     58%     53%     48%
iSCSI SAN          8%     13%     15%     18%
NAS               29%     29%     32%     34%

