Storage Congestion on the Network Q&A

Tim Lustig

Jul 1, 2019

As more storage traffic traverses the network, the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput has become common. That's why the SNIA Networking Storage Forum (NSF) hosted a live webcast earlier this month, Introduction to Incast, Head of Line Blocking, and Congestion Management. In this webcast (which is now available on-demand), our SNIA experts discussed how Ethernet, Fibre Channel and InfiniBand each handles increased traffic. The audience at the live event asked some great questions and, as promised, here are answers to them all.

Q. How many IP switch vendors today support Data Center TCP (DCTCP)?

A. In order to maintain vendor neutrality, we won't get into the details, but several IP switch vendors do support DCTCP. Note that many Ethernet switches support basic Explicit Congestion Notification (ECN), but DCTCP requires a more detailed version of ECN marking on the switch and also requires that at least some of the endpoints (servers and storage) support DCTCP.

Q. One point I missed around ECN/DCTCP was that the configuration for DCTCP on the switches is virtually identical to what you need to set for DCQCN (RoCE) - but you'd still want two separate queues between DCTCP and RoCE since they don't really play along well.

A. Yes, RoCE congestion control also takes advantage of ECN and has some similarities to DCTCP. If you are using Priority Flow Control (PFC), where RoCE is kept in a no-drop traffic class, you will want to ensure that RoCE storage traffic and TCP-based storage traffic are in separate priorities. If you are not using a lossless transport for RoCE, using different priorities for DCTCP and RoCE traffic is recommended, but not required.

Q. Is over-subscription a case of the server and/or switch/endpoint being faster than the link?

A. Over-subscription is not usually caused when one server is faster than one link; in that case the fast server's throughput is simply limited to the link speed (like a 32G FC HBA plugged into a 16G FC switch port). But over-subscription can be caused when multiple nodes write or read more data than one switch port, switch, or switch uplink can handle. For example, if six 8G Fibre Channel nodes are connected to one 16G FC switch port, that port is 3X oversubscribed. Or if sixteen 16G FC servers connect to a switch and all of them simultaneously try to send or receive traffic to the rest of the network through two 64G FC switch uplinks, then those switch uplinks are 2X oversubscribed (16x16G is two times the bandwidth of 2x64G). Similar oversubscription scenarios can be created with Ethernet and InfiniBand. Oversubscription is not always bad, especially if the "downstream" links are not all expected to be handling data at full line rate all of the time.
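To make the arithmetic behind these examples concrete, here is a minimal sketch (in Python, not part of the original answer) that computes an oversubscription ratio as the aggregate node bandwidth divided by the aggregate uplink bandwidth, using the device counts and speeds from the examples above.

```python
def oversubscription_ratio(node_count, node_gbps, uplink_count, uplink_gbps):
    """Ratio of aggregate node bandwidth to aggregate uplink bandwidth.

    A ratio above 1.0 means the uplinks are oversubscribed if every
    node tries to run at full line rate at the same time.
    """
    return (node_count * node_gbps) / (uplink_count * uplink_gbps)

# Six 8G FC nodes behind one 16G FC switch port -> 3.0 (3X oversubscribed)
print(oversubscription_ratio(6, 8, 1, 16))

# Sixteen 16G FC servers behind two 64G FC uplinks -> 2.0 (2X oversubscribed)
print(oversubscription_ratio(16, 16, 2, 64))
```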
Q. Can't the switch regulate the incoming flow?

A. Yes. If you have flow control or a lossless network, each switch can pause incoming traffic on any port if its buffers are getting too full. However, if the switch pauses incoming traffic for too long in a lossless network, this can cause congestion to spread to nearby switches. In a lossy network, the switch could also selectively drop packets to signal congestion to the senders. While the lossless mechanism allows the switch to regulate the incoming flow and deal with congestion, it does not avoid congestion. To avoid too much traffic being generated in the first place, the traffic sources (server or storage) need to throttle their transmission rates. The aggregate traffic generated across all sources to one destination needs to be lower than the link speed of the destination port to prevent oversubscription. The FC standards committee is working on exactly such a proposal; see the answer to the question below.

Q. Is the FC protocol considering a back-off mechanism like DCTCP?

A. The Fibre Channel standards organization T11 recently began investigating methods for providing notifications from the Fabric to the end devices to address issues associated with link integrity, congestion, and discarded frames. This effort began in December 2018 and is expected to be complete in 2019.

Q. Do long-distance FC networks need to have giant buffers to handle all the data required to keep the link full for the time that it takes to release credit? If not, how is the long-distance capability supported at line speed, given the time delay to return credit?

A. As the link comes up, the transmitter is initialized with credits equal to the number of buffers available in the receiver. This preloaded credit count has to be high enough to cover the time it takes for credits to come back from the receiver. A longer delay in credit return requires a larger number of buffers/credits to maintain maximum performance on the link. In general, the credit delay increases with link distance because of the increased propagation delay for the frame from transmitter to receiver and for the credit from receiver to transmitter. So yes, you do need more buffers for longer distances. This is true with any lossless network - Fibre Channel, InfiniBand and lossless Ethernet.
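As a rough illustration of why distance drives buffer count, here is a minimal sketch (not from the original answer) that estimates the buffer-to-buffer credits needed to keep a link busy over one credit round trip. The propagation delay of roughly 5 microseconds per kilometer of fiber and the 2 KB frame size are assumptions used only for this back-of-the-envelope calculation; real deployments should follow the switch vendor's sizing guidance.

```python
import math

def bb_credits_needed(distance_km, line_rate_gbps, frame_bytes=2048,
                      fiber_delay_us_per_km=5.0):
    """Estimate buffer-to-buffer credits to keep a lossless link full.

    Credits must cover the data in flight during one round trip:
    the frame travels to the receiver and the credit travels back.
    """
    round_trip_s = 2 * distance_km * fiber_delay_us_per_km * 1e-6
    bits_in_flight = round_trip_s * line_rate_gbps * 1e9
    return math.ceil(bits_in_flight / (frame_bytes * 8))

# With these assumptions a 16G FC link over 50 km needs several hundred
# credits, while the same link over 1 km needs only about ten.
print(bb_credits_needed(50, 16))
print(bb_credits_needed(1, 16))
```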
Q. Shouldn't storage systems have the same credit-based system to regulate the incoming flow to the switch from the storage systems?

A. Yes. In a credit-based lossless network (Fibre Channel or InfiniBand), every port, including the port on the storage system, is required to implement the credit-based system to maintain the lossless characteristics. This allows the switch to control how much traffic is sent by the storage system to the switch.

Q. Is the credit issuance from the switch or from the tx device?

A. The credit mechanism works both ways on a link, bidirectionally. So if a server is exchanging data with a switch, the switch uses credits to regulate traffic coming from the server and the server uses credits to regulate traffic coming from the switch. This mechanism is the same on every Fibre Channel link, be it Server-to-Switch, Switch-to-Switch or Switch-to-Server.

Q. Can you comment on DCTCP (datacenter TCP), and the current work @IETF (L4S - low loss, low latency, scalable transport)?

A. There are several possible means by which congestion can be observed and quite a few ways of managing that congestion. ECN and DCTCP were selected for the simple reason that they are established technologies (even if not widely known) and have been completed. As the commenter notes, however, there are other means by which congestion is being handled. One of these is L4S, which is currently (as of this writing) a work in progress in the IETF.

Q. Virtual Lanes / Virtual Channels would be equivalent to Priority Flow Control - the trick is that, in standard TCP/IP, no one really uses different queues / PCP / QoS to differentiate between flows of the same application but different sessions, only different applications (VoIP, data, storage, ...).

A. This is not quite correct. PFC is the application of flow control to a priority; it's not the same thing as a priority/virtual lane/virtual channel itself. The commenter is correct, however, that most people do not see a need for isolating storage applications on their own TCP priorities, but then they wonder why they're not getting stellar performance.

Q. Can every ECN-capable switch be configured to support DCTCP?

A. Switches are, by their nature, stateless. That means that there is no need for a switch to be 'configured' for DCTCP, regardless of whether or not ECN is being used. So, in the strictest sense, any switch that is capable of ECN is already "configured" for DCTCP.
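Since DCTCP support ultimately has to come from the endpoints (the switch's role is ECN marking), here is a minimal sketch, assuming a Linux server with the dctcp congestion-control module available and permitted, of how an application might opt a single TCP connection into DCTCP via the TCP_CONGESTION socket option. This is illustrative only and not from the webcast; the target address and port below are hypothetical, and system-wide enablement and ECN settings are normally handled by the administrator.

```python
import socket

# Hypothetical storage target, used only for illustration.
TARGET = ("192.0.2.10", 3260)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the kernel to use DCTCP for this connection (Linux, Python 3.6+).
# Requires the 'dctcp' module to be loaded and allowed via
# net.ipv4.tcp_allowed_congestion_control; otherwise this raises OSError.
try:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"dctcp")
except OSError:
    print("DCTCP not available; using the default congestion control")

sock.connect(TARGET)
# Report which congestion control algorithm the connection actually uses.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
sock.close()
```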
Q. Is it true that admission control (the FC buffer credit scheme) has the drawback of usually underutilizing the links, especially if your workload uses many small frames rather than full-sized frames?

A. This is correct in certain circumstances. Early in the presentation we discussed how it's important to plan for the application, not the protocol (see slide #9). As noted in the presentation, "the application is King." Part of the process of architecting a good FC design is to ensure that the proper oversubscription ratios are used (i.e., oversubscription involves the number of host devices that are allowed to connect to each storage device). These oversubscription ratios are driven by the applications that have specific requirements, such as databases. If a deterministic network like Fibre Channel is not architected with this in mind, it will indeed seem like a drawback.


Network Speeds Questions Answered

John Kim

Jun 25, 2019

Last month, the SNIA Networking Storage Forum (NSF) hosted a webcast on how increases in networking speeds are impacting storage. If you missed the live webcast, New Landscape of Network Speeds, it's now available on-demand. We received several interesting questions on this topic. Here are our experts' answers:

Q. What are the cable distances for 2.5 and 5G Ethernet?

A. 2.5GBASE-T and 5GBASE-T Ethernet are designed to run on existing UTP cabling, so they should reach 100 meters on both Cat5e and Cat6 cabling. The reach of 5GBASE-T on Cat5e may be less under some conditions, for example if many cables are bundled tightly together. Cabling guidelines and field test equipment are available to aid in the transition.

Q. Any comments on why U.2 drives are so rare/uncommon in desktop PC usage? M.2 drives are very common in laptops and some desktops, but U.2's large capacity seems a better fit for desktops.

A. M.2 SSDs are more popular for laptops and tablets due to their small form factor and sufficient capacity. U.2 SSDs are used more often in servers, though some desktops and larger laptops also use a U.2 SSD for the larger capacity.

Q. What about using active copper cables to get a bit more reach over passive copper cables before switching to active optical cables?

A. Yes, active copper cables can provide longer reach than passive copper cables, but you have to look at the expense and power consumption. There may be many cases where using an active optical cable (AOC) will cost the same or less than an active copper cable.

Q. For 100Gb/s signaling (a future standard), is it expected to work over copper cable (passive or active) or only optical?

A. Yes, though the maximum distances will be shorter. With 25Gb/s signaling the maximum copper cable length is 5m. With 50Gb/s signaling the longest copper cables are 3m long. With 100Gb/s signaling we expect the longest copper cables will be about 2m long.

Q. So what do you see as the most prevalent LAN speed today, and what do you see in the next year or two?

A. For Ethernet, we see desktops mostly on 1Gb with some moving to 2.5Gb, 5Gb or 10Gb. Older servers are largely 10Gb, but new servers are mostly using 25GbE or 50GbE, while the most demanding servers and fastest flash storage arrays have 100GbE connections. 200GbE will show up in a few servers starting in late 2019, but most 200GbE and 400GbE usage will be for switch-to-switch links during the next few years. In the world of Fibre Channel, most servers today are on 16G FC, with a few running 32G and a few of the most demanding servers or fastest flash storage arrays using 64G. 128G FC for now will likely be just for switch-to-switch links. Finally, for InfiniBand deployments, older servers are running FDR (56Gb/s) and newer servers are using EDR (100Gb/s). The very newest, fastest HPC and ML/AI servers are starting to use HDR (200Gb/s) InfiniBand.

If you're new to SNIA NSF, we encourage you to check out the SNIA NSF webcast library. There you'll find more than 60 educational, vendor-neutral on-demand webcasts produced by SNIA experts.
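To tie the copper-reach figures above together, here is a minimal sketch (illustrative only, using just the per-lane signaling rates and cable lengths quoted in the answers) of a helper that suggests whether a passive copper cable can cover a given distance or whether an active copper or optical cable is worth considering.

```python
# Approximate maximum passive copper (DAC) reach per lane signaling rate,
# taken from the figures quoted above (the 100Gb/s value is an expectation,
# not a shipping standard at the time of writing).
MAX_PASSIVE_COPPER_M = {25: 5.0, 50: 3.0, 100: 2.0}

def suggest_cable(signaling_gbps, distance_m):
    """Return a rough cable suggestion for a link at the given distance."""
    limit = MAX_PASSIVE_COPPER_M.get(signaling_gbps)
    if limit is None:
        return "unknown signaling rate"
    if distance_m <= limit:
        return "passive copper (DAC) should work"
    return "consider active copper or an active optical cable (AOC)"

print(suggest_cable(25, 3))    # passive copper (DAC) should work
print(suggest_cable(100, 3))   # consider active copper or an AOC
```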


Intro to Incast, Head of Line Blocking, and Congestion Management

Tim Lustig

May 15, 2019

For a long time, the architecture and best practices of storage networks have been relatively well understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think. The three main storage network transports - Fibre Channel, Ethernet, and InfiniBand - all have mechanisms to handle increased traffic, but they are not all affected, or implemented, the same way. For instance, a protocol such as NVMe over Fabrics uses very different methodologies for congestion avoidance, burst handling, and queue management from one network to another. Unfortunately, many network administrators may not understand how different storage solutions place burdens upon their networks. As more storage traffic traverses the network, customers face the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput. That's why the SNIA Networking Storage Forum (NSF) is hosting a live webcast on June 18, 2019, Introduction to Incast, Head of Line Blocking, and Congestion Management, where our NSF experts will cover:
  • Typical storage traffic patterns
  • What is Incast, what is head of line blocking, what is congestion, what is a slow drain, and when do these become problems on a network?
  • How Ethernet, Fibre Channel, and InfiniBand handle these effects
  • The proper role of buffers in handling storage network traffic
  • Potential new ways to handle increasing storage traffic loads on the network
Register today to save your spot for June 18th. As always, our experts will be available to answer your questions. We hope to see you there.


The Impact of New Network Speeds on Storage

John Kim

Apr 26, 2019

In the last few years, Ethernet equipment vendors have announced big increases in line speeds, shipping 25, 50, and 100 Gigabits-per-second (Gb/s) products and announcing 200/400 Gb/s. At the same time, Fibre Channel vendors have launched 32GFC, 64GFC and 128GFC technology, while InfiniBand has reached 200Gb/s (called HDR) speed. But who exactly is asking for these faster new networking speeds, and how will they use them? Are there servers, storage, and applications that can make good use of them? How are these new speeds achieved? Are new types of signaling, cables and transceivers required? How will changes in PCIe standards and bandwidth keep up? And do the faster speeds come with different distance limitations? These are among the questions our panel of experts will answer at the next live SNIA Networking Storage Forum (NSF) webcast on May 21, 2019, "New Landscape of Network Speeds." Join us to learn:
  • How these new speeds are achieved
  • Where they are likely to be deployed for storage
  • What infrastructure changes are needed to support them
Register today to save your spot. And don't forget to bring your questions. Our experts will be available to answer them on the spot.


Introducing the Networking Storage Forum

John Kim

Oct 9, 2018


At SNIA, we are dedicated to staying on top of storage trends and technologies to fulfill our mission as a globally recognized and trusted authority for storage leadership, standards, and technology expertise. For the last several years, the Ethernet Storage Forum has been working hard to provide high quality educational and informational material related to all kinds of storage.

From our "Everything You Wanted To Know About Storage But Were Too Proud To Ask" series, to the absolutely phenomenal (and required viewing) "Storage Performance Benchmarking" series to the "Great Storage Debates" series, we've produced dozens of hours of material.

Technologies have evolved and we've come to a point where there's a need to understand how these systems and architectures work – beyond just the type of wire that is used. Today, there are new systems that are bringing storage to completely new audiences. From scale-up to scale-out, from disaggregated to hyperconverged, RDMA, and NVMe-oF - there is more to storage networking than just your favorite transport. For example, when we talk about NVMe™ over Fabrics, the protocol is broader than just one way of accomplishing what you need. When we talk about virtualized environments, we need to examine the nature of the relationship between hypervisors and all kinds of networks. When we look at "Storage as a Service," we need to understand how we can create workable systems from all the tools at our disposal.

Bigger Than Our Britches

As I said, SNIA's Ethernet Storage Forum has been working to bring these new technologies to the forefront, so that you can see (and understand) the bigger picture. To that end, we realized that we needed to rethink the way that our charter worked, to be even more inclusive of technologies that were relevant to storage and networking. So... Introducing the Networking Storage Forum. In this group we're going to continue producing top-quality, vendor-neutral material related to storage networking solutions. We'll be talking about:
  • Storage Protocols (iSCSI, FC, FCoE, NFS, SMB, NVMe-oF, etc.)
  • Architectures (Hyperconvergence, Virtualization, Storage as a Service, etc.)
  • Storage Best Practices
  • New and developing technologies
... and more! Generally speaking, we'll continue to do the same great work that we've been doing, but now our name more accurately reflects the breadth of work that we do. We're excited to launch this new chapter of the Forum. If you work for a vendor, are a systems integrator, a university, or someone who manages storage, we welcome you to join the NSF. We are an active group that honestly has a lot of fun. If you're one of our loyal followers, we hope you will continue to keep track of what we're doing. And if you're new to this Forum, we encourage you to take advantage of the library of webcasts, white papers, and published articles that we have produced here. There's a wealth of unbiased, educational information there that we don't think you'll find anywhere else! If there's something that you'd like to hear about, let us know! We are always looking to hear about headaches, concerns, and areas of confusion within the industry where we can shed some light. Stay current with all things NSF.
