How Can You Keep Data in Transit Secure?

AlexMcDonald

Oct 12, 2020

It’s well known that data is often considered less secure while in motion, particularly across public networks, and attackers are finding increasingly innovative ways to snoop on and compromise data in flight. But risks can be mitigated with foresight and planning. So how do you adequately protect data in transit? It’s the next topic the SNIA Networking Storage Forum (NSF) will tackle as part of our Storage Networking Security Webcast Series.  Join us October 28, 2020 for our live webcast Securing Data in Transit. In this webcast, we’ll cover what the threats are to your data as it’s transmitted, how attackers can interfere with data along its journey, and methods of putting effective protection measures in place for data in transit. We’ll discuss:
  • The large attack surface that data in motion provides, and an overview of the current threat landscape
  • What transport layer security protocols (SSL, TLS, etc.) are best for protecting data in transit?
  • Different encryption technologies and their role in protecting data in transit
  • A look at Fibre Channel security
  • Current best practice deployments; what do they look like?
Register today and join us on a journey to provide safe passage for your data.
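If you'd like a concrete picture of transport layer security before the webcast, here is a minimal sketch using Python's standard-library ssl module. The host name and request are hypothetical, and a real deployment would follow your organization's TLS policy; this simply shows a client refusing legacy protocol versions and validating the server before any data leaves the machine.

```python
import socket
import ssl

context = ssl.create_default_context()            # validates certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/early TLS

with socket.create_connection(("storage.example.com", 443)) as raw_sock:
    # wrap_socket performs the TLS handshake and hostname/certificate checks
    with context.wrap_socket(raw_sock, server_hostname="storage.example.com") as tls:
        print("Negotiated:", tls.version(), tls.cipher())
        tls.sendall(b"GET / HTTP/1.0\r\nHost: storage.example.com\r\n\r\n")
```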


Not Again! Data Deduplication for Storage Systems

AlexMcDonald

Oct 7, 2020

As explained in our webcast on Data Reduction, “Everything You Wanted to Know About Storage But Were Too Proud to Ask: Data Reduction,” organizations inevitably store many copies of the same data. Intentionally or inadvertently, users and applications copy and store the same files over and over; with developers, testers and analysts keeping many more copies. And backup programs copy the same or only slightly modified files daily, often to multiple locations and storage devices.  It’s not unusual to end up with some data replicated thousands of times, enough to drive storage administrators and managers of IT budgets crazy. So how do we stop the duplication madness? Join us on November 10, 2020 for a live SNIA Networking Storage Forum (NSF) webcast, “Not Again! Data Deduplication for Storage Systems”  where our SNIA experts will discuss how to reduce the number of copies of data that get stored, mirrored, and backed up. Attend this sanity-saving webcast to learn more about:
  • Eliminating duplicates at the desktop, server, storage or backup device
  • Dedupe technology, including local vs global deduplication
  • Avoiding or reducing copies of data in the first place (non-duplication)
  • Block-level vs. file- or object-level deduplication
  • In-line vs. post-process deduplication
  • More efficient backup techniques
Register today (but only once please) for this webcast so you can start saving space and end the extra data replication.
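As a taste of the topic, here is a minimal sketch of block-level, in-line deduplication. The fixed 4 KiB chunk size and the in-memory index are illustrative assumptions, not a product design; real systems may use variable-size chunking and persistent indexes.

```python
import hashlib

CHUNK_SIZE = 4096   # fixed-size chunking; real systems may use variable-size chunks
store: dict[str, bytes] = {}   # chunk hash -> the single stored copy of that chunk

def write_dedup(data: bytes) -> list[str]:
    """Store data as chunk references; only unique chunks consume space."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks are stored only once
        refs.append(digest)
    return refs

def read_dedup(refs: list[str]) -> bytes:
    """Reassemble the original data from its chunk references."""
    return b"".join(store[d] for d in refs)

# Two identical 1 MiB "files" cost the space of a single unique chunk:
a = write_dedup(b"x" * 2**20)
b = write_dedup(b"x" * 2**20)
assert read_dedup(a) == read_dedup(b) and len(store) == 1
```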


Non-Cryptic Answers to Common Cryptography Questions

AlexMcDonald

Sep 23, 2020

The SNIA Networking Storage Forum's Storage Networking Security Webcast Series continues to examine the many different aspects of storage security. At our most recent webcast on applied cryptography, our experts dove into user authentication, data encryption, hashing, blockchain and more. If you missed the live event, you can watch it on-demand. Attendees of the live event had some very interesting questions on this topic, and here are answers to them all:

Q. Can hashes be used for storage deduplication? If so, do the hashes need to be 100% collision-proof to be used for deduplication?

A. Yes, hashes are often used for storage deduplication. It's preferred that they be collision-proof, but it's not required if the deduplication software does a bit-by-bit comparison of any files that produce the same hash, in order to verify whether they really are identical. If the hash is 100% collision-proof, then there is no need to run bit-by-bit comparisons of files that produce the same hash value.

Q. Do cloud or backup service vendors use blockchain proof of space to prove to customers how much storage space is available or has been reserved?

A. There are some vendors who are using proof of space to map or plot the device. Once the device is plotted, you can get a report that summarizes the storage space available. Some vendors use it today. Since mining is the most popular application today, mining users use this information to report available space for mining pool applications. Can you use it for enterprise cloud to monitor the available disk space? Absolutely.

Q. If a vendor provides a guarantee of space to a customer using blockchain, does something prevent them from filling up the space before the customer uses that space?

A. Once the disk is plotted, there is no way for any other application to use it; any attempt will be flagged as an error. In fact, it's a really good way to ensure that no attacks are occurring on the disk itself. Each block of space is mapped and indexed.

Q. I lost track during the explanation about proofs in blockchain. What are those algorithms used for?

A. There are two concepts that are normally discussed and create the confusion. First, blockchain can use different cryptographic hash algorithms, such as SHA-256 (one of the most popular), Whirlpool, RIPEMD (RACE Integrity Primitives Evaluation Message Digest), Dagger-Hashimoto and others. A Merkle tree is a blockchain construct that builds a chain from hashes and data blocks. Second, consensus protocols are protocols for decision making, such as Proof of Work, Proof of Space, Proof of Stake, etc. Each consensus protocol uses the distributed ledger to make a record for the block of data transferred. Cryptographic hashes let us build a trustless model by protecting the data transferred from point A to point B, while the consensus protocol keeps the record of the data blocks in distributed ledgers. This is a brief answer; if you would like additional information, please contact olga@myactionspot.com and I will be happy to deliver a detailed session on this topic.

Q. How does encryption work in storage replication? Please advise whether this exists.

A. Yes, it exists. Encryption can be applied to data at rest, and that encrypted data can be replicated; and/or the replication process can encrypt the data temporarily while it's in transit.

Q. Regarding blockchain: assuming a new transaction (nobody has the information yet), is it possible that when the broadcast is sent, someone modifies part of the data (0.1%, for example) and this data continues to travel over the network without being considered corrupted?

A. The first block of data, which builds the first blockchain, creates the authenticity. If the block and hash just created are originals, they will be accepted as originals, recorded in the distributed ledger and moved across the chain. But if you attempt to send a block that has already been authenticated, that block will not be authenticated again and will be discarded once it's on the chain.

Remember we said this was part of a series? We've already had a lot of great experts cover a wide range of storage security topics. You can access all of them at the SNIA Educational Library.
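To make the Merkle-tree construct mentioned above concrete, here is a minimal sketch; it illustrates the idea rather than any particular blockchain's implementation. Pairs of block hashes are hashed together until a single root remains, so modifying even a fraction of one block changes the root and is detectable.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Hash pairs of node hashes upward until a single root remains."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
tampered = merkle_root([b"block-0", b"block-1 (modified)", b"block-2", b"block-3"])
assert root != tampered   # any modification changes the root and is detectable
```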


Applied Cryptography Techniques and Use Cases

AlexMcDonald

Jul 15, 2020

The rapid growth in infrastructure to support real-time and continuous collection and sharing of data to make better business decisions has led to an age of unprecedented information storage and easy access. While collection of large amounts of data has increased knowledge and allowed improved efficiencies for business, it has also made attacks upon that information (theft, modification, or holding it for ransom) more profitable for criminals and easier to accomplish. As a result, strong cryptography is often used to protect valuable data. The SNIA Networking Storage Forum (NSF) has recently covered several specific security topics as part of our Storage Networking Security Webcast Series, including Encryption 101, Protecting Data at Rest, and Key Management 101. Now, on August 5, 2020, we are going to present Applied Cryptography. In this webcast, our SNIA experts will present an overview of cryptography techniques for the most popular and pressing use cases. We'll discuss ways of securing data, the factors and trade-offs that must be considered, as well as some of the general risks that need to be mitigated. We'll be looking at:
  • Encryption techniques for authenticating users
  • Encrypting data—either at rest or in motion
  • Using hashes to authenticate information coding and data transfer methodologies
  • Cryptography for Blockchain
As the process for storing and transmitting data securely has evolved, this Storage Networking Security Series provides ongoing education for placing these very important parts into the much larger whole. We hope you can join us as we spend some time on this very important piece of the data security landscape. Register here to save your spot.
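As a small preview of using hashes to authenticate data, here is a minimal sketch with Python's standard-library hmac module. The shared key is a hypothetical stand-in for one obtained from a key-management system; the point is that the receiver recomputes the keyed hash and compares it in constant time, so any tampering is detected.

```python
import hashlib
import hmac

shared_key = b"hypothetical-key-from-a-kms"   # illustrative; never hard-code real keys

def tag(message: bytes) -> bytes:
    """Compute a keyed-hash authentication tag over the message."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

message = b"payload to authenticate"
sent = tag(message)

# Receiver side: recompute and compare in constant time; reject on mismatch.
assert hmac.compare_digest(sent, tag(message))
assert not hmac.compare_digest(sent, tag(message + b"tampered"))
```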


AlexMcDonald

May 27, 2020


Ever wonder how encryption actually works? Experts Ed Pullin and Judy Furlong provided an encryption primer to hundreds of attendees at our SNIA NSF webcast Storage Networking Security: Encryption 101. If you missed it, it's now available on-demand. We promised during the live event to post answers to the questions we received. Here they are:

Q. When using asymmetric keys, how often do the keys need to be changed?

A. How often asymmetric (and symmetric) keys need to be changed is driven by the purpose the keys are used for, the security policies of the organization/environment in which they are used and the length of the key material. For example, the CA/Browser Forum has a policy that certificates used for TLS (secure communications) have a validity of no more than two years.

Q. In earlier slides there was a mention that information can only be decrypted via the private key (not the public key). So, was Bob's public key retrieved using the public key of the signing authority?

A. In asymmetric cryptography the opposite key is needed to reverse the encryption process. So, if you encrypt using Bob's private key (normally referred to as a digital signature), then anyone can use his public key to decrypt. If you use Bob's public key to encrypt, then his private key should be used to decrypt. Bob's public key would be contained in the public key certificate that is digitally signed by the CA, and it can be extracted from the certificate to verify Bob's signature.
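To illustrate the flow described in this answer, here is a minimal sketch using the third-party pyca/cryptography package. The message is hypothetical, and in practice Bob's public key would be extracted from his CA-signed certificate rather than generated locally as shown here.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Bob's key pair; in practice his public key comes from his CA-signed certificate.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

message = b"signed by Bob"   # hypothetical payload
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = bob_private.sign(message, pss, hashes.SHA256())   # only Bob can produce this

# Anyone with Bob's public key can verify; verify() raises InvalidSignature on tampering.
bob_public.verify(signature, message, pss, hashes.SHA256())
```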

Q. Do you see TCG Opal 2.0 or TCG for Enterprise as requirements for drive encryption? What about FIPS 140-2 L2 with cryptography validated by a NIST-accredited 3rd party? As NIST was the key player in selecting AES, their stamp of approval for a FIPS drive seems to be the best way to prove that the cryptographic methods of a specific drive are properly implemented.

A. Yes, the TCG Opal 2.0 and TCG for Enterprise standards are generally recognized in the industry for self-encrypting drives (SEDs)/drive-level encryption. FIPS 140 cryptographic module validation is a requirement for sale into the U.S. Federal market and is recognized in other verticals as well. Validation of the algorithm implementation (e.g. AES) is handled by the Cryptographic Algorithm Validation Program (CAVP), the companion to the FIPS 140 Cryptographic Module Validation Program (CMVP).

Q. Can you explain Constructive Key Management (CKM), which allows different keys to be given to different parties in order to allow levels of credentialed access to components of a single encrypted object?

A. Based on the available descriptions of CKM, this approach uses a combination of key derivation and key splitting techniques. Both of these concepts will be covered in the upcoming Key Management 101 webinar. An overview of CKM can be found in this Computer World article (box at the top right).

Q. Could you comment on Zero Knowledge Proofs and Digital Verifiable Credentials based on Decentralized IDs (DIDs)?

A. A Zero Knowledge Proof is a cryptographic method for proving you know something without revealing what it is. This field of cryptography emerged in the past few decades and has only recently transitioned from theoretical research to practical implementation, with cryptocurrencies/blockchain and multi-party computation (privacy preservation).

Decentralized IDs (DIDs) are an authentication approach that leverages blockchain/decentralized ledger technology. Blockchain/decentralized ledgers employ cryptographic techniques, are an example of applied cryptography, and use several of the underlying cryptographic algorithms described in this 101 webinar.

Q. Is Ed saying every block should be encrypted with a different key?

A. No. We believe the confusion was over the key transformation portion of Ed's diagram. In the AES algorithm, a key transformation occurs that uses the initial key as input and provides the AES rounds their own keys. This key expansion is part of the AES algorithm itself and is known as the key schedule.
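A small sketch may help separate the two layers: the application supplies a single AES key, and the round keys produced by the key schedule are derived entirely inside the cipher implementation. This example uses AES-GCM from the third-party pyca/cryptography package; the plaintext is hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the single initial key the caller supplies
nonce = os.urandom(12)                      # 96-bit nonce, unique per message

aesgcm = AESGCM(key)   # round keys (the key schedule) are derived internally from `key`
ciphertext = aesgcm.encrypt(nonce, b"one key in, many internal round keys", None)
assert aesgcm.decrypt(nonce, ciphertext, None).startswith(b"one key in")
```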

Q. Where can I learn more about storage security?

A. Remember, this Encryption 101 webcast was part of the SNIA Networking Storage Forum's Storage Networking Security Webcast Series. You can keep up with additional installments here and by following us on Twitter @SNIANSF.


Encryption 101: Keeping Secrets Secret

AlexMcDonald

Apr 20, 2020

Encryption has been used through the ages to protect information, authenticate messages, communicate secretly in the open, and even to check that messages were properly transmitted and received without having been tampered with. Now, it’s our first go-to tool for making sure that data simply isn’t readable, hearable or viewable by enemy agents, smart surveillance software or other malign actors. But how does encryption actually work, and how is it managed? How do we ensure security and protection of our data, when all we can keep as secret are the keys to unlock it? How do we protect those keys; i.e., “Who will guard the guards themselves?” It’s a big topic that we’re breaking down into three sessions as part of our Storage Networking Security Webcast Series: Encryption 101, Key Management 101, and Applied Cryptography. Join us on May 20th for the first Encryption webcast: Storage Networking Security: Encryption 101 where our security experts will cover:
  • A brief history of Encryption
  • Cryptography basics
  • Definition of terms – Entropy, Cipher, Symmetric & Asymmetric Keys, Certificates and Digital signatures, etc. 
  • Introduction to Key Management
I hope you will register today to join us on May 20th. Our experts will be on-hand to answer your questions.
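For a taste of what the session will cover, here is a minimal sketch of symmetric encryption using Fernet from the third-party pyca/cryptography package: one shared secret both encrypts and decrypts, which is exactly why key management gets its own session in this series.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the secret that must be guarded (and managed!)
cipher = Fernet(key)

token = cipher.encrypt(b"attack at dawn")   # hypothetical plaintext
assert cipher.decrypt(token) == b"attack at dawn"
# Without `key`, the token is opaque; with it, everything is readable,
# which is why protecting the keys is the heart of the problem.
```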


Networking for Hyperconvergence

AlexMcDonald

Dec 21, 2018

“Why can’t I add a 33rd node?” One of the great advantages of Hyperconverged infrastructures (also known as “HCI”) is that, relatively speaking, they are extremely easy to set up and manage. In many ways, they’re the “Happy Meals” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking. In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate or misunderstand the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition. That’s why the SNIA Networking Storage Forum (NSF) is hosting a live webcast “Networking Requirements for Hyperconvergence” on February 5, 2019. At this webcast, we’re going to take a look behind the scenes, peek behind the GUI, so to speak. We’ll be talking about what goes on back there, and shine the light behind the bezels to see:
  • The impact of metadata on the network
  • What happens as we add additional nodes
  • How to right-size the network for growth
  • Networking best practices to make your HCI work better
  • And more…
Now, not all HCI environments are created equal, so we'll say in advance that your mileage will vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place. Register here to save your spot for February 5th. Our experts will be on hand to answer your questions. This webcast is the second installment of our Storage Networking series. Our first was "Networking Requirements for Ethernet Scale-Out Storage." It's available on-demand, as are all our educational webcasts. I encourage you to peruse the more than 60 vendor-neutral presentations in the NSF webcast library at your convenience.


FCoE vs. iSCSI vs. iSER: Get Ready for Another Great Storage Debate

AlexMcDonald

May 1, 2018

As a follow-up to our first two hugely successful "Great Storage Debate" webcasts, Fibre Channel vs. iSCSI and File vs. Block vs. Object Storage, the SNIA Ethernet Storage Forum will be presenting another great storage debate on June 21, 2018. This time we'll take on FCoE vs. iSCSI vs. iSER. For those of you who've seen these webcasts, you know that the goal of these debates is not to have a winner emerge, but rather to provide unbiased education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions. Here's what you can expect from this session: One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective. Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet), which allows FC protocols over Ethernet, and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying. That leads to several questions about FCoE, iSCSI and iSER:
  • If we can run various network storage protocols over Ethernet, what differentiates them?
  • What are the advantages and disadvantages of FCoE, iSCSI and iSER?
  • How are they structured?
  • What software and hardware do they require?
  • How are they implemented, configured and managed?
  • Do they perform differently?
  • What do you need to do to take advantage of them in the data center?
  • What are the best use cases for each?
Register today to join our SNIA experts as they answer all these questions and more on the next Great Storage Debate: FCoE vs. iSCSI vs. iSER. We look forward to seeing you on June 21st.  


Fibre Channel vs. iSCSI – The Great Debate Generates Questions Galore

AlexMcDonald

Mar 7, 2018

The SNIA Ethernet Storage Forum recently hosted the first of our “Great Debates” webcasts on Fibre Channel vs. iSCSI. The goal of this series is not to have a winner emerge, but rather provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions. And it worked! Over 1,200 people have viewed the webcast in the first three weeks! And the comments from attendees were exactly what we had hoped for:

“A good and frank discussion about the two technologies that don’t always need to compete!”

“Really nice and fair comparison guys. Always well moderated, you hit a lot of material in an hour. Thanks for your work!”

“Very fair and balanced overview of the two protocols.”

“Excellent coverage of the topic. I will have to watch it again.”

If you missed the webcast, you can watch it on-demand at your convenience and download a copy of the slides. The debate generated many good questions and our expert speakers have answered them all:

Q. What is RDMA?

A. RDMA is an acronym for Remote Direct Memory Access. It is a part of a protocol through which memory addresses are exchanged between end points, so that data is able to move directly from the memory in one end point over the network to the memory in the other end point, without involving the end point CPUs in the data transfer. Without RDMA, intermediate copies (sometimes multiple copies) of the data are made on the source and destination end points. RoCEv1, RoCEv2, iWARP, and InfiniBand are all protocols that are capable of performing RDMA transfers. iSER is iSCSI over RDMA and typically uses iWARP or RoCE. SRP is a SCSI RDMA protocol that runs only over InfiniBand. FC uses hardware-based DMA to perform transfers without the need to make intermediate copies of the data, so RDMA is not needed for FC and does not apply to it.

Q. Can multi-pathing be used for load balancing or high availability?

A. Multi-pathing is used both for load balancing and for high availability. In an active-passive setup it is used only for high availability, while in an active-active setup it is used for both.

Q. Some companies are structured so that iSCSI is handled by network services and the storage team supports FC, so there is storage and network overlap. Network people should be aware of storage, and the reverse.

A. Correct. One of the big tradeoffs between iSCSI and FC may end up not being a technology tradeoff at all. In some environments, the political and organizational structure plays as much a part of the technology decision as the technology itself. Strong TCP/IP network departments may demand that they manage everything, or they may demand that storage traffic be kept as far from their network as possible. Strong storage network departments may demand their own private networks (either TCP/IP for iSCSI, or FC). In the end, the politics may play as important a role in the decision of iSCSI vs. FC as the technology itself.

Q. If you have an established storage network (i.e. FC/iSCSI), is there a compelling reason you would switch?

A. Typically, installations grow by adding to their existing configuration (iSCSI installations typically add more iSCSI, and FC installations add more FC). Switching from one technology to another may occur for various reasons; for example, the requirements of the organization have changed such that the other technology better meets its needs, or a company merger dictates a change. Fibre Channel is at 32/128Gb now. iSCSI is already in product at 100Gb, with 200/400Gb next, and so on. In short, Ethernet currently has a shorter speed upgrade cycle than FC. This is especially important now that SSDs have arrived on the scene: with the performance available from SSDs, the SAN is now the potential choke point. With the arrival of Persistent Memory, this problem can be exacerbated yet again, and there the choice of network architecture will be important. One of the reasons why people might switch has very little to do with the technology, but more to do with ancillary reasons. For instance, iSCSI is something of an "outside-in" management paradigm, while Fibre Channel has more of an "inside-out" paradigm. That is, management is centralized in FC, where iSCSI has many more touch-points [link: http://brasstacksblog.typepad.com/brass-tacks/2012/02/fc-and-fcoe-versus-iscsi-network-centric-versus-end-node-centric-provisioning.html]. When it comes to consistency at scale, there are major differences in how each storage network handles management as well as performance. Likewise, if programmability and network ubiquity are more important, then Ethernet-based iSCSI is an appealing technology to consider.

Q. Are certain storage vendors recommending FC over iSCSI for performance reasons because of how their array software works?

A. Performance is not the only criterion, and vendors should be careful to assess their customers' needs before recommending one solution over another. If you feel that a vendor is proposing X because, well, X is all they have, then push back and insist on getting some facts that support their recommendation.

Q. Which is better for a backup solution?

A. Both FC and iSCSI can be used to back up data. If a backup array is emulating a tape library, this is usually easier to do with FC than with iSCSI. Keep in mind that many backup solutions will run their own protocol over Ethernet, without using either iSCSI or FC.

Q. I disagree that Ethernet is cheaper. If you look at the cost of the 10/25Gb SFP+/SFP28 transceivers required vs. 16/32Gb transceiver costs, the FC solution is on par or, in some cases, cheaper than Ethernet solutions. If you limit Ethernet to 10GBASE-T, then yes, it is cheaper.

A. This is part of comparing apples to apples (and not to pineapples). iSCSI solutions are available in a wider range of price choices from 1Gb to 100Gb speeds (there are more lower-cost solutions available with iSCSI than with FC). But when you compare environments with comparable features, the costs of each solution are typically similar. Note that 10/25Gb Ethernet supports DAC (direct-attach copper) cables over short distances, such as within a rack or between adjacent racks, which do not require separate transceivers.

Q. Do you know of a vendor that offers storage arrays with port speeds higher than 10Gbps? How are 50Gbps and 100Gbps Ethernet relevant if they're not available from storage vendors?

A. It's available now for top-of-rack switches and from flash storage startups, as well as a few large storage OEMs supporting 40GbE connections. Additional storage systems will adopt it when it becomes necessary to support greater than 1GB (that's a gigabyte!) per second of data movement from a single port, and most storage systems already offer multiples of 10Gbps ports on a single system. 100GbE iSCSI is in qualification now and we expect there will be offerings from tier-1 storage OEMs later this year. Similarly, higher Fibre Channel port speeds are in the works. However, it's important to note that at the port level, speed is not the only consideration: port configuration becomes increasingly important (e.g., it is possible to aggregate Fibre Channel ports up to 16x the speed of each individual port; Ethernet aggregation is possible too, but it works differently).

Q. Why are there so few vendors in the FC space?

A. Historically, FC started with many vendors. Over the life of FC development, a fair number of mergers and acquisitions has reduced the number of vendors in this space. Today, there are two primary switch vendors and two primary adapter vendors.

Q. You talk about reliable, but how about stable and predictable?

A. Both FC and iSCSI networks can be very stable and predictable. Because FC-SAN is deployed only for storage and has fewer vendors with well-known configurations, it may be easier to achieve the highest levels of stability and predictability with less effort using FC-SAN. iSCSI/Ethernet networks have more setup options and more diagnostic and reporting tools available, so it may be easier to monitor and manage iSCSI networks at large scale once configured.

Q. On performance, how do FC and iSCSI compare on IOPS?

A. IOPS is largely a function of latency and secondarily related to hardware offloads and bandwidth. For this reason, FC and iSCSI connections typically offer similar IOPS performance if they run at similar speeds and with similar amounts of hardware offload from the adapter or HBA. Benchmark reports showing very high IOPS performance for both iSCSI and Fibre Channel are available from 3rd-party analysts.

Q. Are there fewer FC ports due to the high usage of blade chassis that share access, or due to more iSCSI usage?

A. It is correct that most blade servers use Ethernet (FCoE, iSCSI, or NFS), but this is a case of comparing apples and pineapples. FC ports are used for storage in a data center. Ethernet ports can be used for storage in a data center, but they are also used in laptops and desktops for e-mail and web browsing; wireless control of IoT (Internet of Things, e.g., light bulbs, thermostats, etc.); cars (yes, modern automobiles have their own Ethernet network); and many other things. So, if you compare the number of data center storage ports to the number of every other port used for every other type of network traffic, yes, there will be a smaller number associated with only the data center storage ports.

Q. Regarding iSCSI offload cards, we used to believe that software initiators were faster because they could leverage the fast chips in the server. Have iSCSI offload cards changed significantly in recent years?

A. This has traditionally been a function of the iSCSI initiator offload architecture. A full/cmd offload solution tends to be slower, since it executes the iSCSI stack on slow processor firmware in the NIC. A modern PDU-based solution (such as supported by Open-iSCSI on Linux) only offloads performance-critical operations to the silicon and is just as low latency as the software initiator, and perhaps lower.

Q. I think one of the more important differences between FC and iSCSI is that a pure FC network is not routable, whereas iSCSI is, because of the nature of the protocol stack each one relies on. Maybe in that sense iSCSI has an advantage, especially in the hybrid cloud scenarios that are increasingly common today. Am I right?

A. Routability is usually discussed in the context of the TCP/IP network layering model, i.e. how traffic moves through different Ethernet switches/routers and IP domains to get from the source to the destination. iSCSI is built on top of TCP/IP, so it benefits from interoperating with existing Ethernet switching/routing infrastructure and does not require special gateways when leaving the data center, for example in the hybrid cloud case. The industry has also developed a standard to carry Fibre Channel over IP: FCIP. FCIP is routable, and it is already part of the FC-BB-5 standard, which also includes FCoE.

Q. This is all good info, but this is all contingent on the back-end storage, inclusive of the storage array/SAN/NAS/server and disks/SSD/NVMe, actually being able to take advantage of the bandwidth. SAN vendors have been very slow to adopt these larger pipes.

A. New technologies have adoption curves and, to be frank, adoption up the network speed curve has been slow since 10Gbps. A lot of that is due to disk technologies; they haven't gotten much faster in the last decade (bigger, yes, but not faster; it's difficult to drive a big expensive pipe efficiently with slow drives). Now, with SSD and NVMe (and other persistent memory technologies to come), device latency and bandwidth have become a big issue. That will drive the adoption not only of fatter pipes for bandwidth, but also of RDMA technologies to reduce latency.

Q. What is a good source of performance metrics on CPU requirements for pushing/pulling data? This is in reference to the topic of "How can a server support 100Gb/s?"

A. Once 100Gb iSCSI is offloaded via special adapter cards, there should be no more load imposed on the server than any other 100Gb link would require. Websites of independent testing companies (e.g. Demartek) should provide specific information in this regard.

Q. What about iSCSI TLV?

A. This is a construct for placing iSCSI traffic on specific classes of service in a DCBX switch environment, which in turn is used when running a no-drop environment for iSCSI traffic; i.e., it's used for "lossless iSCSI." iSCSI TLV is a configuration setting, not a performance setting. All it does is allow an Ethernet switch to tell an adapter which Class of Service (COS) setting it's using. However, this is actually not necessary for iSCSI, and in some cases [see e.g. https://blogs.cisco.com/datacenter/the-napkin-dialogues-lossless-iscsi] may actually be undesirable. iSCSI is built on TCP and inherits the reliability features of the underlying TCP layer, so it does not need a DCBX infrastructure. In the case of hardware-offloaded iSCSI, if a loss is observed in the system, the TCP retransmissions happen at silicon speeds without perturbing the host software, and the resulting performance impact to the iSCSI traffic is insignificant. Further, Ethernet speeds have been rising rapidly and have been overcoming any need for any type of traffic pacing.

Q. How far away is standards-based NVMe over 100G Ethernet? Surely once 100GE starts to support block storage applications, 128G FC becomes unattractive?

A. NVMe over Fabrics (NVMe-oF) is a protocol that is independent of the underlying transport network. That is, the protocol can accept any speed of the transport underneath. The key thing, then, is when you will find operating system support for running the protocol with faster transport speeds. For instance, NVMe-oF over 10/25/40/50/100G Ethernet is available with RHEL7.4 and RHEL7.5. NVMe-oF over high-speed Fibre Channel will be dependent upon the adapter manufacturers' schedules, as the qualification process is a bit more thorough. It may be challenging for FC to keep up with the Ethernet ecosystem, either in price or in the pace of introducing new speed bumps, due to the much larger Ethernet ecosystem, but the end-to-end qualification process and the ability to run multi-protocol deterministic storage on Fibre Channel networks often matter more than raw speed in practical use.

Q. Please comment on the differences/similarities from the perspective of troubleshooting issues.

A. Both Fibre Channel and iSCSI use similar troubleshooting techniques. Tools such as traceroute, ping, and others (the names may be different, but the functionality is the same) are common across both network types.
Fibre Channel's troubleshooting tools are available at both the adapter level and the switch level, but since Fibre Channel has the concept of a fabric, many of the tools are system-wide. This allows many common steps to be taken in one centralized management location. Troubleshooting the TCP/IP layer of iSCSI is no different from the rest of TCP/IP that IT staff are used to, and standard debugging tools work. Troubleshooting the iSCSI layer is very similar to FC, since both essentially appear as SCSI and offer essentially the same services.

Q. Are TOE cards required today?

A. TOE cards are not required today. TCP Offload Engines (TOEs) have both advantages and disadvantages. TOEs are more expensive than ordinary non-TOE Network Interface Chips (NICs), but they reduce the CPU overhead involved with network traffic. In some workloads, the extra CPU overhead of a normal NIC is not a problem, but in other heavy network workloads, the extra CPU overhead of the normal NIC reduces the amount of work that the system is able to perform, and the TOE provides an advantage (by freeing up extra CPU cycles to perform real work). For 10Gb, you can do without an offload card if you have enough host CPU cycles at your disposal or, in the case of a target, if you are not aggregating too many initiators, or are not using SSDs and do not need the high IOPS. At 40Gb and above, you will likely need offload assist in your system.

Q. Are queue depths the same for both FC and iSCSI? Or are there any differences?

A. Conceptually, the concepts of queue depth are the same. At the SCSI layer, queue depth is the number of commands that a LUN may be concurrently operated on. When that number of outstanding commands is reached, the LUN refuses to accept any additional commands (any additional commands are failed with the status of TASK SET FULL). As a SCSI-layer concept, the queue depth is not impacted by the transport type (iSCSI or FC). There is no relationship between this value and concepts such as FC buffer credits or iSCSI R2T (Ready to Transfer). In addition, some adapters have a limit on the number of outstanding commands that may be present at the adapter layer. As a result of interactions between the queue depth limits of an individual LUN and the queue depth limits of the adapters, hosts often allow for administrative management of the queue depth. This management enables a system administrator to balance the I/O load across LUNs so that a single busy LUN does not consume all available adapter resources. In this case, the queue depth value set at the host is used by the host as a limiter of the number of concurrent outstanding commands (rather than waiting for the LUN to report the TASK SET FULL status). Again, management of these queue depth values is independent of the transport type. However, on some hosts, the management of queue depth may appear different (for example, the commands used to set a maximum queue depth for a LUN on an FC transport vs. a LUN on an iSCSI transport may be different).

Q. Is VMware happier with FC or iSCSI, assuming almost the same speed? What about network delay, in contrast with the FC protocol, which is (or was) faster?

A. Unfortunately, we can't comment on an individual company's best practice recommendations. However, you can refer to VMware's Best Practices Guides for Fibre Channel and iSCSI: Best Practices for Fibre Channel Storage, and Best Practices For Running VMware vSphere on iSCSI.
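To make the queue-depth answer above concrete, here is a minimal sketch of how a Linux host exposes the per-LUN SCSI queue depth. The sysfs path is the standard location for SCSI disks, the device name is hypothetical, and the same mechanism applies whether the LUN arrives over FC or iSCSI, since queue depth is a SCSI-layer concept.

```python
from pathlib import Path

def queue_depth(dev: str = "sda") -> int:
    """Read the current per-LUN SCSI queue depth for a block device."""
    return int(Path(f"/sys/block/{dev}/device/queue_depth").read_text())

# Writing a new value to the same file (as root) adjusts the limit, letting an
# administrator stop one busy LUN from consuming all adapter resources.
print(queue_depth())
```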
Q. Does iSCSI have true load balancing when Ethernet links are aggregated, meaning the links share even loads? Can it be done across switches? I'm trying to achieve load balancing and redundancy at the same time.

A. Most iSCSI software and hardware offload implementations support load balancing via "multi-pathing" between a server and storage array, which provides the ability to balance load between paths when all paths are present and to handle failures of a path at any point between the server and the storage. Multi-pathing is also the de facto standard for load balancing and high availability in most Fibre Channel SAN environments.

Q. Between FC and iSCSI, what are the typical workloads for one or the other?

A. It's important to remember that both Fibre Channel and iSCSI are block storage protocols. That is, for applications and workloads that require block storage, both Fibre Channel and iSCSI are relevant. From a connectivity standpoint, there is not much difference between the protocols at a high level: you have an initiator in the host, a switch in the middle, and a storage target at the other end. What becomes important, then, is topologies and architectures. Fibre Channel has a tightly controlled oversubscription ratio, which is the number of hosts that we allow to access a single storage device (ratios typically fall between 4:1 and 20:1, depending on the application). iSCSI, on the other hand, has a looser relationship with oversubscription ratios, which can often be several dozen to one storage target.

Q. For IPsec with iSCSI, are there hardware offload capabilities to do the encryption/decryption in iSCSI targets, or is it all done in software?

A. Both hardware offload and software solutions are available, and the tradeoff is typically cost. With a software solution, you pay the cost in extra CPU overhead. If your CPU is not already busy, then that cost is very low (you may not even notice). If, however, your CPU is busy, then the overhead of IPsec will slow down your application from getting real work done. With the hardware offload solution, the cost is the extra money to purchase the hardware itself. On the upside, the newest CPUs offer new instructions for reducing the overhead of the software processing of various security protocols. Chelsio's T6 offers integrated IPsec and TLS offload. This encryption capability can be used either for data-at-rest purposes (independent of the network link) or in conjunction with iSCSI (but requires a special driver). The limitation of the special driver will be removed in the next generation.

Q. For any of the participants: are there any detailed FC/iSCSI installation guides (for dummies) you use or recommend from any vendor?

A. No, there isn't a single set of installation guides, as best practices vary by storage and network vendor. Your storage or network vendor is the best place to start.

Q. If iSCSI is used in a shared mode, how is the performance?

A. Assuming this refers to sharing the link (pipe), iSCSI software and hardware implementations may be configured to utilize a portion of the aggregate link bandwidth without affecting performance.

Q. Any info on FCoE (Fibre Channel over Ethernet)?

A. There are additional talks on FCoE, both on-demand webcasts and blogs, available from the SNIA site. In summary, FCoE is an encapsulation of the FC protocol into Ethernet packets that are carried over an Ethernet wire (without the use of TCP or IP).

Q. What is FC's typical network latency in relation to storage access, compared to iSCSI?

A. For hardware-offloaded iSCSI, the latency is essentially the same, since both stacks are processed at silicon speeds.

Q. With 400Gbps Ethernet on the horizon, and cloud providers and enterprises adopting hyperconverged architectures based on Ethernet, isn't this finally the death of FC, at least in the mainstream, with the exception of some niche verticals, which also still run mainframes?

A. No; tape is still with us, and its demise has been predicted for a long time. There are still good reasons for investing in FC: for example, sunk costs, traditional environments and applications, and the other advantages explained in the presentation. That said, the ubiquity of the Ethernet ecosystem, which drives features, performance and lower cost, has been and will continue to be a major challenge for FC.

And so, the FC vs. iSCSI debate continues. Ready for another "Great Debate"? So are we. Register now for our next live webcast "File vs. Block vs. Object" on April 17th. We hope to see you there!


The Great Debates – Our Next Webcast Series

AlexMcDonald

Feb 5, 2018

The SNIA ESF is announcing a new series of webcasts, following our hugely successful "Everything You Wanted To Know About Storage But Were Too Proud To Ask" webcasts. Those focused on explaining storage technology from the ground up, and while they were fairly all-encompassing in their storage technology coverage, they didn't compare or contrast similar technologies that perform broadly similar functions. That's what we're going to do in our new "Great Debates" series, the first of which was "FC vs. iSCSI." It's now available on-demand, and I encourage you to check it out. It's a great debate with experts who really know their stuff. But wait... FC vs. iSCSI? That "versus" sounds more like an argument than a discussion. Was there a winner? Was this a technology fight, with a clear-cut winner and a loser? The answer is an emphatic "No!"

“It is better to debate a question without settling it, than to settle a question without debating it,” said Joubert, a French essayist who lived in the Napoleonic era. It’s a sentiment that all of us in the SNIA ESF agree with wholeheartedly. This series isn’t about winners and losers; it’s about providing essential compare and contrast information between similar technologies. Mostly we’ll not settle any arguments as to which is better – but we’ll have debated the arguments, pointed out advantages and disadvantages, and made the case for specific use cases. We’re not looking for winners and losers, and we think we succeeded in this first debate on FC vs. iSCSI. Here are some of the attendee comments we got on that webcast:

“Really nice and fair comparison …” “A good and frank discussion about the two technologies that don’t always need to compete!” “Very fair and balanced overview of the two protocols”

Follow us on Twitter @SNIAESF to make sure you don’t miss announcements of our Great Debates series. To date, we’re planning: “Block vs File,” “File vs Object,” “RoCE, iWARP or iSER?” and more. If you have a debate suggestion, please let us know.

