NVMe® over Fabrics for Absolute Beginners

J Metz, Rockport Networks

Feb 23, 2021

A while back I wrote an article entitled “NVMe™ for Absolute Beginners.” It seems to have resonated with a lot of people and it appears there might be a call for doing the same thing for NVMe® over Fabrics (NVMe-oF™).

This article is for absolute beginners. If you are a seasoned (or even moderately-experienced) technical person, this probably won’t be news to you. However, you are free (and encouraged!) to point people to this article who need Plain English™ to get started.

A Quick Refresher

Any time an application on a computer (or server, or even a consumer device like a phone) needs to talk to a storage device, there are a few things that you need to have. You need memory (like RAM), you need a CPU, and you also need something that can hold onto your data for the long haul (also called storage).

Another thing you need to have is a way for the CPU to talk to the memory device (on one hand) and the storage device (on the other). Thing is, CPUs talk a very specific language, and historically memory could speak that language, but storage could not.

For many years, things ambled along in this way. The CPU would talk natively with memory, which made it very fast but also somewhat risky, because memory was considered volatile. That is, if there was a power blip (or the power went out completely), any data in memory would be wiped out.

Not fun.

So, you wanted to have your data stored somewhere permanently, i.e., on a non-volatile medium. For many years, that meant hard disk drives (HDDs). This was great, and worked well, but didn’t really work fast.

Solid State Drives, or SSDs, changed all that. SSDs don’t have moving parts, which ultimately meant that you could get your data to and from the device faster. Much faster. However, as they got faster, it became clear that because the CPU didn’t talk to SSDs natively using the same language – and needed an adapter of some kind – we weren’t moving data as fast as we wanted to.

Enter Non-Volatile Memory Express (NVMe).

NVMe changed the nature of the game completely. For one, it removed the need for an adapter by allowing the CPU to talk to the storage natively. (In technical terms, what it did was allow the CPU to treat storage as if it were memory, with which it could already speak natively through a protocol called PCIe).

The second thing that was pretty cool was that NVMe changed the nature of the relationship with storage. Where the old model was necessarily a 1:1 relationship between the host and the storage device, NVMe meant that you could have more than one relationship between devices.
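To make the "more than one relationship" idea concrete, here is a toy sketch, in Python with hypothetical names; real NVMe queues are memory rings managed by the driver and controller, not application objects. What the sketch illustrates is real, though: the NVMe specification allows up to 65,535 I/O queue pairs, each up to 65,536 entries deep, so each CPU core can have its own path to the device.

```python
# Toy illustration only -- names and structure are hypothetical,
# not an actual NVMe API.
from collections import deque

class LegacyDevice:
    """Old model: one shared command queue (a 1:1 relationship)."""
    def __init__(self):
        self.queue = deque()

class NvmeDevice:
    """NVMe model: many paired submission/completion queues.
    The spec allows up to 65,535 I/O queue pairs, each up to
    65,536 entries deep -- e.g., one pair per CPU core."""
    MAX_QUEUE_PAIRS = 65_535

    def __init__(self, num_queue_pairs):
        assert 1 <= num_queue_pairs <= self.MAX_QUEUE_PAIRS
        self.submission_queues = [deque() for _ in range(num_queue_pairs)]
        self.completion_queues = [deque() for _ in range(num_queue_pairs)]

    def submit(self, core_id, command):
        # Each core can use its own queue -- no shared bottleneck.
        self.submission_queues[core_id % len(self.submission_queues)].append(command)

dev = NvmeDevice(num_queue_pairs=8)   # e.g., one queue pair per core
dev.submit(core_id=3, command="READ LBA 0x1000")
```

The point of the many-queue design is parallelism: cores no longer contend for a single queue the way they did with older storage interfaces.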

Very cool.

Since I wrote the “NVMe for Absolute Beginners” article a few years ago, the technology has taken off like wildfire. In only a few short years, more NVMe storage drives have shipped than drives based on the previous go-to technology (i.e., SATA).

By this point, there are many, many more articles written about NVMe than there were back then. Now, however, there are a lot of questions about what happens when you want to go outside of the range of PCIe.

NVMe® over Fabrics

Thing is, NVMe using PCIe is a technology that is best used inside a computer or server. PCIe is not generally regarded as a “fabric” technology.

So what is a “fabric” technology, and what makes it so special?

Like anything else, there are trade-offs when it comes to technology. The great thing about NVMe using PCIe is that it is wicked fast. The not-so-great thing about NVMe using PCIe is that it’s contained inside of a single computer. If you want to go outside of the computer, well, things get tricky… unless you do something special.

In general terms, a “fabric” is that “something special.” It’s not as easy as putting a storage device at the end of a wire and calling it quits. Oh no; there is so much more that needs to be done.

Any time you want to go outside of a computer or server, you need to be extra careful, because there are a lot more things that can go wrong. As in, an exponential number of things can go wrong. Not only do you need to try your best to make sure that things don’t go wrong in the first place, but you need to put systems in place to handle those problems when they do.

The good news is that there are a lot of choices when it comes to solving this problem: storage networks. These have been the tried-and-true means by which people have handled storage solutions at scale. Technologies like Fibre Channel, Ethernet and InfiniBand have been used to connect servers and storage for years. Each one has its place, and each one has its fans (and with good reason).

Because of this, there was no reason for the NVM Express group (the people behind the NVMe protocol) to create their own new fabric. Why re-invent the wheel? Instead, it was much better to use the battle-hardened technologies that were already available.

That’s why it’s called NVMe over Fabrics; we are simply using the NVMe protocol to piggy-back on top of networking technologies that already exist.

The Magic of Bindings

Imagine you’re rebuilding a Jeep. At a high level, you have two basic parts to a Jeep’s structure: you have the chassis, and you have the body. As you can imagine, you can’t simply place the body on top of the chassis and start driving around. The body is going to eventually slide right off the chassis. Not exactly safe.

By the same logic, we can’t simply place the NVMe commands on top of a Fabric and expect, magically, that everything is going to work out all the time. Just like our Jeep body, there needs to be a strong connection with what happens underneath.

In NVMe-oF parlance, these are called bindings.

Bindings solve a number of problems. They are the glue that holds the NVMe communication language to the underlying fabric transport (whether it is Fibre Channel, InfiniBand, or various forms of Ethernet).

In particular, bindings:

  • Define the establishment of a connection between NVMe and the transport/fabric
  • Restrict capabilities based upon what the transport fabric can (or can’t) do
  • Identify how NVMe is managed, administratively, using the transport/fabric
  • Establish requirements of size, authentication, type of information, etc., depending upon specific transport fabric methods

If you consider that with networking technology we think in terms of layers, the NVMe over Fabrics bindings sit on top of the transport fabrics layer, and it is the responsibility of the organizations who represent those transport fabrics to make sure that there are appropriate connections into the bindings from the other side.
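As a concept sketch, a binding can be pictured as the interface that each transport must implement before NVMe can ride on top of it. The Python below is purely illustrative (the class and method names are hypothetical, not a real NVMe-oF API), but it mirrors the responsibilities in the bullet list above: connection establishment, capability restrictions, and transport-specific requirements. The one real detail used here is that 4420 is the IANA-assigned port for NVMe/TCP.

```python
# Hypothetical sketch of what a "binding" is responsible for.
# This illustrates the concept; it is not an actual NVMe-oF interface.
from abc import ABC, abstractmethod

class TransportBinding(ABC):
    """Glue between the NVMe protocol and one underlying fabric."""

    @abstractmethod
    def establish_connection(self, target_address):
        """Define how an NVMe host/controller association is set up."""

    @abstractmethod
    def supported_capabilities(self):
        """Restrict NVMe features to what this fabric can actually do."""

    @abstractmethod
    def max_data_size(self):
        """Transport-specific requirements such as transfer sizes."""

class TcpBinding(TransportBinding):
    """One concrete binding; values below are illustrative."""
    def establish_connection(self, target_address):
        return f"TCP connection to {target_address}:4420"  # 4420 = IANA NVMe/TCP port
    def supported_capabilities(self):
        return {"in_capsule_data": True}
    def max_data_size(self):
        return 8192

binding = TcpBinding()
print(binding.establish_connection("192.0.2.10"))
```

Each fabric (Fibre Channel, RDMA, TCP) would supply its own concrete binding, which is exactly why the transport-side standards bodies have work to do on their end.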

For instance, the T11 standards body is responsible for making the changes to the Fibre Channel standards so that Fibre Channel can interact with the bindings appropriately, not simply sling NVMe commands from one side to the other.

You can find out more about how this works in the Fibre Channel example by watching the FCIA BrightTalk Webinar – Introduction to FC-NVMe – by yours truly and Craig Carlson from Cavium (now Marvell).

Types of Fabrics

Now, I’ve given you an example of one type of Fabric that can be used for NVMe-oF, but Fibre Channel is not the only one. In fact, the magic of NVMe-oF is that you can choose from a number of transport types. Picture the host at the top and the storage at the bottom, with all of the different networking options that could be used sitting in the middle.

Now, the interesting thing here is that NVMe-oF is not those different types of transports. On the contrary, there are different technological bodies that work on those different transports. Instead, the magic of NVMe over Fabrics is the binding layer that sits between the NVMe protocol and each of those transports.
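For orientation, the standardized NVMe-oF transport families can be summarized in a few lines of Python. The groupings themselves are from the NVMe-oF specifications (NVMe/TCP was added in NVMe-oF 1.1); the code is just a convenient way to show them.

```python
# The NVMe-oF transport families and the fabrics beneath them.
# Groupings per the NVMe-oF specifications; NVMe/TCP arrived in NVMe-oF 1.1.
NVME_OF_TRANSPORTS = {
    "Fibre Channel": ["Fibre Channel (FC-NVMe, standardized by T11)"],
    "RDMA": ["InfiniBand", "RoCE (RDMA over Converged Ethernet)", "iWARP"],
    "TCP": ["plain Ethernet/IP networks, no RDMA hardware required"],
}

for family, fabrics in NVME_OF_TRANSPORTS.items():
    print(f"{family}: {', '.join(fabrics)}")
```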

To Bind or Not To Bind

Now, it’s important to know that just because the NVM Express group defines the bindings format for NVMe-oF™ (the “™” is intentional, here), it doesn’t mean that this is the only way to do it. In fact, before the NVMe over Fabrics standard was ratified, there were quite a few companies who created their own forms of moving NVMe commands from one device to another.

Let me be absolutely clear here: there is nothing wrong with this!

Just because someone has a solution that isn’t standardized does not mean that they are doing something wrong or, worse, doing something nefarious. All it means is that they have figured out a different way to handle the means by which they send NVMe commands from one place to another.

However…

It’s valuable to know whether a company is using a standardized version of NVMe over Fabrics or a proprietary way of using a fabric to transport NVMe. The reason it’s important is that storage is an end-to-end problem: you need to know how all of the parts fit together, and what (if any) special attention needs to be paid in order to make everything work together seamlessly.

For that reason, even though the acronym NVMe-oF™ looks funny[1], it is the official acronym for NVMe™ over Fabrics. There are a number of other popular acronyms, however, that have been used to represent networked NVMe:

  • NVM/f
  • NVMe/F
  • NVMf
  • NVMe-F
  • NVMe-oE (“over Ethernet”)
  • And so on…

Most of the time these are innocent and harmless mistakes, or simply affectations for a particular type of acronym. The problem comes when a vendor uses a different acronym because it looks like they are using a standardized version of the bindings when in fact it is not.

Taking advantage of people’s ignorance of the proper terminology in order to make your product look like something it isn’t is, well, uncool. You should especially beware if someone uses a trademark symbol (“™”) with an incorrect acronym.

Bottom Line

NVMe over Fabrics is a way of extending NVMe outside of a computer/server. It is more than simply slapping the commands onto a network, and it still helps to know the pros and cons of each transport fabric as it applies to what you need to do.

Remember, there is no such thing as a panacea for storage. Storage still has a very, very hard job:

Give me back the correct bit I asked you to hold on to for me.

Everything that happens inside of NVMe and NVMe-oF is designed to help make sure that happens.

If you are interested in learning more about NVMe, and NVMe over Fabrics, may I recommend some additional reading and videos (whichever you prefer) from the SNIA Educational Library.

[1] The reason why the acronym was chosen was because it was supposed to reflect the various forms of NVMe. For instance, the NVMe Management Interface is known as NVMe-MI™, and the group wished for there to be consistency across all the acronyms.



Security & Privacy Regulations: An Expert Q&A

J Metz

Sep 24, 2020

Last month the SNIA Networking Storage Forum continued its Storage Networking Security Webcast series with a presentation on Security & Privacy Regulations. We were fortunate to have security experts, Thomas Rivera and Eric Hibbard, explain the current state of regulations related to data protection and data privacy. If you missed it, it’s available on-demand.

Q. Do you see the US working towards a national policy around privacy or is it going to stay state-specified?

A. This probably will not happen anytime soon due to political reasons. Having a national policy on privacy is not necessarily a good thing, depending on your state. Such a policy would likely have a preemption clause and could be used to diminish requirements from states like CA and MA.

Q. Can you quickly summarize the IoT law? Does it force IoT manufacturers to continually support IoT devices (i.e., security patches) through their lifetime?

A. The California IoT law is vague, in that it states that devices are to be equipped with “reasonable” security feature(s) that are all of the following:
  • Appropriate to the nature and function of the device
  • Appropriate to the information it may collect, contain, or transmit
  • Designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure
This is sufficiently vague that it may be left to lawyers to determine whether requirements have been met. It is also important to remember that “IoT” is a nickname, because the law applies to all “connected devices” (i.e., any device or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address). It also states that if a connected device is equipped with a means for authentication outside a LAN, it requires either a preprogrammed password that is unique to each device manufactured, or a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.

Q. You didn’t mention Brexit – to date the plan is to follow GDPR, but it may change. Any thoughts?

A. British and European Union courts recognize a fundamental right to data privacy under Article 8 of the binding November 1950 European Convention on Human Rights (ECHR). In addition, Britain had to implement GDPR as a member nation. Post-Brexit, the UK will not have to continue implementing GDPR like the other member countries of the EU. However, Britain will be subject to EU data transfer approval as a “third country” like the US. Speculation has been that Britain would attempt a “Privacy Shield” agreement modeled after the arrangement between the United States and the European Union. With the recent Court of Justice of the European Union judgment declaring as “invalid” the European Commission’s Decision (EU) 2016/1250 of 12 July 2016 on the adequacy of the protection provided by the EU-U.S. Privacy Shield (i.e., the EU-U.S. Privacy Shield Framework is no longer a valid mechanism to comply with EU data protection requirements when transferring personal data from the European Union to the United States), such an approach is now unlikely.

It is not clear what Britain will do at this point and, as with many elements of Brexit, Britain could find itself digitally isolated from the EU if data privacy is not handled as part of the separation agreement.

Q. In thinking of privacy – what are your thoughts on encryption being challenged by the EARN IT Act, LAED Act, etc.? It seems like that is going against a nationwide privacy movement, if there is one.

A. The US Government (and many others) have a love/hate relationship with encryption. They want everyone to use it to protect sensitive assets, unless you are a criminal, and then they want you to do everything in the clear so they don’t have to work too hard to catch and prosecute you… or simply persecute you. The back-door argument is amusing because most governments don’t have the ability to prevent something like this from being exploited by attackers (non-government types). If the US Government can’t secure its own personnel records, which potentially exposes every civil servant along with his/her family and colleagues to attacks, how could they protect something as important as a back-door? If you want to learn more about encryption, watch the Encryption 101 webcast we did as part of this series.


J Metz

Sep 14, 2020

Last month, the SNIA Cloud Storage Technologies Initiative was fortunate to have artificial intelligence (AI) expert Parviz Peiravi explore the topic of AI Operations (AIOps) at our live webcast, “IT Modernization with AIOps: The Journey.” Parviz explained why the journey to cloud native and microservices, and the complexity that comes along with that, requires a rethinking of enterprise architecture. If you missed the live presentation, it’s now available on demand together with the webcast slides. We had some interesting questions from our live audience. As promised, here are answers to them all:

Q. Can you please define the data lake and how it differs from other data storage models?

A. A data lake is another form of data repository with specific capabilities that allow data ingestion from different sources with different data types (structured, unstructured and semi-structured), with the data stored as is and not transformed. The data transformation process of Extract, Load, Transform (ELT) follows schema-on-read, versus the schema-on-write Extract, Transform and Load (ETL) that has been used in traditional database management systems. See the definition of data lake in the SNIA Dictionary here.

In 2005, Roger Mougalas coined the term Big Data; it refers to the large-volume, high-velocity data generated by the Internet and billions of connected intelligent devices that was impossible to store, manage, process and analyze with traditional database management and business intelligence systems. The need for high-performance data management systems and advanced analytics that can deal with a new generation of applications, such as Internet of Things (IoT), real-time applications and streaming apps, led to the development of data lake technologies. Initially, the term “data lake” referred to the Hadoop Framework and its distributed computing and file system, which bring storage and compute together and allow faster data ingestion, processing and analysis.

In today’s environment, “data lake” could refer to both physical and logical forms: a logical data lake could include Hadoop, a data warehouse (SQL/NoSQL) and object-based storage, for instance.

Q. One of the aspects of replacing and enhancing a brownfield environment is that there are different teams in the midst of different budget cycles. This makes greenfield very appealing. On the other hand, greenfield requires a massive capital outlay. How do you see the percentages of either scenario working out in the short term?

A. I do not have an exact percentage, but the majority of enterprises using a brownfield implementation strategy have been in place for a long time. In order to develop and deliver new capabilities with velocity, greenfield approaches are gaining significant traction. Most new application development based on microservices/cloud native is being implemented in greenfield to reduce risk and cost, using cloud resources available today at smaller scale at first and adding more resources later.

Q. There is a heavy reliance upon mainframes in banking environments. There’s quite a bit of error that has been eliminated through decades of best practices. How do we ensure that we don’t build in error because these models are so new?

A. The compelling reasons behind mainframe migration – besides the cost – are the ability to develop and deliver new application capabilities and business services, and making data available to all other applications. There are four methods for mainframe migration:
  • Data migration only
  • Re-platforming
  • Re-architecting
  • Re-factoring
Each approach provides enterprises with different degrees of risk and freedom. Applying best practices to both application design/development and operational management is the best way to ensure smooth application migration from a monolith to a new distributed environment such as microservices/cloud native. Data architecture plays a pivotal role in the design process, in addition to applying a Continuous Integration and Continuous Delivery (CI/CD) process.

Q. With the changes into a monolithic data lake, will we be seeing different data lakes with different security parameters, which just means that each lake is simply another data repository?

A. If we follow a domain-driven design principle, you could have multiple data lakes with specific governance and security policies appropriate to each domain. Multiple data lakes could be accessed through data virtualization to mimic a monolithic data lake; this approach is based on a logical data lake architecture.

Q. What’s the difference between multiple data lakes and multiple data repositories? Isn’t it just a matter of quantity?

A. Looking from a Big Data perspective, a data lake not only stores data but also provides capabilities to process and analyze it (e.g., the Hadoop framework/HDFS). New trends are emerging that separate storage and compute (e.g., disaggregated storage architectures), hence some vendors use the term “data lake” loosely and offer only storage capability, while others provide both storage and data processing capabilities as an integrated solution. What is more important than the definition of a data lake is your usage and specific application requirements, which determine which solution is a good fit for your environment.
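The schema-on-write (ETL) versus schema-on-read (ELT) distinction mentioned earlier can be sketched in a few lines of Python. The data and function names here are hypothetical, purely for illustration: the warehouse path validates and transforms records before storing them, while the lake path stores the raw records untouched and applies a schema only at query time.

```python
import json

# Hypothetical raw feed: one well-formed record, one missing a field.
raw_records = ['{"user": "a", "age": "41"}', '{"user": "b"}']

# Schema-on-write (ETL): validate/transform BEFORE storing; bad rows rejected.
def etl_store(records):
    table = []
    for r in records:
        row = json.loads(r)
        if "age" not in row:
            continue  # enforced schema: reject rows missing required fields
        table.append({"user": row["user"], "age": int(row["age"])})
    return table

# Schema-on-read (ELT): store raw as-is; interpret only when querying.
def elt_store(records):
    return list(records)  # the data lake keeps the raw records untouched

def elt_query(lake):
    return [json.loads(r).get("age") for r in lake]

warehouse = etl_store(raw_records)  # transformed, schema enforced up front
lake = elt_store(raw_records)       # raw strings preserved for later use
```

The trade-off is visible even at this toy scale: the warehouse silently drops the incomplete record, while the lake keeps it and leaves the interpretation to each query.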

Olivia Rhye

Product Manager, SNIA


Ready for a Lesson on Security & Privacy Regulations?

J Metz

Jul 10, 2020


Worldwide, regulations are being promulgated and aggressively enforced with the intention of protecting personal data. These regulatory actions are being taken to help mitigate exploitation of this data by cybercriminals and other opportunistic groups who have turned this into a profitable enterprise. Failure to meet these data protection requirements puts individuals at risk (e.g., identity theft, fraud, etc.), as well as subjecting organizations to significant harm (e.g., legal penalties).

The SNIA Networking Storage Forum (NSF) is going to dive into this topic at our Security & Privacy Regulations webcast on July 28, 2020. We are fortunate to have experts Eric Hibbard and Thomas Rivera share their expertise in security standards, data protection and data privacy at this live event.

This webcast will highlight common privacy principles and themes within key privacy regulations. In addition, the related cybersecurity implications will be explored. We'll also probe a few of the recent regulations/laws to outline interesting challenges due to over and under-specification of data protection requirements (e.g., "reasonable" security).

Attendees will have a better understanding of:

  • How privacy and security are characterized
  • Data retention and deletion requirements
  • Core data protection requirements of sample privacy regulations from around the globe
  • The role that security plays with key privacy regulations
  • Data breach implications and consequences

This webcast is part of our Storage Networking Security Webcast Series. I encourage you to watch the presentations we've done to date on:

And I hope you will register today and join us on July 28th for what is sure to be an interesting look into the history, development and impact of these regulations.   




J Metz

Jun 18, 2020


Key management focuses on protecting cryptographic keys from threats and ensuring keys are available when needed. And it’s no small task. That's why the SNIA Networking Storage Forum (NSF) invited key management and encryption expert, Judy Furlong, to present a “Key Management 101” session as part of our Storage Networking Security Webcast Series. If you missed the live webcast, I encourage you to watch it on-demand, as it was highly rated by attendees. Judy answered many key management questions during the live event; here are answers to those, as well as to the ones we did not have time to get to.

Q. How are the keys kept safe in local cache?

A. It depends on the implementation. Options include: 1. Only storing wrapped keys (each key individually encrypted with another key) in the cache. 2. Encrypting the entire cache content with a separate encryption key. In either case, one needs to properly protect/manage the wrapping key (KEK) or cache master key.

Q. Key rotation question: a Self-Encrypting Drive (SED) requires a permanent encryption key. How is rotation done?

A. It is the Authentication Encryption Key (AEK), used to access and protect the Data (Media) Encryption Key (DEK), that can be rotated. If you change/rotate the DEK you destroy the data on the disk.
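In other words, rotation re-wraps the never-changing DEK under a fresh AEK; the data encrypted with the DEK is untouched. A minimal sketch of that flow, where the XOR-based wrap is a toy stand-in for a real key-wrap cipher such as AES Key Wrap, and all key names are illustrative:

```python
import hashlib
import secrets

def wrap(aek: bytes, dek: bytes) -> bytes:
    """Toy key wrap: XOR the DEK with a digest of the AEK.
    Illustration only -- real drives use a proper key-wrap cipher."""
    pad = hashlib.sha256(aek).digest()
    return bytes(p ^ d for p, d in zip(pad, dek))

unwrap = wrap  # XOR is its own inverse

# The DEK is generated once on the drive and never changes.
dek = secrets.token_bytes(32)
old_aek = secrets.token_bytes(32)
stored = wrap(old_aek, dek)

# Rotation: unwrap with the old AEK, re-wrap with the new one.
new_aek = secrets.token_bytes(32)
stored = wrap(new_aek, unwrap(old_aek, stored))

assert unwrap(new_aek, stored) == dek  # same DEK, so the data stays readable
```

Only the wrapping changes during rotation, which is why rotating the AEK is safe while rotating the DEK itself would destroy the data.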

Q. You may want to point out that many people use “FIPS” for FIPS 140, which isn’t strictly correct, as there are numerous FIPS standards.

A. Yes, it is true that many people refer to FIPS 140 as just “FIPS,” which, as noted, is incorrect. There are many Federal Information Processing Standards (FIPS). That is why when I present or write something I am careful to always add the appropriate FIPS reference number (e.g., FIPS 140, FIPS 186, FIPS 201, etc.).

Q. So is the math for M of N key sharing the same as used for object store?

A. Essentially yes; the same mathematical concepts are being used. However, the object store approach uses a combination of data splitting and key splitting to allow encrypted data to be stored across a set of cloud providers.
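The usual construction behind M of N key sharing is Shamir's secret sharing: the key becomes the constant term of a random degree-(M-1) polynomial over a prime field, N points on the polynomial are handed out, and any M of them recover the key by Lagrange interpolation. A minimal sketch (the prime 2**255 - 19 and the 3-of-5 split are illustrative choices, not taken from the webcast):

```python
import secrets

P = 2**255 - 19  # a well-known prime, large enough to hold a 254-bit secret

def split(secret: int, n: int, m: int, p: int = P):
    """Split `secret` into n shares, any m of which can reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(m - 1)]
    def f(x):  # evaluate the polynomial at x (Horner's method)
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % p
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares, p: int = P) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

key = secrets.randbelow(P)
shares = split(key, n=5, m=3)
assert combine(shares[:3]) == key            # any 3 shares suffice...
assert combine([shares[0], shares[2], shares[4]]) == key
```

With fewer than M shares every candidate secret remains equally likely, which is what makes the scheme information-theoretically secure.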

Q. According to the size of the data, this should be the key, so for 1 TB should a 1 TB key be used? (Slide 12)

A. No, encrypting 1 TB of data doesn’t mean that the key has to be that long. Most data encryption (at rest and in flight) uses symmetric encryption like AES, which is a block cipher. In block ciphers, the data being encrypted is broken up into blocks of a specific size in order to be processed by the algorithm. For a good overview of block ciphers, see the Encryption 101 webcast.
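To make the size mismatch concrete: AES always processes data in 16-byte blocks, so a single 16-byte AES-128 key drives every block of a 1 TiB file (using TiB, 2**40 bytes, for round numbers). A quick back-of-the-envelope check:

```python
KEY_BITS = 128        # one AES-128 key is just 16 bytes
BLOCK_BYTES = 16      # AES operates on fixed 128-bit blocks
DATA_BYTES = 2**40    # 1 TiB of data to encrypt

blocks = DATA_BYTES // BLOCK_BYTES
print(f"{blocks} blocks, all under one {KEY_BITS // 8}-byte key")
```

That is 2**36 (about 68.7 billion) blocks under one short key; the cipher's mode of operation (counter, tweak, IV, etc.) is what keeps identical plaintext blocks from producing identical ciphertext.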

Q. What is the maximum lifetime of a certificate?

A. Maximum certificate validity (i.e., certificate lifetime) varies based on regulations/guidance, organizational policies, the application or purpose for which the certificate is used, etc. Certificates issued to humans for authentication or digital signature, or to common applications like web browsers, web services, S/MIME email clients, etc., tend to have validities of 1-2 years. CA certificates have slightly longer validities, in the 3-5 year range.

Q. In data center applications, why not just use the AEK as the DEK for an SED?

A. Assuming that AEK means Authentication Encryption Key: a defense-in-depth strategy is taken in the design of SEDs, where the DEK (or MEK) is a key that is generated on the drive and never leaves the drive. The MEK is protected by an AEK. This AEK is externalized from the drive and needs to be provided by the application/product that is accessing the SED in order to unlock the SED and take advantage of its capabilities.

Using separate keys follows the principle of only using a key for one purpose (e.g., encryption vs. authentication). It also reduces the attack surface for each key. If an attacker obtains an AEK, they also need access to the SED it belongs to, as well as the application used to access that SED.

Q. Does NIST require a “timeframe” for rotating keys?

A. NIST recommendations for the cryptoperiods of keys used for a range of purposes may be found in Section 5.3.6 of NIST SP 800-57 Part 1 Rev. 5.

Q. Does D@RE use symmetric or asymmetric encryption?

A. There are many Data at Rest (D@RE) implementations, but in the majority of D@RE implementations within the storage industry (e.g., controller-based encryption, Self-Encrypting Drives (SEDs)), symmetric encryption is used. For more information about D@RE implementations, check out the Storage Security Series: Data-at-Rest webcast.

Q. In the TLS example shown, where does the “key management” take place?

A. There are multiple places in the TLS handshake example where different key management concepts discussed in the webinar are leveraged:

  • In steps 3 and 5 the client and server exchange their public key certificates (an example of asymmetric cryptography/certificate management)
  • In steps 4 and 6 the client and server validate each other’s certificates (an example of certificate path validation, which is part of key management)
  • In step 5 the client creates and sends the pre-master secret (an example of key agreement)
  • In step 7 the client and server use this pre-master secret and other information to calculate the same symmetric key that will be used to encrypt the communication channel (an example of key derivation)
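The key-derivation step above works because both ends feed identical inputs into the same deterministic function. TLS defines its own key schedule; the sketch below instead uses an HKDF-style HMAC construction (per RFC 5869) with made-up secrets, just to show that two parties holding the same pre-master secret derive the same session key:

```python
import hmac
import hashlib

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF with SHA-256 (RFC 5869): extract a PRK, then expand it."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Both sides hold the same pre-master secret and handshake randoms.
pre_master = b"example pre-master secret"
randoms = b"client_random|server_random"

client_key = hkdf(pre_master, salt=randoms, info=b"session key")
server_key = hkdf(pre_master, salt=randoms, info=b"session key")
assert client_key == server_key  # identical inputs -> identical session key
```

No key ever crosses the wire in this step; only the shared inputs do, which is the essence of key derivation.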

Remember I said this was part of the Storage Networking Security Webcast Series? Check out the other webcasts we’ve done to date, as well as what’s coming up.


J Metz

May 12, 2020


There's a lot that goes into effective key management. In order to properly use cryptography to protect information, one has to ensure that the associated cryptographic keys themselves are also protected. Careful attention must be paid to how cryptographic keys are generated, distributed, used, stored, replaced and destroyed in order to ensure that the security of cryptographic implementations is not compromised.

It's the next topic the SNIA Networking Storage Forum is going to cover in our Storage Networking Security Webcast Series. Join us on June 10, 2020 for Key Management 101 where security expert and Dell Technologies distinguished engineer, Judith Furlong, will introduce the fundamentals of cryptographic key management.

Key (see what I did there?) topics will include:

  • Key lifecycles
  • Key generation
  • Key distribution
  • Symmetric vs. asymmetric key management, and
  • Integrated vs. centralized key management models

In addition, Judith will also dive into relevant standards, protocols and industry best practices. Register today to save your spot for June 10th; we hope to see you there.



