Author: J Metz

Company: Rockport Networks
Would You Like Some Rosé with Your iSCSI?

J Metz

Feb 3, 2017

Would you like some rosé with your iSCSI? I'm guessing that no one has ever asked you that before. But we at the SNIA Ethernet Storage Forum like to get pretty colorful in our "Everything You Wanted To Know about Storage But Were Too Proud To Ask" webcast series, as we group common storage terms together by color rather than by number.

In our next live webcast, Part Rosé – The iSCSI Pod, we will focus entirely on iSCSI, one of the most widely used technologies in data centers today. With the increasing speeds of Ethernet, the technology is more and more appealing because of its relatively low cost to implement. However, like any other storage technology, there is more here than meets the eye.

We've convened a great group of experts from Cisco, Mellanox and NetApp who will start by covering the basic elements to make your life easier if you are considering using iSCSI in your architecture, diving into:

  • iSCSI definition
  • iSCSI offload
  • Host-based iSCSI
  • TCP offload

Like nearly everything else in storage, there is more here than just a protocol. I hope you'll register today to join us on March 2nd and learn how to make the most of your iSCSI solution. And while we won't be able to provide the rosé wine, our panel of experts will be on hand to answer your questions.

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.

Buffers, Queues, and Caches, Oh My!

J Metz

Jan 18, 2017

Buffers and queues are part of every data center architecture, and a critical part of performance – both in improving it and in hindering it. A well-implemented buffer can mean the difference between a finely run system and a confusing nightmare of troubleshooting. Knowing how buffers and queues work in storage can help make your storage system shine.

However, there is something of a mystique surrounding these different data center components, as many people don't realize just how they're used and why. Join our team of carefully selected experts on February 14th for the next live webcast in our "Too Proud to Ask" series, "Everything You Wanted to Know About Storage But Were Too Proud To Ask – Part Teal: The Buffering Pod," where we'll demystify this very important aspect of data center storage. You'll learn:

  • What are buffers, caches, and queues, and why you should care about the differences?
  • What's the difference between a read cache and a write cache?
  • What does "queue depth" mean?
  • What's a buffer, a ring buffer, and a host memory buffer, and why does it matter?
  • What happens when things go wrong?

These are just some of the topics we'll be covering, and while it won't be an exhaustive look at buffers, caches and queues, you can be sure that you'll get insight into this very important, and yet often overlooked, part of storage design.

Register today and spend Valentine's Day with our experts, who will be on hand to answer your questions on the spot!

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.

Clearing Up Confusion on Common Storage Networking Terms

J Metz

Jan 12, 2017

Do you ever feel a bit confused about common storage networking terms? You're not alone. At our recent SNIA Ethernet Storage Forum webcast "Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve," we had experts from Cisco, Mellanox and NetApp explain the differences between:

  • Channel vs. Busses
  • Control Plane vs. Data Plane
  • Fabric vs. Network

If you missed the live webcast, you can watch it on-demand. As promised, we're also providing answers to the questions we got during the webcast. Between these questions and the presentation itself, we hope to help you decode these common, but sometimes confusing, terms.

And remember, "Everything You Wanted To Know About Storage But Were Too Proud To Ask" is a webcast series with a "colorfully-named pod" for each topic we tackle. You can register now for our next webcast: Part Teal, The Buffering Pod, on Feb. 14th.

Q. Why do we have "Fibre" and "Fiber"?

A. Fiber optics is the term used for the optical technology used by Fibre Channel fabrics. While a common story is that the "Fibre" spelling came about to accommodate the French (FC is, after all, an international standard), in actuality it was a marketing idea to create a more unique name, and it was decided to use the British spelling – "Fibre".

Q. Will OpenStack change all the rules of the game?

A. Yes. OpenStack is all about centralizing the control plane of many different aspects of infrastructure.

Q. The difference between control and data plane matters only when we discuss software-defined storage and software-defined networking, not in traditional switching and storage.

A. It matters regardless. You need to understand how much each individual control plane can handle and how many control planes you have from an overall management perspective. In the case where you have too many control planes, SDN and SDS can be a benefit to you.

Q. As I've heard that networks use stateless protocols, would FC do the same?

A. Fibre Channel has several different Classes, which can be either stateful or stateless. Most applications of Fibre Channel are Class 3, as it is the preferred class for SCSI traffic. A connection between Fibre Channel endpoints is always stateful (as it involves a login process to the Fibre Channel fabric). The transport protocol is augmented by Fibre Channel exchanges, which are managed on a per-hop basis. Retransmissions are handled by devices when exchanges are incomplete or lost, meaning that each exchange is a stateful transmission, but the protocol itself is considered stateless in modern SCSI-transport Fibre Channel.

iSCSI, as a connection-oriented protocol, creates a nexus between an initiator and a target, and is considered stateful. In addition, SMB, NFSv4, FTP, and TCP are stateful protocols, while NFSv2, NFSv3, HTTP, and IP are stateless protocols.

Q. Where do CIFS/SMB come into the picture?

A. CIFS/SMB is part of a network stack. We need a separate talk about network stacks and their layers; in this presentation, we were talking primarily about the physical layer of the networks and fabrics. To oversimplify network stacks, there are multiple layers of protocols that run on top of the physical layer. In the case of FC, those protocols include the control plane protocols (such as FC-SW) and the data plane protocols. In FC, the most common data plane protocol is FCP (used by SCSI, FICON, and FC-NVMe). In the case of Ethernet, those protocols also include the control plane (such as TCP/IP) and data plane protocols. In Ethernet, there are many commonly used data plane protocols for storage (such as iSCSI, NFS, and CIFS/SMB).

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.

Storage Basics Q&A and No One’s Pride was Hurt

J Metz

Nov 7, 2016

In the first of our “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Chartreuse,” we covered the storage basics to break down the entire storage picture and identify the places where most of the confusion falls. It was a very well attended event and I’m happy to report, everyone’s pride stayed intact! We got some great questions from the audience, so as promised, here are our answers to all of them:

Q. What is parity? What is XOR?

A. In RAID, there are generally two kinds of data that are stored: the actual data and the parity data. The actual data is obvious; parity data is information about the actual data that you can use to reconstruct it if something goes wrong.

It’s important to note that this is not simply a copy of A and B, but rather a logical operation that is applied to the data. Commonly for RAID (other than simple mirroring) the method used is called an exclusive or, or XOR for short. The XOR function outputs true only when inputs differ (one is true, the other is false).

There’s a neat feature about XOR, and the reason it’s used by RAID. Calculate the value A XOR B (let’s call it AxB). Here’s an example on a pair of bytes.

A                                  10011100

B                                  01101100

A XOR B is AxB              11110000

Store all three values on separate disks. Now, if we lose A or B, we can use the fact that AxB XOR B is equal to A, and AxB XOR A is equal to B. For example, to recover A:

B                                  01101100

AxB                              11110000

B XOR AxB is A              10011100

We've regenerated the A we lost. (If we lose the parity bits, they can just be reconstructed from A and B.)
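The XOR recovery above can be checked in a few lines of Python (a standalone sketch on the same example bytes, not part of any RAID implementation):

```python
# XOR parity on the example bytes from the text above.
a = 0b10011100            # data block A
b = 0b01101100            # data block B
parity = a ^ b            # AxB, stored on a third disk
assert parity == 0b11110000

# The disk holding A fails: XOR the survivors to rebuild it.
recovered_a = parity ^ b
assert recovered_a == a

# The same trick recovers B, or the parity block itself.
assert parity ^ a == b
assert a ^ b == parity

print(f"recovered A = {recovered_a:08b}")  # prints: recovered A = 10011100
```

The key property is that XOR is its own inverse: applying it twice with the same operand gets you back where you started, which is exactly what makes single-parity RAID recovery possible.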

Q. What is common notation for RAID? I have seen RAID 4+1, and RAID (4,1). In the past, I thought this meant a total of 5 disks, but in your explanation it is only 4 disks.

A. RAID is notated by levels, which is determined by the way in which data is laid out on disk drives (there are always at least two). When attempting to achieve fault tolerance, there is always a trade-off between performance and capacity. Such is life.

There are 4 common RAID levels in use today (there are others, but these are the most common): RAID 0, RAID 1, RAID 5, and RAID 6. As a quick reminder from the webinar (you can see pictures of these in action there):

  • RAID 0: Data is striped across the disks without any parity. Very fast, but very unsafe (if you lose one, you lose all)
  • RAID 1: Data is mirrored between disks without any parity. Slowest, but you have an exact copy of the data so there is no need to recalculate anything to reconstruct the data.
  • RAID 5: Data is striped across multiple disks, and the parity is striped across multiple disks. Often seen as the best compromise: Fast writes and good safety net. Can withstand one disk loss without losing data.
  • RAID 6: Data is striped across multiple disks, and two parity blocks per stripe are distributed across the disks. Same advantages as RAID 5, except now you can lose 2 drives before data loss.

Now, if you have enough disks, it is possible to combine RAID levels. You can, for instance, have four drives that combine mirroring and striping. In this case, you can have two sets of drives that are mirrored to each other, and the data is striped to each of those sets. That would be RAID 1+0, or often called RAID 10. Likewise, you can have two sets of RAID 5 drives, and you could stripe or mirror to each of those sets, and it would be RAID 50 or RAID 51, respectively.

Erasure Coding has a different notation, however. It does not use levels like RAID; instead, EC identifies the number of data bits and the number of parity bits.

So, with EC, you take a file or object and split it into ‘k’ blocks of equal size. Then, you take those k blocks and generate n blocks of the same size, such that any k out of n blocks suffice to reconstruct the original file. This results in a (n,k) notation for EC.

Since RAID is a subset of EC, RAID 6 is the equivalent of RAID(n,2): n data disks and 2 parity disks. RAID(4,1) is RAID 5 with 4 data disks and 1 parity disk, and so on.
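To make the (n,k) arithmetic concrete, here is a small hypothetical helper (the function name and output format are ours, not from any EC library) that summarizes the trade-offs the notation encodes:

```python
def ec_profile(n: int, k: int) -> dict:
    """Summarize an (n, k) erasure-coding scheme: k data blocks are
    expanded to n total blocks, and any k of the n suffice to
    reconstruct the original data."""
    return {
        "data_blocks": k,
        "parity_blocks": n - k,
        "failures_tolerated": n - k,   # any n-k blocks may be lost
        "storage_overhead": n / k,     # raw capacity per usable byte
    }

# RAID(4,1) -- RAID 5 with 4 data disks + 1 parity disk -- is EC(5, 4):
print(ec_profile(5, 4))    # tolerates 1 failure at 1.25x overhead
# A RAID 6-style layout with 8 data + 2 parity disks is EC(10, 8):
print(ec_profile(10, 8))   # tolerates 2 failures, also 1.25x overhead
```

Note how the two example layouts pay the same capacity overhead, but the (10, 8) scheme survives twice as many failures; this is why wider EC stripes are popular in object stores.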

Q. Which RAIDs are classified/referred to as EC? I have often heard people refer to RAID 5/6 as EC. Is this only limited to 5/6?

A. All RAID levels are types of EC. The math is slightly different; traditional RAID uses XOR, and EC uses Galois Fields or polynomial arithmetic.

Q. What’s the advantage of RAID5 over RAID1?

A. As noted above, there is a tradeoff between the amount of capacity that you need in order to stay fault tolerant, and the performance you wish to have in any system.

RAID 1 is a mirrored system, where you have a single block of data being written twice – one to each disk. This is done in parallel, so it doesn’t take any extra time to do the write, but there’s no speed-up either. One advantage, however, is that if a disk fails there is no need to perform any logical calculations to reconstruct data – you already have a copy of the intact data.

RAID 5 is more distributed. That is, blocks of data are written to multiple disks simultaneously, along with a parity block. That is, you are breaking up the writing obligations across multiple disks, as well as sending parity data across multiple disks. This significantly speeds up the write process, but more importantly it also distributes the recovery capabilities as well so that any disk can fail without losing data.

Q. So RAID improves WRITES? I guess because it breaks the data into smaller pieces that can be written in parallel. If this is true, then why doesn't READ benefit from RAID? Wouldn't it be faster to read those pieces from parallel sources and re-combine them into a larger piece?

A. RAID and the “striping” of IO can improve writes by reducing serialization by allowing us to write anywhere. But a specific block can only be read from the disk it was written to, and if we’re already reading or writing to that disk and it’s busy – we must wait.

Q. Why is EC better for object stores than RAID?

A. Because there’s more redundancy, EC can be made to operate across unreliable and less responsive links, and at potentially geographic scales.

Q. Can you explain the "RAID penalty"? I've heard it called the "write penalty" or the "read before write penalty."

A. When updating data that’s already been written to disk, there’s a requirement to recalculate the parity data used by RAID. For example, if we update a single byte in a block, we need to read all the blocks, recalculate the parity, and write back the updated data block and the parity block (twice in the case of dual parity RAID6).

There are some techniques that can be used to improve the performance impact. For example, some systems don’t update blocks in place, but use pointer-based systems and only write new blocks. This technique is used by flash-based SSDs as the write size is often 256KB or larger. This can be done in the drive itself, or by the RAID or storage system software. It is very important to avoid when using Erasure Coding as there are so many data blocks and parity blocks to recalculate and rewrite that it would become prohibitive to do an update.
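With single parity, the update doesn't actually need the whole stripe: the new parity can be computed from just the old data, the new data, and the old parity. A minimal sketch of this "small write" path (helper names are ours, for illustration only):

```python
def xor_bytes(x: bytes, y: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(p ^ q for p, q in zip(x, y))

def update_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """RAID-5 small-write parity update:
    new_parity = old_parity XOR old_data XOR new_data.
    Costs 2 reads (old data, old parity) and 2 writes (new data,
    new parity), regardless of how many disks are in the stripe."""
    return xor_bytes(xor_bytes(old_parity, old_data), new_data)

# A stripe of two data blocks plus parity (same bytes as the XOR example).
d0, d1 = bytes([0x9C]), bytes([0x6C])
parity = xor_bytes(d0, d1)                    # 0xF0

new_d0 = bytes([0xFF])
new_parity = update_parity(d0, new_d0, parity)
# Same result as recomputing parity from the full stripe:
assert new_parity == xor_bytes(new_d0, d1)
```

This is why the penalty is often quoted as four I/Os per small write for RAID 5 (and six for dual-parity RAID 6): the reads and writes above, doubled for the second parity block.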

Q. What is the significance of RAIN? We have not heard much about it.

A. A Redundant Array of Independent Nodes works under the same principles as RAID – that is, each node is treated as a failure domain that must be avoided as a Single Point of Failure (SPOF). Whereas RAID maintains an understanding of data placement on individual drives within a node, RAIN maintains an understanding of data placement on nodes (each containing drives) within a storage environment.

Q. Is host same as node?

A. At its core, a “node” is an endpoint. So, a host can be a node, but so can a storage device at the other end of the wire.

Q. Does it really matter what Erasure Coding (EC) technologies are named or is EC just EC?

A. Erasure Coding notation refers to the level of resilience involved. This notation underscores not only the write patterns for storage of data, but also the mechanisms necessary for recovery. What 'matters' really depends upon the level of involvement for those particular tasks.

Q. Is the Volume Manager concept related to Logical Unit Numbering (LUNs)?

A. It can be. A volume manager is an abstraction layer that allows a host operating system to create a Volume out of one or more media locations. These locations can be either logical or physical. A LUN is an aggregation of media on the target/storage side. You can use a Volume Manager to create a single, logical volume out of multiple LUNs, for instance.

For additional information on this, you may want to watch our SNIA-ESF webcast, "Life of a Storage Packet (Walk)."

Q. What’s the relationship between disk controller and volume manager?

A. Following on the last question, a disk controller does exactly what it sounds like – it controls disks. A RAID controller, likewise, controls disks and the read/write mechanisms. Some RAID controllers have additional software abstraction capabilities that can act as a volume manager as well.

We hope these answers clear things up a bit more. As you know, "Everything You Wanted To Know About Storage, But Were Too Proud To Ask" is a series. Since this Chartreuse event, we've done "Part Mauve – The Architecture Pod," where we explained channel vs. bus, control plane vs. data plane, and fabric vs. network. Check it out on-demand and follow us on Twitter @SNIAESF for announcements on upcoming webcasts.

The Everything You Want To Know About Storage Is On Again With Part Mauve – The Architecture Pod

J Metz

Oct 11, 2016

The first installment of our "colorful" Webcast series, "Everything You Wanted To Know about Storage But Were Too Proud To Ask – Part Chartreuse," covered the fundamental elements of storage systems. If you missed it, you can check it out on-demand. On November 1st, we'll be back at it, focusing on the network aspect of storage systems with "Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve."

As with any technical field, it's too easy to dive into the jargon of the pieces and expect people to know exactly what you mean. Unfortunately, some of the terms may have alternative meanings in other areas of technology. In this Webcast, we look at some of those terms specifically and discuss them as they relate to storage networking systems.

In particular, you'll find out what we mean when we talk about:

  • Channel versus Busses
  • Control Plane versus Data Plane
  • Fabric versus Network

Register now for Part Mauve of "Everything You Wanted To Know About Storage But Were Too Proud to Ask."

For people who are familiar with data center technology, whether it be compute, programming, or even storage itself, some of these concepts may seem intuitive and obvious... until you start talking to people who are really into this stuff. This series of Webcasts will be your Secret Decoder Ring to unlock the mysteries of what is going on when you hear these conversations. We hope to see you there!

Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.

Everything You Wanted to Know about Storage, but were too Proud to Ask

J Metz

Jul 18, 2016

Many times we know things without even realizing it, or remembering how we came to know them. In technology, this often comes from direct, personal experience rather than some systematic process. In turn, this leads to "best practices" that come from tribal knowledge, rather than any inherent codified set of rules to follow.

In the world of storage, for example, it's very tempting to simply think of component parts that can be swapped out interchangeably. Change out your spinning hard drives for solid state, for example, and you can generally expect better performance. Change the way you connect to the storage device, get better performance... or do you? Storage is more holistic than many people realize, and as a result there are often unintended consequences for even the simplest of modifications.

With the 'hockey stick-like' growth in innovation over the past couple of years, many people have found themselves facing terms and concepts in storage that they feel they should have understood, but don't. This series of webcasts is designed to help you with those troublesome spots: everything you thought you should know about storage but were afraid to ask.

Here, we're going to go all the way back to basics and define the terms so that people can understand what others are talking about in those discussions. Not only are we going to define the terms, but we're going to talk about terms that are impacted by those concepts once you start mixing and matching. For example, when we say that we have a "memory mapped" storage architecture, what does that mean? Can we have a memory mapped storage system at the other end of a network? If so, what protocol should we use – iSCSI? POSIX? NVMe over Fabrics? Would this be an idempotent system or an object-based storage system?

Now, if that above paragraph doesn't send you into fits of laughter, then this series of webcasts is for you (hint: most of it was complete nonsense... but which part? Come watch to find out!).

On September 7th, we will start with the very basics – The Naming of the Parts. We'll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we'll learn:

  • What an initiator is
  • What a target is
  • What a storage controller is
  • What a RAID is, and what a RAID controller is
  • What a Volume Manager is
  • What a Storage Stack is

With these fundamental parts, we'll be able to place them into a context so that you can understand how all these pieces fit together to form a data center storage environment. Future webcasts will discuss:

Part Mauve – Architecture Pod:
  • Channel v. bus
  • Control plane v. data plane
  • Fabric v. network

Part Teal – Buffering Pod:
  • Buffering v. Queueing (with Queue Depth)
  • Flow Control
  • Ring Buffers

Part Rosé – iSCSI Pod:
  • iSCSI offload
  • TCP offload
  • Host-based iSCSI

Part Sepia – Getting-From-Here-To-There Pod:
  • Encapsulation v. Tunneling
  • IOPS v. Latency v. Jitter

Part Vermillion – The What-if-Programming-and-Networking-Had-A-Baby Pod:
  • Storage APIs v. POSIX
  • Block v. File v. Object
  • Idempotent
  • Coherence v. Cache Coherence
  • Byte Addressable v. Logical Block Addressing

Part Turquoise – Where-Does-My-Data-Go Pod:
  • Volatile v. Non-Volatile v. Persistent Memory
  • NVDIMM v. RAM v. DRAM v. SLC v. MLC v. TLC v. NAND v. 3D NAND v. Flash v. SSDs v. NVMe
  • NVMe (the protocol)

Part Cyan – Storage Management:
  • Management Processes: Discovery, Provisioning and Configuration
  • Software-Defined Storage
  • Storage Management Standards: SMI-S, SNIA Swordfish

Part Aqua – Storage Controllers:
  • Storage Controllers 101
  • SCSI Controllers
  • Fibre Channel Controllers
  • NVMe Controllers
  • Networking SDN and Storage SDN Controllers

Part Taupe – Memory Pod:
  • Memory Mapping
  • Physical Region Page (PRP)
  • Scatter Gather Lists
  • Offset

Part Burgundy – Orphans Pod:
  • Doorbells
  • Controller Memory Buffers

Of course, you may already be familiar with some, or all, of these concepts. If you are, then these webcasts aren't for you. However, if you're a seasoned professional in another area of technology (compute, networking, programming, etc.) and you want to brush up on some of the basics without judgment or expectations, this is the place for you.

Oh, and why are the parts named after colors instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – please start with Part Chartreuse, "The Naming of the Parts."

Update: This series is now available on-demand. You can also download the webcast slides. Happy viewing!

  • Watch: Chartreuse – The Basics | Download: Webcast slides
  • Watch: Mauve – The Architecture Pod | Download: Webcast slides
  • Watch: Teal – The Buffering Pod | Download: Webcast slides
  • Watch: Rosé – The iSCSI Pod | Download: Webcast slides
  • Watch: Sepia – Getting from Here to There | Download: Webcast slides
  • Watch: Vermillion – What if Programming and Networking Had a Storage Baby | Download: Webcast slides
  • Watch: Turquoise – Where Does My Data Go? | Download: Webcast slides
  • Watch: Cyan – Storage Management | Download: Webcast slides
  • Watch: Aqua – Storage Controllers | Download: Webcast slides
  • Watch: Taupe – Memory | Download: Webcast slides
