The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, the familiar SAS interface comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency…

The Long Term Retention Technical Working Group and the Data Protection Committee will review the results of the 2017 100-year archive survey. In addition to the survey results, the presentation will cover the following topics...

John Kim, Mellanox; Alex McDonald, NetApp; J Metz, Cisco
In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage...

Network-intensive applications, such as networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by moving data directly to or from application memory. The result is high-bandwidth, low-latency networking with little involvement from the CPU.
In the next webcast in the SNIA ESF “Great Storage Debates” series, we’ll examine two well-known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce CPU overhead when transferring data among servers and storage systems. The goal of this presentation is to provide a solid, vendor-neutral foundation on both RDMA technologies, covering the capabilities and use cases of each, so that attendees can make informed, educated decisions. Join our SNIA experts for this next Great Storage Debate.
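To make the zero-copy idea concrete, here is a minimal sketch in C against the rdma-core libibverbs API, showing how an application buffer is registered so the NIC can place data into it directly. The device choice, buffer size and access flags are illustrative assumptions; queue-pair creation, connection management and the actual RDMA READ/WRITE postings are omitted. The same verbs calls apply whether the transport underneath is RoCE or iWARP.

```c
/* Minimal sketch: registering application memory for zero-copy RDMA
 * using libibverbs (rdma-core). Queue pair setup, connection management
 * and the RDMA READ/WRITE operations themselves are omitted for brevity. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "could not open device / allocate protection domain\n");
        return 1;
    }

    /* The buffer the application will expose for direct data placement. */
    size_t len = 1 << 20;                 /* 1 MiB, illustrative size */
    void *buf = malloc(len);

    /* Registration pins the memory and returns keys the NIC uses to move
     * data to and from it without copying through kernel buffers. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

The rkey returned by registration is what a remote peer supplies in its RDMA operation to target this buffer, which is how data lands in application memory without the host CPU touching it in the data path.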

Alex McDonald, NetApp
The SNIA Swordfish™ specification provides a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification, using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting.
Join this session for an overview of Swordfish, including the new functionality added in version 1.0.6, released in March 2018.
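Because Swordfish is exercised through ordinary RESTful HTTP and JSON, a client can be very small. The sketch below uses libcurl to read a Swordfish collection; the host address, credentials, Basic-auth choice and the /redfish/v1/StorageServices path are illustrative assumptions based on common Redfish/Swordfish URI conventions, not details taken from the abstract above.

```c
/* Minimal sketch: reading a Swordfish/Redfish collection over its RESTful
 * JSON interface with libcurl. Host, credentials and path are illustrative. */
#include <curl/curl.h>
#include <stdio.h>

/* Print the JSON body as it arrives. */
static size_t on_body(char *data, size_t size, size_t nmemb, void *userdata)
{
    (void)userdata;
    fwrite(data, size, nmemb, stdout);
    return size * nmemb;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Service root follows the Redfish convention: https://<host>/redfish/v1/ */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://192.0.2.10/redfish/v1/StorageServices");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:password"); /* HTTP Basic auth */
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
    /* Lab gear often ships with self-signed certificates; disable verification
     * only for a sketch like this, never in production. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

In a Redfish/Swordfish service the response is a JSON collection whose Members carry @odata.id links, which the client follows to reach individual storage services, pools and volumes.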

What new security requirements apply to Persistent Memory (PM)? While many existing security practices such as access control, encryption, multi-tenancy and key management apply to persistent memory, new security threats may result from the differences between PM and storage technologies. The SNIA PM security threat model provides a starting point for exposing the system-behavior, protocol and implementation security gaps that are specific to PM. This in turn motivates industry groups such as TCG and JEDEC to standardize methods that complete the PM security solution space.

Eric Lakin, University of Michigan; Michelle Tidwell, IBM; Alex McDonald, NetApp
We’re increasingly in a multi-cloud environment, with potentially multiple private, public and hybrid cloud implementations in support of a single enterprise. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another. That means data movement and data liberation – the seamless transfer of data from one cloud to another – have become major requirements. In this webcast, we’re going to explore some of these data movement and mobility issues with real-world examples from the University of Michigan. Register now for discussions on:
- How do we secure data both at-rest and in-transit?
- Why is data so hard to move? What cloud processes and interfaces should we use to make data movement easier?
- How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
- Should the application of the data influence how (and even if) we move the data?
- How can data in the cloud be leveraged for multiple use cases?

The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. As with past “Great Storage Debates,” the goal of this presentation is not to declare a winner, but rather to provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.
One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost-effective.
Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet), which carries FC protocols over Ethernet, and Internet Small Computer System Interface (iSCSI), which transports SCSI commands over TCP/IP Ethernet networks. There are also newer Ethernet technologies that reduce CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which iSER (iSCSI Extensions for RDMA) leverages to avoid unnecessary data copying.
That leads to several questions about FCoE, iSCSI and iSER:
- If we can run various network storage protocols over Ethernet, what differentiates them?
- What are the advantages and disadvantages of FCoE, iSCSI and iSER?
- How are they structured?
- What software and hardware do they require?
- How are they implemented, configured and managed?
- Do they perform differently?
- What do you need to do to take advantage of them in the data center?
- What are the best use cases for each?
Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.

Are you a control freak? Have you ever wondered what the difference is between a storage controller, a RAID controller, a PCIe controller, and a metadata controller? What about an NVMe controller? Aren’t they all the same thing?
In part Aqua of the “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we’re taking the unusual step of focusing on a term that is used constantly but often has different meanings. A controller that manages hardware has very different requirements from one that manages an entire system-wide control plane. From the outside looking in, it’s easy to get confused. You can even have controllers managing other controllers!
Here we’ll be revisiting some of the pieces we talked about in Part Chartreuse [https://www.brighttalk.com/webcast/663/215131], but with a bit more focus on the variety we have to play with:
- What do we mean when we say “controller?”
- How are the systems being managed different?
- How are controllers used in various storage entities: drives, SSDs, storage networks, software-defined storage?
- How do controller systems work, and what are the trade-offs?
- How do storage controllers protect against Spectre and Meltdown?
Join us to learn more about the workhorse behind your favorite storage systems.

You won’t want to miss the opportunity to hear leading data storage experts provide their insights on the prominent technologies shaping the market. With the exponential rise in demand for high-capacity, secure storage systems, it’s critical to understand the key factors influencing adoption and where the highest growth is expected. From SSDs and HDDs to storage interfaces and NAND devices, get the latest information you need to shape key strategic directions and remain competitive.
