Keynote: Realizing the Next Generation of Exabyte-scale Persistent Memory Centric Architectures and Memory Fabrics

In the last five years, the increasing volume, velocity, and variety of data generated and consumed by Big Data and Fast Data applications have driven an aggressive pursuit of the next generation of emerging non-volatile memories, particularly in the area of persistent memory. At the component level, this memory must be byte-addressable and non-volatile, deliver latency comparable to DRAM, and offer density and cost that fall somewhere between DRAM and NAND flash.
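As a rough illustration of what "byte-addressable and non-volatile" means in practice (my sketch, not material from the keynote), the example below maps a file into the address space, updates it with ordinary stores, and flushes the change explicitly. The file path is hypothetical, and msync() stands in for the cache-line flush instructions a real persistent-memory library would use.

```c
/* Minimal sketch: byte-addressable access to a memory-mapped file.
 * Persistent memory is typically exposed in a similar way (e.g. via a
 * DAX-mounted filesystem), but with cache-line flushes instead of msync().
 * The path below is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;
    int fd = open("/mnt/pmem0/example.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0 || ftruncate(fd, len) != 0) {
        perror("open/ftruncate");
        return 1;
    }

    /* Map the file so it can be updated with ordinary loads and stores. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(p, "persistent greeting");   /* byte-addressable store */

    /* Make the update durable; on real pmem this would be a cache flush. */
    msync(p, len, MS_SYNC);

    munmap(p, len);
    close(fd);
    return 0;
}
```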

Key Value Storage Standardization Progress

NVMe KV is a proposal, under development within the NVMe technical working group, for a new command structure to access data on an NVMe controller. The proposed command set stores data on the non-volatile media by providing a key and a value, and retrieves stored data by providing the key. In addition to the work on the NVMe specification, SNIA is also working on a Key Value API. This presentation will describe the standardization efforts going on in both the NVMe working group and SNIA. Learning Objectives: 1. What is Key Value Storage 2.
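To make the key/value command semantics concrete, here is a minimal sketch of a store/retrieve interface keyed by an application-chosen key. This is a toy in-memory table, not the NVMe KV command set or the SNIA API; all names and sizes are illustrative assumptions.

```c
/* Conceptual sketch of key/value Store and Retrieve semantics.
 * This is a toy in-memory table, not the NVMe KV command set;
 * all names here are hypothetical. */
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 16
#define MAX_KEY     16   /* KV keys are small, fixed-size byte strings */
#define MAX_VALUE   64

struct kv_entry {
    char   key[MAX_KEY];
    char   value[MAX_VALUE];
    size_t value_len;
    int    used;
};

static struct kv_entry table[MAX_ENTRIES];

/* Store: the caller supplies a key and a value. */
static int kv_store(const char *key, const void *value, size_t len)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (!table[i].used || strncmp(table[i].key, key, MAX_KEY) == 0) {
            strncpy(table[i].key, key, MAX_KEY - 1);
            memcpy(table[i].value, value, len);
            table[i].value_len = len;
            table[i].used = 1;
            return 0;
        }
    }
    return -1;  /* no space */
}

/* Retrieve: the caller supplies only the key. */
static int kv_retrieve(const char *key, void *out, size_t *len)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (table[i].used && strncmp(table[i].key, key, MAX_KEY) == 0) {
            memcpy(out, table[i].value, table[i].value_len);
            *len = table[i].value_len;
            return 0;
        }
    }
    return -1;  /* key not found */
}

int main(void)
{
    char buf[MAX_VALUE];
    size_t len;

    kv_store("object-42", "hello key value storage", 24);
    if (kv_retrieve("object-42", buf, &len) == 0)
        printf("retrieved %zu bytes: %s\n", len, buf);
    return 0;
}
```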

iSER as Accelerator for Software Defined Storage

Storage area networks (SANs) are currently dominated by Fibre Channel (FC) technologies due to their effectiveness in providing high bandwidth, low latency and high throughput. However, with the advent and popularity of low-latency all-flash arrays and the needs of cloud-centric data centers that are standardizing on Ethernet, iSER is gaining visibility as a next-generation high-speed interconnect. Since iSER uses a standard Ethernet interface, it is also well suited to new data center paradigms that use Hyper-Converged Infrastructure.

Introducing Fibre Channel NVMe

NVMe is one of the most interesting new developments to happen to storage in the past several years, and NVMe over Fabrics extends these capabilities over a Storage Area Network. Given that 80% of all existing Flash storage solutions deployed are interconnected with Fibre Channel (FC), many questions have arisen about what it is, how it works, and why someone might want to consider using Fibre Channel for NVMe-based solutions. In this tutorial, the speaker will address some of these fundamental questions: 1. How do Fibre Channel and NVMe work together? 2.

Integrity of In-memory Data Mirroring in Distributed Systems

Data in memory may be in a modified state relative to its on-disk copy. Also, unlike the on-disk copy, the in-memory data might not be checksummed, replicated or backed up every time it is modified. So the data must be checksummed before mirroring to guard against corruption on the network. But checksumming the data in the application has other overheads: it must handle networking functionality such as retransmission and congestion. Secondly, if validation of the mirrored data is delayed, it might be difficult to recover the correct state of the system.
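As a simple illustration of checksumming before mirroring (my sketch, not the system described in the talk), the sender computes a checksum over the in-memory buffer just before it goes on the wire and the receiver recomputes it before applying the update. The Adler-32-style checksum and the frame layout are assumptions made for the example.

```c
/* Sketch: checksum in-memory data before mirroring so the receiver can
 * detect corruption introduced in transit. The checksum (an Adler-32-style
 * sum) and the frame layout are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simple two-sum checksum over a byte buffer. */
static uint32_t checksum32(const uint8_t *buf, size_t len)
{
    uint32_t a = 1, b = 0;
    for (size_t i = 0; i < len; i++) {
        a = (a + buf[i]) % 65521;
        b = (b + a) % 65521;
    }
    return (b << 16) | a;
}

struct mirror_frame {
    uint32_t checksum;
    uint32_t length;
    uint8_t  payload[256];
};

/* Sender side: compute the checksum just before the buffer is mirrored. */
static void prepare_frame(struct mirror_frame *f, const void *data, uint32_t len)
{
    memcpy(f->payload, data, len);
    f->length = len;
    f->checksum = checksum32(f->payload, len);
}

/* Receiver side: validate before applying the update to the mirror. */
static int validate_frame(const struct mirror_frame *f)
{
    return checksum32(f->payload, f->length) == f->checksum;
}

int main(void)
{
    struct mirror_frame f;
    const char *update = "in-memory page contents";

    prepare_frame(&f, update, (uint32_t)strlen(update));
    printf("frame valid: %s\n", validate_frame(&f) ? "yes" : "no");

    f.payload[3] ^= 0x01;  /* simulate corruption on the network */
    printf("after bit flip: %s\n", validate_frame(&f) ? "yes" : "no");
    return 0;
}
```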

Integrating storage systems into Active Directory with winbind

Most environments use Active Directory as their primary authentication and authorization source; users and groups are stored there. Any storage system must authenticate and authorize users in some way. Samba's winbind provides a solution to seamlessly integrate with Active Directory using the same mechanisms a native Windows client uses. It provides an API to authenticate users and to retrieve authorization information such as the group memberships of authenticated users.
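As a hedged sketch of what such integration can look like in code (not the talk's material), the example below authenticates a user through Samba's libwbclient and then reads group memberships through the standard NSS interfaces that nss_winbind plugs into. The account name, password, and build details (linking against -lwbclient) are assumptions, and real deployments would use PAM or richer libwbclient calls.

```c
/* Sketch: authenticate against AD via Samba's winbind (libwbclient) and
 * look up group memberships via NSS, which nss_winbind serves.
 * The account and password below are placeholders; build with -lwbclient. */
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <wbclient.h>

int main(void)
{
    const char *user = "EXAMPLE\\alice";   /* hypothetical AD account */

    /* 1. Authenticate the user through winbind. */
    wbcErr err = wbcAuthenticateUser(user, "secret-password");
    if (err != WBC_ERR_SUCCESS) {
        fprintf(stderr, "auth failed: %s\n", wbcErrorString(err));
        return 1;
    }

    /* 2. Retrieve authorization data (group memberships) through NSS. */
    struct passwd *pw = getpwnam(user);
    if (pw == NULL) {
        fprintf(stderr, "unknown user\n");
        return 1;
    }

    gid_t groups[64];
    int ngroups = 64;
    getgrouplist(user, pw->pw_gid, groups, &ngroups);
    for (int i = 0; i < ngroups; i++) {
        struct group *gr = getgrgid(groups[i]);
        printf("member of: %s\n", gr ? gr->gr_name : "(unknown)");
    }
    return 0;
}
```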

Improving Azure File Service: Adding New Wings to a Plane in Mid-flight

For the past three years, Microsoft Azure has provided a completely managed SMB3 file server in the cloud. By leveraging the Continuous Availability features of SMB3, it gives customers an always-available, reliable file share. As we push to add the most-demanded new features, the complexity and caution required to do so transparently and safely present fundamentally new kinds of challenges due to the scale of Azure's public cloud.

Implementing SMB Direct for the Linux SMB Client

Remote file systems involve a large number of client-server round trips for I/O requests, so network latency and the efficiency of the transport layer are important to performance. RDMA can provide lower latency while allowing the file system to control how send/receive buffers are allocated, reducing the overhead of I/O buffer copies. When implemented in the protocol, it also lets the file system perform more precise flow control to manage data congestion.
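To illustrate the kind of flow control the protocol enables, here is a conceptual sketch of the credit mechanism SMB Direct uses: the sender may only transmit while it holds credits, and the peer grants new credits as it posts fresh receive buffers. This is an illustration under those assumptions, not the Linux client implementation, and all names are hypothetical.

```c
/* Conceptual sketch of credit-based flow control as used by SMB Direct:
 * the sender may only transmit while it holds credits, and the peer grants
 * new credits as it posts fresh receive buffers. Not the Linux client code;
 * all names are hypothetical. */
#include <stdio.h>

struct smbd_connection {
    int send_credits;      /* sends we are currently allowed to issue      */
    int receive_credits;   /* receive buffers we have posted for the peer  */
};

/* Post receive buffers and grant the peer that many credits. */
static int post_receives(struct smbd_connection *c, int count)
{
    c->receive_credits += count;   /* in real code: one ibv_post_recv() per buffer */
    return count;                  /* advertised to the peer in the next packet    */
}

/* Send one packet only if a credit is available; otherwise back off. */
static int try_send(struct smbd_connection *c, const char *what)
{
    if (c->send_credits == 0) {
        printf("defer %s: no send credits (congestion)\n", what);
        return -1;
    }
    c->send_credits--;
    printf("send %s (credits left: %d)\n", what, c->send_credits);
    return 0;
}

int main(void)
{
    struct smbd_connection conn = { .send_credits = 2, .receive_credits = 0 };

    post_receives(&conn, 16);          /* credits we will grant the peer */
    try_send(&conn, "SMB2 WRITE");
    try_send(&conn, "SMB2 READ");
    try_send(&conn, "SMB2 CLOSE");     /* deferred until the peer grants credits */

    conn.send_credits += 8;            /* peer granted credits in its response */
    try_send(&conn, "SMB2 CLOSE");
    return 0;
}
```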

Implementing Persistent Handles in Samba

This talk will present a new internal Samba database abstraction backend that combines the performance and durability properties of the existing volatile and persistent database models, together with an API that allows the database model to be chosen on a per-record basis.
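To illustrate the idea of choosing the database model per record, here is a hypothetical sketch in which a store call takes a flag selecting the volatile or persistent backend. This is illustrative only and does not reflect Samba's internal dbwrap API or the design presented in the talk.

```c
/* Hypothetical sketch of a per-record choice between a volatile and a
 * persistent database model; illustrative only, not Samba's dbwrap API. */
#include <stdio.h>

enum record_model {
    RECORD_VOLATILE,     /* fast, lost on restart (e.g. ordinary opens)    */
    RECORD_PERSISTENT,   /* durable, survives restart (persistent handles) */
};

struct record {
    char              key[32];
    char              value[64];
    enum record_model model;
};

/* Store a record, letting the caller pick the durability model. */
static void db_store(struct record *r, const char *key, const char *value,
                     enum record_model model)
{
    snprintf(r->key, sizeof(r->key), "%s", key);
    snprintf(r->value, sizeof(r->value), "%s", value);
    r->model = model;
    printf("stored %s in %s backend\n", key,
           model == RECORD_PERSISTENT ? "persistent" : "volatile");
}

int main(void)
{
    struct record open_rec, handle_rec;

    /* An ordinary open only needs the fast, volatile model. */
    db_store(&open_rec, "open:1234", "normal open state", RECORD_VOLATILE);

    /* A persistent handle must survive a server restart. */
    db_store(&handle_rec, "handle:5678", "persistent handle state",
             RECORD_PERSISTENT);
    return 0;
}
```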

The talk will give an overview of the current implementation status and a demo of Persistent Handles in action in Samba.

Learning Objectives:
1. The challenges implementing Persistent Handles in Samba
2. What the proposed design looks like
3. The current implementation status

Implementing NVMe Over Fabrics

NVMe is gaining momentum as the standard high-performance disk interface that eliminates various bottlenecks in accessing PCIe SSD devices. NVMe over Fabrics extends NVMe beyond the confines of a PCIe fabric by utilizing a low-latency network interconnect such as iWARP RDMA over Ethernet to attach NVMe devices. iWARP is unique in its scalability and reach, practically eliminating constraints on the architecture, size and distance of a storage network.
