Ethernet Storage Fabrics: Using RDMA with Fast NVMe-oF Storage to Reduce Latency and Improve Efficiency

Submitted by Anonymous (not verified) on

NVMe over Fabrics (NVMe-oF) is a new standard that allows the latest storage-class memories to be extended and shared across a network. We review the architecture of PCI Express-based Flash and SCM storage and how the hardware/software interface is optimized to maximize parallelism and minimize context switches and interrupts. We then explain how this interface can be extended over an Ethernet fabric using RDMA, and discuss the technical and business advantages of this approach over legacy SCSI-based SAN technologies such as Fibre Channel.
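As a hypothetical illustration of the parallelism mentioned above: NVMe gives each CPU core its own submission/completion queue pair, each with its own doorbell register, so cores never contend on a shared lock. A minimal sketch of the doorbell layout (register offsets per the NVMe 1.2 base specification; the function name is ours):

```python
def doorbell_offset(qid: int, is_completion: bool, dstrd: int) -> int:
    """Byte offset of a queue's doorbell register from the controller's
    MMIO base. `dstrd` is the CAP.DSTRD field (doorbell stride exponent).
    Submission-tail and completion-head doorbells for queue `qid` are
    interleaved starting at offset 0x1000."""
    return 0x1000 + (2 * qid + (1 if is_completion else 0)) * (4 << dstrd)

# With DSTRD=0 (4-byte stride): admin SQ tail at 0x1000, admin CQ head
# at 0x1004, I/O queue 1 SQ tail at 0x1008, and so on. One queue pair
# per core means each core rings only its own doorbell.
assert doorbell_offset(1, False, 0) == 0x1008
```

NVMe-oF preserves this multi-queue model over the fabric by mapping each queue pair onto its own RDMA queue pair, which is what lets the protocol scale without the single-queue serialization of legacy SCSI transports.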

Enhancing NVMe-oF Capabilities Using Storage Abstraction

The NVMe protocol is optimized for NAND media, providing high performance compared with legacy protocols such as SAS and SATA. Furthermore, it replaces conventional volumes with namespaces and subsystems. The NVMe-oF protocol enables sharing NVMe resources within a rack and across racks in the datacenter, while maintaining the benefits of NVMe. In our talk, we will describe how NVMe software abstraction further increases the potential of NVMe-oF by enhancing the ability to share storage resources.
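One way such an abstraction layer can enhance sharing is by carving logical volumes out of a pool of NVMe-oF namespaces, so several tenants share one physical namespace. A minimal first-fit sketch (all class and field names are hypothetical, not from the talk):

```python
from dataclasses import dataclass

@dataclass
class Namespace:
    nqn: str          # NVMe Qualified Name of the owning subsystem
    nsid: int         # namespace ID within that subsystem
    capacity: int     # total bytes
    allocated: int = 0

@dataclass
class LogicalVolume:
    name: str
    backing: Namespace
    offset: int       # byte offset inside the backing namespace
    size: int

class VolumePool:
    """Carve logical volumes out of shared namespaces, first-fit."""
    def __init__(self, namespaces):
        self.namespaces = list(namespaces)

    def create_volume(self, name: str, size: int) -> LogicalVolume:
        for ns in self.namespaces:
            if ns.capacity - ns.allocated >= size:
                vol = LogicalVolume(name, ns, ns.allocated, size)
                ns.allocated += size
                return vol
        raise RuntimeError("no namespace has enough free capacity")
```

A client-facing target would then translate I/O on a logical volume into reads and writes at `backing.offset + lba` on the shared namespace, which is the kind of indirection that lets software abstraction multiply what raw NVMe-oF namespaces offer.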

Enabling Remote Persistent Memory

This session will provide an overview of what is required to extend persistent memory across a fabric. Learn how control plane and resource management software need to change to support persistent memory. Understand new fabric capabilities that support remote persistent memory, as well as the ongoing standardization efforts behind them. What are the tradeoffs between these technologies, and how can applications and operating systems better take advantage of persistence?

Enabling Remote Access to Persistent Memory on an IO Subsystem Using NVM Express and RDMA

NVM Express is predominantly a block-based protocol in which data is transferred to/from host memory by DMA engines inside the PCIe SSD. However, since NVMe 1.2 there has been a memory access method called the Controller Memory Buffer (CMB), which can be thought of as a PCIe BAR managed by the NVMe driver. In addition, the NVMe over Fabrics standard released this year extends NVMe over RDMA transports. In this paper we examine the performance of the CMB access method over RDMA networks.
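For context, the driver locates the CMB through two controller registers defined in NVMe 1.2: CMBLOC (offset 0x38) gives the BAR and offset, and CMBSZ (offset 0x3C) gives the size. A hedged sketch of the decoding (function names are ours; bit fields per the NVMe 1.2 specification):

```python
def cmb_size_bytes(cmbsz: int) -> int:
    """CMBSZ.SZU (bits 11:8) selects the size unit (4 KiB << 4*SZU);
    CMBSZ.SZ (bits 31:12) is the size in those units."""
    szu = (cmbsz >> 8) & 0xF
    sz = (cmbsz >> 12) & 0xFFFFF
    return sz * (4096 << (4 * szu))

def cmb_bar_and_offset(cmbloc: int):
    """CMBLOC.BIR (bits 2:0) is the PCIe BAR holding the CMB;
    CMBLOC.OFST (bits 31:12) is the offset into that BAR, expressed
    in multiples of the CMBSZ size unit."""
    return cmbloc & 0x7, (cmbloc >> 12) & 0xFFFFF

# Example: SZU=1 (64 KiB units), SZ=1 -> a 64 KiB CMB.
assert cmb_size_bytes((1 << 12) | (1 << 8)) == 64 * 1024
```

Because the CMB is exposed as ordinary PCIe memory, an RDMA NIC can target it directly, which is what makes the CMB-over-RDMA experiments described above possible.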

Early Developer Experiences Extending Windows Nano Server with Enterprise Fibre Channel Software Target

This presentation will cover early developer experiences using the new Windows 2016 Nano Server as an embedded platform OS for Enterprise Storage Appliances. We will briefly review available built-in storage technologies of this OS configuration. Experiences with the different development and test methodologies required for this new OS will be discussed. Finally, a specific storage application development case of extending Nano Server by adding a Fibre Channel SCSI Target will be covered in detail including preliminary performance results.

Development Techniques and Tips for Maximizing NVMe Performance

The NVMe architecture is very different from any previous storage interface, and methods to achieve the highest performance are still being developed. Attend this session for in-depth details on how to extract the highest performance from your NVMe software stack. Topics include hardware architecture, firmware design, and establishing performance priorities.

Designing High-Performance Non-Volatile Memory-aware RDMA Communication Protocols for Big Data Processing

Recent studies have shown that the performance of in-memory computing and storage systems can be significantly improved by leveraging high-performance networking technologies and interconnects. Most of these studies are based on low-latency, CPU-bypass, zero-copy communication over volatile memory (i.e., DRAM). On the other hand, emerging persistent memory technologies that offer byte-addressable persistence along with DRAM-like performance are providing opportunities to build novel high-performance in-memory subsystems for data-intensive applications.

Delivering Scalable Distributed Block Storage using NVMe over Fabrics (NVMe-oF)

The NVMe and NVMe over Fabrics (NVMe-oF) protocols provide highly efficient access to flash storage inside a server and over the network, respectively. The current generation of distributed storage software stacks uses proprietary protocols that are sub-optimal for delivering end-to-end low latency. Moreover, this increases the operational complexity of managing NVMe-oF-attached flash storage alongside distributed flash storage in private cloud infrastructure.

Hyperconverged Infrastructures – What They Can Do and Where They’re Going

Hyperconverged infrastructures combine compute, storage and networking components into a modular, scale-out platform that typically includes a hypervisor and some comprehensive management software. The technology is usually sold as self-contained appliance modules running on industry-standard server hardware with internal HDDs and SSDs. This capacity is abstracted and pooled into a shared resource for VMs running on each module or ‘node’ in the cluster.