Enhancing RocksDB for SSD Endurance and Performance

Submitted by Anonymous (not verified) on

RocksDB is a popular storage engine used across a wide range of database platforms. However, aspects of RocksDB's design adversely affect SSD health and endurance. To enable efficient and healthy use of SSDs by this engine, Toshiba Memory has re-architected RocksDB in a way that solves the SSD health and endurance challenge while maintaining, and in some cases improving, performance.

This session will introduce the challenges and changes made to the RocksDB source code and share the test results from these modifications.

NVM Express State of the Union – 2022 NVMe Annual Update

NVM Express® (NVMe®) has become synonymous with high-performance storage with widespread adoption in client, cloud, and enterprise applications. The NVMe 2.0 family of specifications, released in June 2021, allows for faster and simpler development of NVMe solutions to support increasingly diverse environments, now including Hard Disk Drives (HDDs).

Optimal Performance Parameters for NVMe SSDs

This presentation will discuss the Optimal Performance Parameters (OPTPERF and OPTRPERF) found in Section 5.8.2 of the NVM Express® (NVMe®) NVM Command Set Specification. SSD manufacturers have several choices when setting the value of each performance parameter (NPWG, NPWA, NPDG, NPDA, NOWS, NPRG, NPRA, NORS). This presentation will highlight the intended differences between some of the parameters (e.g., NPWG vs. NOWS), which may be driven by either NAND or SSD controller attributes.
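As background for the discussion above: per the NVM Command Set Specification, these Identify Namespace fields are 0's-based counts of logical blocks, so a host converts them to byte sizes using the active LBA format. A minimal sketch (the sample field values below are hypothetical, not from any particular drive):

```python
# The write/deallocate performance parameters (NPWG, NPWA, NPDG, NPDA, NOWS)
# and their read counterparts (NPRG, NPRA, NORS) are reported as 0's-based
# counts of logical blocks in the Identify Namespace data structure.
def param_to_bytes(field_value: int, lba_size: int) -> int:
    """Convert a 0's-based logical-block count to bytes."""
    return (field_value + 1) * lba_size

LBA_SIZE = 4096   # from the active LBA format; hypothetical 4 KiB sector
npwg = 3          # hypothetical: preferred write granularity = 4 blocks
nows = 31         # hypothetical: optimal write size = 32 blocks

print(param_to_bytes(npwg, LBA_SIZE))  # 16384  (16 KiB)
print(param_to_bytes(nows, LBA_SIZE))  # 131072 (128 KiB)
```

A host that aligns and sizes writes to these values (e.g., issuing NOWS-sized writes on NPWA-aligned boundaries) avoids read-modify-write overhead inside the SSD.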

NVMe/FC or NVMe/TCP: An In-Depth NVMe Packet and Flow-Level Comparison Between the Two Transport Options

All major storage vendors have started to offer NVMe/FC on their storage arrays. Almost all of them also offer an IP storage option via 25G Ethernet iSCSI. Now, with the introduction of NVMe/TCP, which offers FC-like services (discovery, notification, zoning), storage vendors can provide a software upgrade that adds NVMe/TCP as an option. Customers have started to evaluate the NVMe transport options and are asking which infrastructure (Ethernet or FC) they should invest in going forward.

NVMe-oF™ Boot

Soon, computers attached to a network will be able to boot across it using NVMe over Fabrics. This capability is often called Boot from SAN. Currently, established storage networking technologies such as Fibre Channel and iSCSI have standardized solutions that allow attached computer systems to boot from OS images stored on attached storage nodes. The lack of this capability in the NVMe-oF architecture presents a barrier to adoption.

libvfn: A Low-level NVMe Application and VFIO Driver Framework

This talk presents the design and implementation of libvfn, a new open-source library for interacting with PCIe-based NVMe devices from user space using VFIO. The core of the library is deliberately low-level and aims to allow NVMe controller verification and testing teams to interact with devices at the register and queue level. While the library ships with a production-ready NVMe driver with a high-level API, it is designed to expose enough low-level VFIO functionality that custom drivers can be implemented for any PCIe device as required by the application.
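To illustrate the register-level work such a framework enables (this is a generic sketch, not libvfn's API): once a driver has mapped the controller's BAR0, the first thing it typically does is decode the 64-bit CAP register. The bit positions below follow the NVMe Base Specification:

```python
# Decode selected fields of the NVMe controller CAP register (offset 0x0
# in BAR0). Bit layouts per the NVMe Base Specification; a low-level
# driver reads this register to size queues and pick a doorbell stride.
def decode_cap(cap: int) -> dict:
    return {
        "MQES":   (cap & 0xFFFF) + 1,           # max queue entries (0's based)
        "CQR":    (cap >> 16) & 0x1,            # contiguous queues required
        "TO_ms":  ((cap >> 24) & 0xFF) * 500,   # ready timeout, 500 ms units
        "DSTRD":  4 << ((cap >> 32) & 0xF),     # doorbell stride, in bytes
        "MPSMIN": 4096 << ((cap >> 48) & 0xF),  # min memory page size, bytes
    }

# Hypothetical CAP value: MQES field 0xFF, CQR set, TO field 0x1F.
cap = decode_cap((0x1F << 24) | (1 << 16) | 0xFF)
print(cap["MQES"])   # 256 entries per queue
print(cap["TO_ms"])  # 15500 ms ready timeout
```

A testing team can run this same decode against a live register read (e.g., through a VFIO region mapping) to verify a controller reports sane capabilities.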

NVM Express State of the Union – 2023 NVMe® Annual Update

NVM Express® (NVMe®) technology has become synonymous with high-performance storage, seeing widespread adoption in client, cloud, and enterprise applications. Since the release of the NVMe 2.0 family of specifications, the NVM Express organization has released a number of new features that allow for faster and simpler development of NVMe solutions to support increasingly diverse environments.

xNVMe and io_uring NVMe Passthrough – What does it Mean for the SPDK NVMe Driver?

Almost 10 years ago, the SPDK userspace polled-mode NVMe driver showed performance and efficiency far surpassing what was possible with the Linux kernel. But in recent years, Linux has responded with io_uring and asynchronous NVMe passthrough interfaces. The xNVMe project has also helped storage projects and applications adapt to the ever-growing list of Linux storage interfaces. This talk will compare the strengths of the SPDK and Linux NVMe drivers, explain how xNVMe has enabled io_uring NVMe passthrough in SPDK, and share some early performance results.
