Abstract
Previous generations of the Pure Storage FlashArray used InfiniBand RDMA as the cluster interconnect between storage controllers in a system. The current generation replaces this with PCI Express Non-Transparent Bridging (NTB). We will describe how we preserved the key attributes of high throughput, low latency, CPU-offloaded data movement, and kernel bypass while moving the interconnect from a discrete IB adapter to a CPU-integrated PCIe port, using new technologies including Linux vfio and PCIe NTB.
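To make the kernel-bypass aspect concrete, the sketch below shows one way a user-space process can map a PCIe NTB memory window through Linux vfio, after which ordinary loads and stores cross the bridge into the peer controller with no kernel transition on the data path. This is an illustrative sketch only, not the FlashArray implementation; the IOMMU group number, PCI device address, and choice of BAR2 as the memory window are placeholder assumptions.

/*
 * Illustrative sketch (assumptions: IOMMU group 26, NTB endpoint at
 * 0000:03:00.1, memory window exposed through BAR2). Error handling
 * is abbreviated for clarity.
 */
#include <fcntl.h>
#include <linux/vfio.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Open the vfio container and the IOMMU group the NTB device belongs to. */
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* hypothetical group number */

    struct vfio_group_status status = { .argsz = sizeof(status) };
    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;                               /* group not fully bound to vfio */

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* Get a file descriptor for the NTB endpoint (hypothetical PCI address). */
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:03:00.1");

    /* Look up the BAR that backs the NTB memory window (assumed to be BAR2). */
    struct vfio_region_info region = {
        .argsz = sizeof(region),
        .index = VFIO_PCI_BAR2_REGION_INDEX,
    };
    ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &region);

    /* Map the window; subsequent loads and stores cross the bridge directly. */
    void *window = mmap(NULL, region.size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, device, region.offset);
    if (window == MAP_FAILED)
        return 1;

    printf("mapped %llu-byte NTB window\n", (unsigned long long)region.size);

    munmap(window, region.size);
    close(device);
    close(group);
    close(container);
    return 0;
}

In practice, bulk transfers over such a window would be handed to a DMA engine rather than performed with CPU stores, which is what preserves the CPU-offload property of the original RDMA interconnect.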
Learning Objectives
Key attributes of an RDMA transport
Description of PCIe NTB
Implementation of RDMA on PCIe NTB