Abstract
Several kinds of non-volatile memory (NVM) technology now in development aim to fill the wide gap between DRAM and NAND flash in terms of bandwidth, access latency, write endurance, retention time and raw bit error rate. The intermediate latency and error-rate characteristics of these new memory technologies make them awkward drop-in replacements for DRAM. Such a substitution will require significant changes to the CPU architecture and software stack to account for the much higher (and possibly stochastic) access latency. In addition, special attention is required for wear leveling, error correction and data protection at rest, problems that are not prominent with contemporary DRAM devices but are the bread and butter of SSD storage devices.
For these reasons, at HGST Research we have been working diligently to find out where there is room in the existing hardware/software ecosystem for emerging NVM technology when it is treated as block storage rather than main memory. In this presentation I will give an update on our previously published results using prototype PCI Express-attached PCM SSDs and our custom device protocol, DC Express, along with latency and performance measurements taken through a full device driver built on several different Linux kernel block layer architectures. A discussion of strategies for reducing latency spikes under Linux will follow.
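As a rough illustration of one such strategy (this is a sketch, not code from the DC Express driver), the snippet below issues a direct, polled 4 KiB read through the Linux block layer from user space. The device path and block size are placeholder assumptions, and the RWF_HIPRI hint only takes effect when the underlying driver exposes poll queues; otherwise the read completes normally via interrupts.

/*
 * Minimal sketch: one polled, direct 4 KiB read from a block device.
 * RWF_HIPRI asks the kernel to busy-poll for completion instead of
 * sleeping on an interrupt, which removes one source of latency spikes.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_HIPRI
#define RWF_HIPRI 0x00000001   /* kernel flag value, for older libc headers */
#endif

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/nvme0n1";  /* placeholder device path */
    int fd = open(dev, O_RDONLY | O_DIRECT);                /* bypass the page cache */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) {            /* O_DIRECT needs aligned buffers */
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    struct iovec iov = { .iov_base = buf, .iov_len = 4096 };

    /* Issue the read; RWF_HIPRI requests polled completion if the driver supports it. */
    ssize_t n = preadv2(fd, &iov, 1, 0, RWF_HIPRI);
    if (n < 0)
        perror("preadv2");
    else
        printf("read %zd bytes from %s\n", n, dev);

    free(buf);
    close(fd);
    return 0;
}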
Finally, I will preview our latest work, which shows that PCM SSDs deployed in a networked environment offer end-to-end application performance comparable to remote DRAM when accessed over InfiniBand RDMA with PCIe peer-to-peer communication.