Abstract
This talk presents an analysis of storage technologies and architectures that can be used to accelerate deep learning processing. Deep learning workloads often run into compute and memory-bandwidth bottlenecks, which GPUs can help address. Smarter solutions can be designed by rethinking the overall system architecture and leveraging both current and emerging storage technologies, with the goal of reducing data movement and thereby increasing computing efficiency.

You will learn:
- Architecture: how to work with NVMe, NVDIMM, and NVMe-oF for computational storage
- Applications: benefits for deep learning (both training and inference)