SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
The SNIA Computational Storage TWG released the Computational Storage Architecture and Programming Model v1.0 in August 2022 and the Computational Storage API v1.0 in October 2023, and it continues to advance Computational Storage with enhancements to both specifications.
The CS Architecture and Programming Model enhancements focus on security for multitenancy and on the sequencing of commands, while the CS API enhancements provide clarifications to facilitate understanding. This presentation will describe these enhancements in detail and discuss the current state of both the SNIA CS Architecture and Programming Model and the SNIA CS API; an illustrative sketch of the API's call flow follows the learning objectives below.
Understand the enhancements to the SNIA CS TWG Computational Storage Architecture and Programming Model.
Understand the enhancements to the SNIA CS TWG Computational Storage API.
Understand how the SNIA Computational Storage Architecture relates to NVMe Computational Storage.
Describe the future directions in which the Computational Storage TWG is heading, including SDXI and Computational Memory.
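As context for the API objectives above, here is a minimal sketch of the discover/allocate/execute flow that a Computational Storage API standardizes. The cs* names echo the spec's naming style, but every signature and stub body below is an illustrative assumption rather than the normative API; consult the published CS API v1.0 for the real definitions.

/* Minimal sketch of a host-side Computational Storage flow.
 * Signatures and stub bodies are illustrative assumptions, not the
 * SNIA CS API's normative definitions. */
#include <stddef.h>
#include <stdio.h>

typedef int cs_status;               /* stand-in for the spec's status type */
typedef struct { int id; } cs_dev;   /* stand-in device handle */
typedef struct { void *p; } cs_mem;  /* stand-in device-memory handle */

/* Stub bodies so the sketch runs; a real CSx library provides these. */
static cs_status csOpenDevice(const char *path, cs_dev *dev) {
    printf("open CSx at %s\n", path); dev->id = 0; return 0;
}
static cs_status csAllocMem(cs_dev *dev, size_t len, cs_mem *mem) {
    printf("alloc %zu bytes of device-local memory\n", len);
    (void)dev; mem->p = NULL; return 0;
}
static cs_status csQueueComputeRequest(cs_dev *dev, const char *csf,
                                       cs_mem *in, cs_mem *out) {
    /* In the real API this enqueues a Computational Storage Function
     * (CSF) to run against data already resident on the device. */
    printf("run CSF '%s'\n", csf);
    (void)dev; (void)in; (void)out; return 0;
}

int main(void)
{
    cs_dev dev; cs_mem in, out;
    csOpenDevice("/dev/csx0", &dev);   /* hypothetical device path */
    csAllocMem(&dev, 1 << 20, &in);
    csAllocMem(&dev, 1 << 20, &out);
    csQueueComputeRequest(&dev, "filter", &in, &out);
    return 0;
}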
NVMe is developing a protocol for host-addressable Subsystem Local Memory (SLM). CXL is intended to interface to a wide variety of devices, including memory and accelerators. CXL is gaining momentum as a preferred interface for disaggregated memory, while CXL accelerator devices are still in the early stages of development. Computational Storage is one example of an accelerator that is well positioned to gain additional benefits from CXL: data residing in SLM on an NVMe device could potentially be accessed through the CXL load/store interface while maintaining coherency with host memory. In this talk, we will discuss use cases for CXL-enabled SLM and bring the audience up to speed on the development of NVMe TP4184, which enables host addressability of SLM.
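As a rough illustration of what host-addressable SLM could look like to software, here is a hedged sketch in C. The /dev/slm0 device node is entirely hypothetical: TP4184 is still in development and no standard host ABI exists yet, so this conveys only the load/store programming model that CXL.mem would enable.

/* Sketch: treating NVMe Subsystem Local Memory (SLM) as ordinary
 * host-addressable memory once exposed through a CXL.mem (load/store)
 * interface. The device node is hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical character device exposing an SLM window. */
    int fd = open("/dev/slm0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4096;
    uint8_t *slm = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (slm == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Plain loads and stores: with CXL.mem the hardware keeps this
     * region coherent with host memory, so no explicit DMA or
     * cache-flush step is needed to hand data to an on-drive
     * compute engine. */
    memset(slm, 0, len);
    slm[0] = 0x42;          /* store */
    uint8_t v = slm[0];     /* load  */
    printf("read back: 0x%02x\n", v);

    munmap(slm, len);
    close(fd);
    return 0;
}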
Data redundancy solutions (e.g., RAID or erasure coding) are by nature compute intensive and consume substantial DRAM bandwidth in the write path; RAID solutions in particular also contribute to CPU cache thrashing. As NVMe SSDs are added to a system, read/write performance doubles with every PCIe generation, shifting the performance bottleneck to these data redundancy solutions, both hardware and software. To address this, KIOXIA is investigating a RAID offload technology orchestrated by an architecture that offloads compute and DRAM bandwidth to the SSDs themselves. RAID offload needs to be a scale-out solution, so that performance scales proportionally as the number of SSDs increases. It also needs to be flexible enough for existing hardware and software RAID applications to meet the following criteria: maximize performance, address memory-wall issues, optimize CPU core usage and DRAM bandwidth, remain RAID-geometry agnostic, minimize TCO, and reuse the existing mature RAID stack and user interface.
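To make concrete what a RAID offload would move off the host, here is a sketch of the RAID-5 parity math itself (not KIOXIA's design, which is not public). Computing P = D0 XOR D1 XOR ... XOR Dn-1 on the host streams every strip through the CPU caches and DRAM; that is exactly the compute and bandwidth the abstract describes reclaiming by pushing this loop into the SSDs.

/* Host-side RAID-5 parity computation: the per-write work that a
 * RAID offload would distribute to the drives. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void xor_parity(uint8_t *parity, const uint8_t *const strips[],
                       size_t nstrips, size_t strip_len)
{
    for (size_t i = 0; i < strip_len; i++) {
        uint8_t p = 0;
        for (size_t s = 0; s < nstrips; s++)
            p ^= strips[s][i];  /* the host CPU touches every byte of every strip */
        parity[i] = p;
    }
}

int main(void)
{
    uint8_t d0[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint8_t d1[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    uint8_t d2[8] = {0xAA, 0xAA, 0xAA, 0xAA, 0x55, 0x55, 0x55, 0x55};
    const uint8_t *strips[] = { d0, d1, d2 };
    uint8_t p[8];

    xor_parity(p, strips, 3, sizeof p);
    /* Any lost strip can be rebuilt by XOR-ing the parity with the
     * surviving strips. */
    printf("P[0] = 0x%02x\n", p[0]);
    return 0;
}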
The need for efficient data movement in computationally intensive applications has driven advancements in Peer-to-Peer Direct Memory Access (P2PDMA) technology. This need is greatly amplified in AI-related applications (data gathering, training, and inference). P2PDMA facilitates direct data transfers between PCIe devices, bypassing system memory and reducing latency and bandwidth bottlenecks. The framework has existed for many years but is finally seeing upstream traction. This presentation will delve into recent significant upgrades to the P2PDMA framework, explore test setups leveraging NVMe Computational Storage, discuss real-world use cases demonstrating the benefits of P2PDMA, and examine how it can be applied in an increasingly AI-driven world.
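For readers new to the framework, here is a hedged sketch of the userspace P2PDMA path that recent Linux kernels expose: peer-to-peer memory (for example, an NVMe controller memory buffer) is mmap()ed from sysfs and then used as an O_DIRECT I/O buffer, so the payload moves device to device without touching system DRAM. The PCI address and device paths are examples, and the sysfs interface details may vary by kernel version.

/* Sketch of userspace P2PDMA: map p2p memory from sysfs, then use it
 * as the buffer for an O_DIRECT NVMe read so the DMA is peer-to-peer. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Example PCI function exposing p2pmem; adjust for your system. */
    int pfd = open("/sys/bus/pci/devices/0000:03:00.0/p2pmem/allocate",
                   O_RDWR);
    if (pfd < 0) { perror("open p2pmem"); return 1; }

    size_t len = 2 * 1024 * 1024;
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, pfd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* The O_DIRECT read lands straight in the peer device's memory;
     * system memory never carries the payload. */
    int nfd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (nfd < 0) { perror("open nvme"); return 1; }

    ssize_t n = pread(nfd, buf, len, 0);
    printf("read %zd bytes device-to-device\n", n);

    close(nfd);
    munmap(buf, len);
    close(pfd);
    return 0;
}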
As industries grapple with an ever-expanding and complex sea of data, there is a paramount need to rethink storage and analytics. For decades, the industry has tried to push analytics closer to the data to accelerate queries and reduce costs, with varying degrees of success. Data warehouses can accelerate repeated queries on known relevant data, but they require significant copying and pre-processing, and they become very expensive as datasets grow. Newer “data lakehouse” approaches aim to query raw data objects directly in storage, but they introduce new data formats, metadata catalogs, and additional compute resources. At AirMettle, we approach this differently: we leverage existing data formats, storage server infrastructure, and even commodity SSDs to dramatically accelerate analytics, lower costs, and reduce power consumption. AirMettle’s Analytical Data Platform is a software-defined object storage service that accelerates Big Data analytics operations by up to 100x compared with conventional methods.
In this talk, we will discuss the nature of data, how it influenced the development of our Analytical Data Platform, and how the platform's internal architecture stores and processes semi-structured data. We will showcase use cases and performance results that demonstrate the speed and agility the platform brings to the analytics landscape, and we will share insights into our vision and progress in delegating processing to commodity storage devices.