Abstract
With the amount of generated data growing steeply, not all of it can be saved, and an even smaller portion is actually analyzed and leveraged. In addition, many expensive host cycles must be invested in pre-processing before the data can be used for computation. Computational storage can process data offline, at rest, where it is stored on the drive, and generate a compact and relevant representation of it, enabling more efficient host processing. While inline processing of data in transit to and from the drive can significantly improve overall performance, offline processing with computational storage enables more data to be uncovered and more use cases to emerge. This presentation will include concrete examples of AI inference performed at the storage device, considered in the context of the host computation it supports.