Present and Future Uses for Computational Storage and Persistent Memory: A Panel Discussion

As CS and PMEM products have moved from concept into reality, there has been a similar migration in their application usages and value propositions. Join in as this group of experts debates why CS and PMEM are needed, the real-world problems they solve, the barriers still blocking the way, and the pots of gold over the horizon. Additionally, this lively panel will tussle over CPUs vs. GPUs vs. DPUs, cloud vs. edge adoption, scale up vs. scale out, and legacy vs. new interconnects, as well as your live audience questions!

Security Impacts to a Changing Ecosystem - a Panel Discussion

Computational Storage may introduce new attack surfaces for hackers. The threats themselves may be familiar, but they can now potentially be deployed in the storage device itself. Vendors and end users need to look hard at security to ensure secure deployments. This session will explore supply chain issues and implications, the state of specifications and standards related to this technology, and the potential security opportunities it presents. Panelists are: Eric Hibbard, Samsung Semiconductor; Walt Hubis, Micron Corporation; David McIntyre, Samsung Corporation.

An NVMe-based SQL Query Engine for accelerating Big-Data Applications

NVMe-based Computational Storage offers the key benefits of computational storage and aligns with a popular, open standard. Using Computational Storage, we can build systems with improved performance and higher efficiency compared to legacy computer architectures. As Computational Storage becomes more pervasive, we are seeing a move from basic Computational Storage Functions (e.g., compression) to more complex functions. In this paper we present an NVMe-based SQL Query Engine based on our NoLoad Computational Storage Device.
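
As a rough, hypothetical illustration of the query-pushdown idea (the interface below is invented for exposition and is not the NoLoad API), the host submits a predicate to a filter function running on the computational storage device and receives only the matching rows, so far less data crosses the bus:

```python
# Hypothetical sketch of SQL predicate pushdown to a computational storage device.
# The CSD interface below is invented for illustration; it is not the NoLoad API.

from dataclasses import dataclass

@dataclass
class Predicate:
    column: str
    op: str
    value: int

class ComputationalStorageDevice:
    """Stands in for a CSD exposing a filter Computational Storage Function."""

    def __init__(self, rows):
        self._rows = rows  # the data "at rest" on the device

    def filter(self, pred: Predicate):
        # Filtering runs device-side, so only matching rows are returned to the host.
        ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b, "=": lambda a, b: a == b}
        return [row for row in self._rows if ops[pred.op](row[pred.column], pred.value)]

# Host side: instead of reading the whole table and filtering on the CPU,
# push the WHERE clause of "SELECT * FROM readings WHERE temp > 90" to the device.
csd = ComputationalStorageDevice([{"id": 1, "temp": 72}, {"id": 2, "temp": 95}])
print(csd.filter(Predicate(column="temp", op=">", value=90)))
```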

What is the Carbon Footprint Benefit of Computational Storage?

In this talk, a carbon footprint analysis methodology will be presented, explaining the different parameters that need to be considered and the different outputs that need to be observed to really understand a carbon footprint analysis. A carbon footprint analysis example will be provided with a CS system benchmark.
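
As a rough illustration of the kind of accounting such a methodology involves (the parameter names and values below are placeholders, not figures from the talk), a footprint estimate typically combines amortized embodied carbon with operational carbon from energy use:

```python
# Hypothetical sketch of a simple carbon-footprint estimate for a storage system.
# All parameter names and values are illustrative placeholders, not figures from the talk.

def carbon_footprint_kg(
    embodied_kgco2e: float,            # manufacturing ("embodied") carbon of the hardware
    lifetime_years: float,             # amortization period for the embodied carbon
    avg_power_w: float,                # average operating power draw during the benchmark
    grid_intensity_kg_per_kwh: float,  # carbon intensity of the electricity supply
    years_in_service: float,
) -> float:
    """Return estimated kgCO2e attributable to the system over `years_in_service`."""
    amortized_embodied = embodied_kgco2e * (years_in_service / lifetime_years)
    energy_kwh = avg_power_w / 1000.0 * 24 * 365 * years_in_service
    operational = energy_kwh * grid_intensity_kg_per_kwh
    return amortized_embodied + operational

# Example: compare a baseline server against a computational-storage configuration
# that (hypothetically) draws less host power for the same benchmark workload.
baseline = carbon_footprint_kg(1200, 5, 450, 0.4, 3)
cs_system = carbon_footprint_kg(1300, 5, 320, 0.4, 3)
print(f"baseline: {baseline:.0f} kgCO2e, CS system: {cs_system:.0f} kgCO2e")
```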

Empowering Real-Time Decision Making for Large-Scale Datasets with SSD-like Economics 

In the technology-driven world we live in, the speed of data access and the real-time nature of data can make or break a business for data-driven organizations. Hence, many organizations leverage in-memory solutions for real-time decision-making to capture trends and stay competitive. However, as data sets extend beyond memory footprints, many enterprises take a performance- and cost-compromised approach, keeping critical datasets in memory for real-time access while relegating datasets perceived as less critical to SSD storage, resulting in delayed business decisions and missed business opportunities.

Methods to Evaluate or Identify Suitable Storage for IoT/AI Boards

Popular boards like the Raspberry Pi use low-cost MicroSD cards for booting and storing data. This paper explains methods for obtaining benchmark results and lists the important parameters, and their values, that help evaluate which card should be chosen to implement a given application on a Raspberry Pi board. The paper also explains a method to emulate an AI application that helps determine the lifetime of a MicroSD card under various workloads.
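
As a simple sketch of the kind of lifetime estimate such an emulation enables (the endurance rating, write amplification, and workload figures below are assumptions, not values from the paper), card life can be approximated from the rated endurance and the measured write rate of the emulated workload:

```python
# Hypothetical sketch: estimate MicroSD card lifetime from an emulated AI workload.
# The endurance (TBW), write-amplification, and write-rate figures are assumptions only.

def estimated_lifetime_years(rated_tbw: float, writes_gb_per_day: float,
                             write_amplification: float = 2.0) -> float:
    """Years until the card's rated total-bytes-written (TBW) budget is consumed."""
    effective_gb_per_day = writes_gb_per_day * write_amplification
    days = (rated_tbw * 1000.0) / effective_gb_per_day  # TBW in TB -> GB
    return days / 365.0

# Example: a card assumed to be rated for ~32 TBW, running an image-logging AI
# workload measured (during emulation) to write ~5 GB/day.
print(f"estimated lifetime: {estimated_lifetime_years(32, 5):.1f} years")
```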

SSDs That Think

With the amount of generated data growing steeply, not all of it can be saved, and an even smaller portion actually gets analyzed and leveraged. In addition, many expensive host cycles need to be invested in pre-processing before the data is actually used for computation. Computational storage can process data stored locally on the drive offline, at rest, and generate a compact and relevant representation of it, enabling more efficient host processing.

HPC For Science Based Motivations for Computation Near Storage

Scientific data is mostly stored as linear bytes in files, but it almost always has hidden structure that resembles records with keys and values, oftentimes in multiple dimensions. Further, the bandwidths required to service HPC simulation workloads will soon approach tens of terabytes/sec, with single data files surpassing a petabyte and single sets of data from a campaign approaching 100 petabytes. Multiple tasks, from distributed analytical/indexing functions to data management tasks like compression, erasure encoding, and dedup, are all potentially performed more efficiently and economically near the storage.
