Future of Persistent Memory, DRAM, and SSD Form Factors Aligned with New System Architectures

The options for memory expansion and system acceleration are growing and starting to align with emergent serial and fabric-attached architectures. Application responsiveness and system performance are key values for end users. Modern data workloads require real-time processing of large datasets resident in main memory, but memory capacity has not scaled with the number of CPU cores available in modern servers.

State of the Computational Storage Market - A Supplier's View

When researching Computational Storage, a great deal of content continues to surface from vendors and, more importantly now, from the editors and authors of major outlets like EnterpriseAI and TechTarget. Even analysts like IDC, 451 Research, and most recently Gartner are getting into the mix.

Four Top Use Cases for Big Memory Today and Tomorrow

Big memory computing consists of DRAM, persistent memory, and big memory software like Memory Machine from MemVerge, all working together in a new class of software-defined memory. During the first year since MemVerge unveiled big memory in May of 2020, four strong use cases emerged: 1) in-memory databases, 2) cloud infrastructure, 3) animation and VFX, and 4) genomics. All share a critical need for composable memory with capacity, performance, availability, mobility, and security that can be tailored for the app.

Practical Computational Storage: Performance, Value, and Limitations

Ongoing standardization efforts for computational storage promise to make the offload of data-intensive operations to near-storage compute available across a wide variety of storage devices and platforms. However, many technologists, storage architects, and data center managers are still unclear on whether computational storage offers real benefits to practitioners. In this talk we describe Los Alamos National Laboratory's ongoing efforts to deploy computational storage in the HPC data center.
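
As a rough illustration of the offload idea such deployments examine, the sketch below models a device that can evaluate a filter near the data instead of shipping everything to the host. The `ComputationalDrive` class and its methods are hypothetical stand-ins for this sketch, not a real device or standard API.

```python
# Hypothetical sketch: offloading a filter to near-storage compute.
# The class and method names are illustrative assumptions only.

class ComputationalDrive:
    """Models a storage device with a simple near-storage filter engine."""

    def __init__(self, records):
        self.records = list(records)  # data resident on the device

    def read_all(self):
        # Conventional path: ship every record to the host.
        return list(self.records)

    def filter_offload(self, predicate):
        # Computational path: evaluate the predicate on-device and
        # return only matching records, cutting bus traffic.
        return [r for r in self.records if predicate(r)]

drive = ComputationalDrive(range(1_000_000))
hot = drive.filter_offload(lambda r: r % 1000 == 0)
print(len(hot))  # 1000 records cross the "bus" instead of 1,000,000
```

The value proposition hinges on selectivity: the fewer records survive the predicate, the more data movement the offload saves.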

Persistent Memory on CXL

The emerging Compute Express Link (CXL) includes support for memory devices and provides a natural place to attach persistent memory (pmem). In this talk, Andy describes the additions made to the CXL 2.0 specification to support pmem, covering device identification, configuration of interleave sets and namespaces, event reporting, and the programming model. Andy will also describe how multiple standards, including CXL, ACPI, and UEFI, come together to continue providing the standard SNIA NVM Programming Model for pmem.
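
The SNIA NVM Programming Model centers on mapping pmem into the application's address space, storing to it directly with loads and stores, and flushing to guarantee persistence. The sketch below mimics that map/store/flush pattern with an ordinary file and `mmap` as a stand-in; a real CXL pmem device would be exposed by the OS and made durable with CPU cache-flush instructions rather than a file flush.

```python
# Illustrative map/store/flush sketch of the SNIA NVM Programming Model,
# using an ordinary file as a stand-in for a real pmem region.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem.img")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)  # carve out a 4 KiB stand-in "pmem" region

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)  # map: region appears in the address space
    pm[0:5] = b"hello"                # store: plain byte writes, no read()/write() calls
    pm.flush()                        # flush: ensure the stores reach the media
    pm.close()

with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```

The point of the model is that the middle block never issues an I/O system call for the data itself; persistence is a memory operation plus an explicit flush.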

Why Distributed AI Needs Computational Storage

Artificial Intelligence is increasingly being used in every type of business and industry vertical including finance, telco, healthcare, manufacturing, automotive, and retail. The nature of AI is becoming distributed across multiple nodes in the data center but also across the cloud and edge. Traditional local and networked storage solutions often struggle to meet the needs of AI running on many different types of devices in various locations. Computational storage can solve the challenge of data locality for distributed AI.

Beyond Zoned Namespace - What Do Applications Want?

As data processing engines rely more and more on log semantics, it is natural to extend Zoned Namespaces to provide a native log interface by introducing variable-size, byte-appendable, named zones. With the newly introduced ZNSNLOG interface, the storage device enables not only lower write amplification and higher log-write performance, but also a more flexible and robust naming service. Given the trend toward a compute-storage disaggregation paradigm and more capable computational storage, the ZNSNLOG extension opens more opportunities for near-data processing.
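
As a hedged sketch of what a named, variable-size, byte-appendable zone might look like to software, the in-memory model below is illustrative only; the class names and semantics are assumptions for this sketch, not the actual ZNSNLOG proposal.

```python
# Illustrative in-memory model of byte-appendable, named log zones.
# Names and semantics are assumptions, not the real ZNSNLOG interface.

class NamedLogZone:
    """A named zone that accepts byte-granular appends up to its capacity."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.data = bytearray()

    def append(self, payload: bytes) -> int:
        """Append bytes (no fixed block size); return the record's offset."""
        if len(self.data) + len(payload) > self.capacity:
            raise IOError(f"zone {self.name!r} is full")
        offset = len(self.data)
        self.data.extend(payload)
        return offset

class ZNSNLogDevice:
    """Zones are created by name with per-zone sizes, unlike fixed-size ZNS zones."""

    def __init__(self):
        self.zones = {}

    def create_zone(self, name, capacity):
        self.zones[name] = NamedLogZone(name, capacity)
        return self.zones[name]

dev = ZNSNLogDevice()
wal = dev.create_zone("db-wal", capacity=1 << 20)
off = wal.append(b"put k1 v1\n")
print(off, bytes(wal.data))  # 0 b'put k1 v1\n'
```

A write-ahead log is the natural client of such an interface: the engine appends records of arbitrary size and addresses them by zone name plus returned offset, with no read-modify-write of fixed blocks.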

A New Path to Better Data Movement within System Memory, Computational Memory with SDXI

Today, computation associated with data in use occurs in system memory. As the system memory envelope expands to include different tiers and classes of memory, helped by memory fabrics, the data-in-use envelope grows with it. In many usage models, moving data to where the computation occurs is important. In others, data copies are needed for compute scaling. Data movement is a resource-intensive operation used by a variety of software stacks and interfaces.
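
One common way to offload such movement, in the spirit of SDXI, is for software to post copy descriptors to a queue that a data-mover engine drains asynchronously, rather than the CPU executing memcpy loops itself. The sketch below models that producer/mover pattern in plain software; the descriptor fields and class names are simplified assumptions, not the real SDXI descriptor layout.

```python
# Simplified software model of descriptor-driven data movement.
# Fields and names are illustrative, not the SDXI specification's layout.
from dataclasses import dataclass

@dataclass
class CopyDescriptor:
    src: bytearray   # source buffer (conceptually, any memory tier)
    src_off: int
    dst: bytearray   # destination buffer
    dst_off: int
    length: int

class DataMover:
    """Drains a queue of copy descriptors, standing in for a mover engine."""

    def __init__(self):
        self.queue = []

    def post(self, desc):
        self.queue.append(desc)  # producer enqueues work and returns immediately

    def process(self):
        # The "engine" performs the copies described by each descriptor.
        while self.queue:
            d = self.queue.pop(0)
            d.dst[d.dst_off:d.dst_off + d.length] = \
                d.src[d.src_off:d.src_off + d.length]

src = bytearray(b"persistent memory payload")
dst = bytearray(32)
mover = DataMover()
mover.post(CopyDescriptor(src, 0, dst, 4, 10))
mover.process()
print(bytes(dst[4:14]))  # b'persistent'
```

Decoupling the request (post) from the execution (process) is what lets many software stacks share one mover and lets the copies overlap with other CPU work.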
