Persistent Memory + Enterprise-Class Data Services = Big Memory

webinar

Author(s)/Presenter(s):

Charles Fan

Library Content Type

Presentation

Abstract

Data-centric applications such as AI/ML, IoT, analytics, and High Performance Computing (HPC) need to process petabytes of data with nanosecond latency. This is beyond the current capabilities of in-memory architectures because of DRAM’s high cost, limited capacity, and lack of persistence. As a result, the growth of in-memory computing has been throttled, with DRAM relegated to only the most performance-critical workloads. In response, a new category, Big Memory Computing, has emerged to expand the market for memory-centric computing. In Big Memory Computing, the new normal is data-centric applications living in byte-addressable, and much lower cost, persistent memory. Big Memory consists of a foundation of DRAM and persistent memory media plus a memory virtualization layer. The virtualization layer allows memory to scale out massively across a cluster to form memory lakes, and is protected by new memory data services that provide snapshots, replication, and lightning-fast recovery. The market is poised to take off, with IDC forecasting persistent memory revenue to grow at an explosive compound annual growth rate of 248% from 2019 to 2023.
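
To make the idea of byte-addressable persistent memory concrete, the sketch below uses PMDK's libpmem rather than MemVerge's product API, which the abstract does not describe. It maps a file on a DAX-mounted persistent memory filesystem into the address space, writes to it with ordinary CPU stores, and flushes the write so it survives power loss. The path /mnt/pmem/demo is a hypothetical example.

/*
 * Minimal sketch: byte-addressable persistent memory access via libpmem.
 * Assumes PMDK is installed and a DAX-mounted filesystem at /mnt/pmem
 * (hypothetical path). Build with: cc demo.c -lpmem
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on persistent memory directly into the address space. */
    char *addr = pmem_map_file("/mnt/pmem/demo", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary loads and stores: the data is byte-addressable like DRAM. */
    strcpy(addr, "hello, big memory");

    /* Flush CPU caches so the write is durable across power loss. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}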

Learning Objectives

Session participants will learn the concept of Big Memory Computing, including its origins, its purpose, the pain points it helps solve, and how it leverages Storage Class Memory to revolutionize the data center and render traditional storage architecture obsolete.

Session participants will learn about Big Memory Computing’s use cases in industries such as financial services.

Session participants will learn about the data service technology MemVerge delivers to support Big Memory Computing.