On Demand Webinars

10:00 am PT / 1:00 pm ET

Using software to perform memory copies has been the gold standard for applications performing memory-to-memory data movement or system memory operations. With new accelerators and memory types enriching the system architecture, accelerator-assisted memory data movement and transformation need standardization.

SNIA's Smart Data Accelerator Interface (SDXI) Technical Work Group (TWG) is at the forefront of standardizing this. The SDXI TWG is designing an industry-open standard for a memory-to-memory data movement and acceleration interface that is extensible, forward-compatible, and independent of I/O interconnect technology. A candidate for the v1.0 SNIA SDXI standard is now in review.

In a related effort, Compute Express Link™ (CXL™) is an industry-supported Cache-Coherent Interconnect for Processors, Memory Expansion, and Accelerators. CXL is designed to be an industry-open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as Artificial Intelligence and Machine Learning.

In this webcast, we will:

  • Introduce SDXI and CXL
  • Discuss data movement needs in a CXL ecosystem
  • Cover SDXI advantages in a CXL interconnect
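For readers who want to see the underlying idea in code before watching, here is a minimal sketch of how descriptor-based data movement differs from a plain software copy. SDXI itself defines descriptors submitted through memory-based rings, but the field layout, opcode value, and ring below are hypothetical illustrations only, not the actual SDXI v1.0 descriptor format.

    import struct

    # Software copy: the CPU itself moves every byte (the memcpy model).
    src = bytearray(b"A" * 4096)
    dst = bytearray(4096)
    dst[:] = src

    # Accelerator-assisted copy: the CPU only builds a small work descriptor
    # and hands it to a data mover through a shared-memory ring.
    # The 64-byte layout and opcode below are hypothetical, for illustration only.
    COPY_OP = 0x1

    def build_copy_descriptor(src_addr, dst_addr, length):
        # opcode, flags, length, source address, destination address, pad to 64 bytes
        return struct.pack("<IIQQQ32x", COPY_OP, 0, length, src_addr, dst_addr)

    submission_ring = []  # stands in for a hardware-visible descriptor ring
    submission_ring.append(build_copy_descriptor(0x10000000, 0x20000000, 4096))
    # A real driver would now write a doorbell register and later read a
    # completion status, leaving the CPU free while the copy proceeds.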

Download PDF

Read Q&A Blog

What’s in a Name? Memory Semantics and Data Movement with CXL™ and SDXI
9:00 am PT / 12:00 noon EDT

At the 2022 Open Compute Global Summit, OEMs, cloud service providers, hyperscale data center operators, and SSD vendors showcased products and their vision for how the family of EDSFF form factors solves real data challenges. In this webcast, SNIA SSD SIG co-chairs Cameron Brett of KIOXIA and Jonmichael Hands of Chia Network explain how having a flexible and scalable family of form factors allows for optimization for different use cases, different media types on SSDs, scalable performance, and improved data center TCO. They'll highlight the latest SNIA specifications that support these form factors, provide an overview of platforms that are EDSFF-enabled, and discuss the future for new product and application introductions.

Download PDF

EDSFF Taking Center Stage in the Data Center
11:00 am PT / 2:00 pm ET

Kubernetes platforms offer a unique cloud-like experience — all the flexibility, elasticity, and ease of use — on premises, in a private or public cloud, even at the edge. The ease and flexibility of turning services on when you want them and off when you don't is an enticing prospect for developers as well as application deployment teams, but it has not been without its challenges.
 
Our Kubernetes panel of experts will debate the challenges and how to address them, discussing:

  • So how are all these trends coming together?
  • Is cloud repatriation really a thing?
  • How are traditional hardware vendors reinventing themselves to compete?
  • Where does the data live?
  • How is the data accessed?
  • What workloads are emerging? 

Download PDF

Read Q&A Blog

Kubernetes Trials & Tribulations: Cloud, Data Center, Edge
10:00 am PT / 1:00 pm ET

With the emergence of GPUs, xPUs (DPU, IPU, FAC, NAPU, etc.), and computational storage devices for host offload and accelerated processing, a wild west of programming frameworks is emerging, consolidating, and vying to become the preferred software stack that best integrates the application layer with these underlying processing units.

This webcast will provide an overview of programming frameworks that support (1) GPUs (CUDA, SYCL, OpenCL, oneAPI), (2) xPUs (DASH, DOCA, OPI, IPDK), and (3) Computational Storage (SNIA computational storage API, NVMe TP4091 and FPGA programming shells).

We will discuss strengths, challenges, and market adoption across these programming frameworks as we untangle the alphabet soup of new frameworks, including:

  • AI/ML: OpenCL, CUDA, SYCL, oneAPI
  • xPU: DOCA, OPI, DASH, IPDK
  • Core data path frameworks: SPDK, DPDK
  • Computational Storage: SNIA Standard 0.8 (in public review), TP4091
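As one concrete point of reference for the frameworks listed above, the sketch below is a small OpenCL host program written with the PyOpenCL bindings (PyOpenCL is not mentioned in the webcast; it is used here only to keep this listing's examples in Python). The kernel just doubles a vector on whatever OpenCL device is available, but the compile, transfer, launch, and read-back pattern it shows is the part every framework above has to express in some form.

    import numpy as np
    import pyopencl as cl

    # Pick an available OpenCL platform/device and create a command queue.
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.arange(1024, dtype=np.float32)
    out = np.empty_like(a)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    # Compile a trivial kernel that doubles each element on the device.
    program = cl.Program(ctx, """
    __kernel void double_it(__global const float *a, __global float *out) {
        int i = get_global_id(0);
        out[i] = 2.0f * a[i];
    }
    """).build()

    program.double_it(queue, a.shape, None, a_buf, out_buf)  # launch 1024 work items
    cl.enqueue_copy(queue, out, out_buf)  # read the result back to host memory
    assert np.allclose(out, 2 * a)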

Read Q&A Blog

Download PDF

You’ve Been Framed! xPU, GPU & Computational Storage Programming Frameworks
10:00 am PT / 1:00 pm ET

Widespread adoption of Kubernetes over the last several years has been remarkable, and Kubernetes is now recognized as the most popular orchestration tool for containerized workloads. As applications and workflows in Kubernetes continue to evolve, so must the platform and storage.

So, where are we today, and where are we going? Find out in this “15 Minutes in the Cloud” session, where we’ll discuss:

  • Persistence - From ephemeral to persistent - what has putting persistence in the mix done to applications?
  • Business Continuity - What’s needed for business continuity, backup & recovery and DR?
  • Deployment - Kubernetes delivered as a service, in the cloud, on-premises, in the data center, and at the edge. How is that different in each case?
  • Performance/Scalability – How do you scale and still ensure performance?
  • Trends – What are the business drivers and what does the future hold?

Download PDF

15 Minutes in the Cloud: Kubernetes is Evolving, Are You?
10:00 am PT / 1:00 pm ET

Our first and second webcasts in this xPU series explained what xPUs are, how they work, and what they can do. In this third webcast, we will dive deeper into next steps for xPU deployment and solutions, discussing:

When to deploy

  • Pros and cons of dedicated accelerator chips versus running everything on the CPU
  • xPU use cases across hybrid, multi-cloud and edge environments
  • Cost and power considerations

Where to deploy

  • Deployment operating models: Edge, Core Data Center, CoLo, Public Cloud
  • System location: In the server, with the storage, on the network, or in all those locations?

How to deploy

  • Mapping workloads to hyperconverged and disaggregated infrastructure
  • Integrating xPUs into workload flows
  • Applying offload and acceleration elements within an optimized solution

Download PDF

xPU Deployment and Solutions Deep Dive
10:00 am PT / 1:00 pm ET

Organizations are adopting containers at an increasingly rapid rate. In fact, there are few organizations that haven’t implemented containers in their environment today.
 
Storage implications for Kubernetes will be the topic of this live webcast where storage experts from SNIA and Kubernetes experts from the Cloud Native Computing Foundation (CNCF) will discuss:

  • Key storage attributes of cloud native storage for Kubernetes
  • How cloud native storage is used in Kubernetes environments
  • Workloads and real-world use cases

 This webcast will help you better understand and address storage and persistent data challenges in a Kubernetes environment.
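As a small, concrete illustration of how cloud native storage is consumed, the snippet below requests persistent storage the standard Kubernetes way: by creating a PersistentVolumeClaim with the official Python client. The claim names only a size, an access mode, and a placeholder storage class ("standard" is an assumption here); which CSI driver actually provisions the volume is exactly the cloud native storage choice this webcast explores.

    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running inside a pod

    # A PersistentVolumeClaim lets a workload ask for storage without naming
    # the backend; the cluster's StorageClass / CSI driver satisfies the claim.
    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "demo-data"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "standard",  # placeholder; depends on the cluster
            "resources": {"requests": {"storage": "10Gi"}},
        },
    }

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )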

Download PDF

Read Q&A Blog

Kubernetes is Everywhere – What About Cloud Native Storage?
10:00 am PT / 1:00 pm ET

Edge is the new frontier of compute and data in today’s world, driven by the explosive growth of mobile devices, work from home, digital video, smart cities, and connected cars. An increasing percentage of data is generated and processed at the edge of the network. With this trend comes the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center. This webcast will cover:

  • The increasing need to do more at the edge across compute, storage and networking
  • The rise of intelligent edge locations
  • Different solutions that provide faster processing or data movement at the edge
  • How computational storage can speed up data processing and transmission at the edge
  • Security considerations for edge processing

We look forward to having you join us to cover all this and more. We promise to keep you on the edge of your virtual seat!

Download PDF

Storage Life on the Edge: Accelerated Performance Strategies
9:00 am PT / 12:00 noon ET

Which do you think is the more secure way to remove data from a hard drive: putting it through a shredder, or performing an instant secure erase? The answer might surprise you! Companies go to great lengths to secure their data and prevent confidential information from being made available to others. When a company is done using its ICT equipment, including its storage devices, it is important to render the data inaccessible. Sanitization is a process or method to render access to target data on storage media infeasible for a given level of effort. SSDs and HDDs have various security features that make this sanitization quick, secure, and verifiable.

In this webcast, we will go over the different types of sanitization defined in the new IEEE P2883 Specification for Sanitization of Storage and cover easy ways to perform “Clear,” “Purge,” and “Destruct” in mainstream storage interfaces like SATA, SAS, and NVMe. We will discuss recommendations for the verification of sanitization to ensure that devices meet stringent requirements, and explain how the purge technique for media sanitization can be quick, secure, reliable, and verifiable while, most importantly, keeping the device in one piece.
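To make the quick, secure, and verifiable point concrete for NVMe, the sketch below starts a Sanitize (Block Erase) operation with the open source nvme-cli tool and then reads the sanitize status log page to confirm completion. Treat it as an illustration only: the command destroys all user data on the drive, requires root privileges, and flag spellings vary between nvme-cli versions (check nvme sanitize --help on your system).

    import subprocess

    DEV = "/dev/nvme0"  # NVMe controller device; adjust for your system

    # Start a Sanitize operation using the Block Erase action.
    # WARNING: this irreversibly destroys all user data on the device.
    subprocess.run(["nvme", "sanitize", DEV, "--sanact=2"], check=True)

    # Sanitize runs in the background; the sanitize status log page reports
    # progress and the most recent result, which is how completion is verified.
    log = subprocess.run(["nvme", "sanitize-log", DEV],
                         check=True, capture_output=True, text=True)
    print(log.stdout)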

Download PDF

Is the Data Really Gone? A Primer on the Sanitization of Storage Devices
11:00 am PT / 2:00 pm ET

In our first webcast, “SmartNICs and xPUs: Why is the Use of Accelerators Accelerating,” we discussed the trend of deploying dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by multiple names, such as SmartNIC, DPU, IPU, APU, and NAPU.

This second webcast in the series takes a deeper dive into the accelerator offload functions of the xPU. We’ll discuss what problems xPUs are designed to solve, where in the system they live, and the functions they implement, focusing on:

  • Network Offloads
  • Security Offloads
  • Compute Offloads
  • Storage Offloads

Download PDF

Read Q&A Blog

xPU Accelerator Offload Functions