OPI (Open Programmable Infrastructure)

A new class of cloud and datacenter infrastructure is emerging in the marketplace. This new infrastructure element, often referred to as a Data Processing Unit (DPU) or Infrastructure Processing Unit (IPU), takes the form of a server-hosted PCIe add-in card or on-board chip(s) containing one or more ASICs or FPGAs, usually anchored around a single powerful SoC device. These DPU/IPU-like devices have their roots in the evolution of SmartNIC devices but separate themselves from that legacy in several important ways.

Hardware Accelerated ZFS using Computational Storage

As hardware layers in storage systems (such as network and storage devices) continue to increase in performance, it is vital that the IO software stack does not fall behind and become the bottleneck. Leveraging the capabilities of computational storage devices, such as data processing units (DPUs), allows the IO software stack to accelerate CPU- and memory-bandwidth-constrained operations and fully take advantage of the storage hardware in the system.
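
As an illustration of the offload idea described above, the sketch below models a write path in which checksum and compression, two classically CPU- and memory-bandwidth-bound operations, are dispatched to a pluggable engine. The DpuEngine class is a hypothetical stand-in for a computational storage device, not ZFS code or any vendor API; a real implementation would submit these operations to hardware queues.

```python
import zlib

class CpuEngine:
    """Baseline: checksum and compression run on the host CPU."""
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)
    def checksum(self, data: bytes) -> int:
        return zlib.crc32(data)

class DpuEngine(CpuEngine):
    """Hypothetical stand-in for a computational storage device.
    A real implementation would hand these operations to DPU
    hardware instead of computing them on the host."""
    pass

def write_block(engine, data: bytes) -> dict:
    # The IO software stack stays identical; only the engine changes,
    # which is what makes the offload transparent to upper layers.
    compressed = engine.compress(data)
    return {"payload": compressed, "crc": engine.checksum(compressed)}

record = write_block(DpuEngine(), b"hello" * 1000)
assert zlib.decompress(record["payload"]) == b"hello" * 1000
```

The point of the sketch is the seam: upper layers call the same interface whether the work lands on host cores or on the device.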

Next-Generation Storage will be built with DPUs instead of CPUs

DPUs (data processing units) are an exciting new category of processor that complements CPUs and GPUs inside data centers. DPUs are fast at data-centric tasks while retaining the full programmability of CPUs. DPUs have typically been used to offload networking and security functions from compute servers, but until recently have not been used to build storage systems. In this talk, we will briefly introduce DPUs, then highlight their use in building storage systems, an emerging use case.

Storage Virtualization and HW-agnostic Acceleration using IPDK and xPU

Local disk emulation using domain-specific hardware presents a great opportunity for innovation in the storage domain. Standard host-side drivers such as NVMe or virtio-blk, and legacy applications, can be enabled to access disaggregated storage at scale using state-of-the-art protocols like NVMe/TCP, while increasing performance through offload of storage services to the hardware (SmartNIC/DPU/IPU/xPU).
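
A minimal sketch of the emulation idea, under illustrative assumptions: the host sees an ordinary block-device interface, while reads and writes are silently forwarded to a remote backend. The RemoteBackend dict below stands in for a disaggregated NVMe/TCP target; the class and method names are invented for this sketch and are not part of virtio or the NVMe specification.

```python
BLOCK_SIZE = 512

class RemoteBackend:
    """Stand-in for a disaggregated storage target reached over
    the network (e.g. via NVMe/TCP in a real deployment)."""
    def __init__(self):
        self.blocks = {}
    def read(self, lba: int) -> bytes:
        # Unwritten blocks read back as zeroes, like a thin volume.
        return self.blocks.get(lba, b"\x00" * BLOCK_SIZE)
    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

class EmulatedDisk:
    """Host-facing block device. The host driver (NVMe or
    virtio-blk in the real system) never learns that IO is
    actually served over the network."""
    def __init__(self, backend: RemoteBackend):
        self.backend = backend
    def submit_write(self, lba: int, data: bytes) -> None:
        assert len(data) == BLOCK_SIZE
        self.backend.write(lba, data)
    def submit_read(self, lba: int) -> bytes:
        return self.backend.read(lba)

disk = EmulatedDisk(RemoteBackend())
disk.submit_write(7, b"a" * BLOCK_SIZE)
assert disk.submit_read(7) == b"a" * BLOCK_SIZE
```

In the hardware version of this picture, the EmulatedDisk role runs on the SmartNIC/DPU/IPU/xPU itself, which is what lets legacy host software stay unmodified.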

Accelerating FaaS/container Image Construction via IPU

NOTE: this paper was developed by Ziye Yang, a Staff Software Engineer at Intel, and is being presented by his colleague Yadong Li, a Principal Engineer in the Ethernet Products Group at Intel. In many use cases, FaaS applications run or are deployed in container or virtual machine environments for isolation purposes.

Disaggregated NVMe/TCP Storage Using an Infrastructure Processing Unit (IPU)

In this presentation, we will describe a complete end-to-end Software Defined Storage (SDS) solution for cloud data centers using Infrastructure Processing Units (IPUs). IPUs provide a high-performance NVMe interface to the host, abstracting away the details of networked storage and enabling storage disaggregation and bare-metal hosting. NVMe/TCP is a high-performance protocol that is widely deployed because of its ease of deployment and better scalability in large scale-out networks.
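
To make the NVMe/TCP attach step concrete, the sketch below assembles the parameters an initiator needs to reach a remote subsystem: a transport address, the conventional NVMe/TCP service port 4420, and a subsystem NQN. The field names mirror the options of common initiator tooling such as nvme-cli, but the helper function, address, and NQN are illustrative placeholders, not values from a real deployment.

```python
def connect_params(traddr: str, nqn: str, trsvcid: int = 4420) -> dict:
    """Build the descriptor an NVMe/TCP initiator needs to attach a
    remote subsystem. 4420 is the conventional NVMe/TCP port."""
    if not nqn.startswith("nqn."):
        raise ValueError("subsystem NQN must begin with 'nqn.'")
    return {
        "transport": "tcp",
        "traddr": traddr,        # IP address of the storage target
        "trsvcid": str(trsvcid), # TCP service port
        "nqn": nqn,              # NVMe Qualified Name of the subsystem
    }

# Illustrative target address and NQN only.
params = connect_params("192.0.2.10", "nqn.2016-06.io.spdk:cnode1")
assert params["trsvcid"] == "4420"
```

In the IPU-based design described above, this connection state lives on the IPU rather than in the host OS, so the host simply sees a local NVMe device.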

DPU as a Storage Initiator for bare metal performance and virtualization

With NVMe/TCP and disaggregated storage gaining rapid market adoption, it is clearer than ever that the Storage Initiator (SI) needs to be more performant and efficient to provide applications with performance and latency like that of Direct Attached Storage (DAS). At the same time, there is a growing need among virtualized applications for secure transport and near-bare-metal storage performance. The host CPU can save a significant number of cycles by offloading NVMe/TCP to a DPU-based Storage Initiator (SI).

Is storage orchestration a headache? Try infrastructure programming

Connecting remote storage to a compute node requires adding and configuring software stacks for "storage over network" on the compute node. This storage software consumes a significant number of cores on the compute node. The complexity and workload can instead be moved to Infrastructure Processing Unit (IPU) devices. IPU cores enable developing flexible software that can be further accelerated using hardware offloads for storage workloads. The session will discuss target-agnostic frameworks and storage use cases that may be easier to deliver with devices like IPUs.

SPDK and Infrastructure Offload

Infrastructure offload based around NVMe-oF can deliver the performance of direct-attached storage with the benefits and composability of shared storage. The Storage Performance Development Kit (SPDK) is a set of drivers and libraries uniquely positioned to demonstrate how projects like the Infrastructure Programmer Development Kit (IPDK) can provide a vendor-agnostic, high-performance, and scalable framework. This session will discuss how the SPDK NVMe-oF target has evolved to enable infrastructure offload.
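
As a sketch of what configuring such a target looks like, the snippet below frames the JSON-RPC requests that SPDK's rpc.py script sends to stand up a minimal NVMe-oF/TCP target. The method names (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are real SPDK RPCs, but the NQN, address, and parameter values are illustrative, and nothing here talks to a running SPDK instance; it only builds the request payloads.

```python
import json

def rpc(method: str, params: dict, req_id: int) -> str:
    """Frame one JSON-RPC 2.0 request of the kind SPDK accepts."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

NQN = "nqn.2016-06.io.spdk:cnode1"  # illustrative subsystem name

# Minimal NVMe-oF/TCP target bring-up as a sequence of RPC payloads:
# create the TCP transport, a RAM-backed bdev, a subsystem, attach
# the bdev as a namespace, and open a TCP listener for initiators.
requests = [
    rpc("nvmf_create_transport", {"trtype": "TCP"}, 1),
    rpc("bdev_malloc_create",
        {"num_blocks": 8192, "block_size": 512, "name": "Malloc0"}, 2),
    rpc("nvmf_create_subsystem", {"nqn": NQN, "allow_any_host": True}, 3),
    rpc("nvmf_subsystem_add_ns",
        {"nqn": NQN, "namespace": {"bdev_name": "Malloc0"}}, 4),
    rpc("nvmf_subsystem_add_listener",
        {"nqn": NQN,
         "listen_address": {"trtype": "TCP", "traddr": "192.0.2.10",
                            "trsvcid": "4420"}}, 5),
]
assert json.loads(requests[0])["method"] == "nvmf_create_transport"
```

In the offload scenario the talk describes, this same control plane runs against an SPDK target hosted on the DPU/IPU rather than on a storage server's host CPU.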
