SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Today’s SSDs are reaching extreme capacities, with some approaching a petabyte of storage available to hosts. However, these SSDs commonly rely on embedded computing resources and embedded DRAM controllers to provide access to this massive quantity of storage. Some of the largest drives leverage an increased Indirection Unit (IU) size to stretch the embedded resources of the drive.
This presentation will define an IU and explain how the embedded environment, together with IUs, gives rise to the 1:1000 DRAM-to-NAND rule of thumb. We’ll correct this rule of thumb together and explore its limitations in the embedded environment. The presentation will then review some of the primary architectural options available to extend the limits of these embedded SSD environments. For example, an SSD might be split into several sub-domains, or different DRAM selections and interaction models might be utilized. By the end of the presentation, attendees will know everything they need to start a career as an SSD architect for DRAM interactions. Most importantly, attendees will understand why changes are coming for these extreme-capacity SSDs, along with suggested SW changes to facilitate large-IU drives in their storage stack.
Understanding DRAM impacts on SSDs and why they force increasing IU sizes
Trade-offs available for addressing increased IU sizes
Suggested host SW changes to facilitate increased IU sizes
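As rough context for the 1:1000 rule of thumb mentioned above, here is a minimal back-of-the-envelope sketch of how the indirection (L2P) table drives DRAM sizing and how a larger IU relaxes it. The 4-byte entry size and the capacities shown are illustrative assumptions, not figures from the talk.

```python
# Rough L2P (logical-to-physical) table sizing for an SSD.
# Assumption: one table entry per IU; a ~4-byte entry is illustrative only.

def l2p_dram_bytes(nand_bytes: int, iu_bytes: int, entry_bytes: int = 4) -> int:
    """Approximate DRAM needed just for the indirection (L2P) table."""
    return (nand_bytes // iu_bytes) * entry_bytes

TB, GB = 10**12, 10**9

for capacity_tb in (32, 128, 1024):            # 1024 TB is a petabyte-class drive
    for iu in (4096, 16384, 65536):            # 4 KiB, 16 KiB, 64 KiB IUs
        dram = l2p_dram_bytes(capacity_tb * TB, iu)
        ratio = (capacity_tb * TB) / dram      # NAND bytes per DRAM byte
        print(f"{capacity_tb:5d} TB NAND, {iu // 1024:3d} KiB IU -> "
              f"~{dram / GB:8.1f} GB DRAM (about 1:{ratio:,.0f} DRAM:NAND)")
```

With a 4-byte entry per 4 KiB IU, the table costs roughly 1 byte of DRAM per 1 KB of NAND, which is where the 1:1000 figure comes from; quadrupling the IU cuts the table DRAM by roughly 4x, at the cost of coarser mapping.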
As the need for PCI Express® (PCIe®) cabling has grown, PCI-SIG® has kept pace with the industry demand for high-speed cabling solutions in applications like disaggregated servers, data center networking, storage, AI, automotive and more. In May 2024, PCI-SIG released the CopprLink™ Internal and External specifications for PCIe 5.0 and 6.0 technology, providing internal and external cabling solutions at 32.0 and 64.0 GT/s.
The release of the CopprLink Internal and External specifications is part of PCI-SIG’s mission to offer PCIe cabling solutions that meet a wide variety of evolving industry needs. The CopprLink Internal cabling specification leverages the SNIA SFF-TA-1016 connector form factor for target applications like storage and data center compute nodes. The CopprLink External specification leverages the SNIA SFF-TA-1032 connector form factor, targeting applications like storage and data center AI/ML use cases.
PCI-SIG is also developing a new external cabling specification based on the well-established SNIA SFF-8614 connector form factor, more commonly known as “MiniSAS-HD” for its typical use in storage applications. The completion of the PCIe External Cabling Specification supports three successive generations of PCIe data rates (8.0, 16.0, and 32.0 GT/s) and positions PCIe technology as the fastest option among remote storage alternatives in the Enterprise market.
By providing internal and external cables of varying speeds and reach that utilize existing connector form factors, PCI-SIG is ensuring the industry can choose the best PCIe cabling solution for its applications. Utilizing existing, well-established connector form factors speeds adoption and time to market and reduces cost. Along with the CopprLink Internal and External specifications, PCI-SIG is exploring an optical interconnect for data-demanding applications like AI, cloud computing and HPC.
Attendees will receive an overview of current PCI-SIG cabling efforts, including CopprLink Internal and External Specifications and PCIe External Cabling Specification. They will also learn more about the application space for PCIe cabling, including data center networking, AI, storage and more.
In this talk, we'll delve into the transformative potential of selective write-grouping within software-defined storage (SDS) systems, setting it against the backdrop of the Popular Data Concentration (PDC) approach. Both strategies are compatible with Shingled Magnetic Recording (SMR) and Conventional Magnetic Recording (CMR) disks with erasure coding. Write-grouping confines write operations to a limited subset of drives, allowing inactive drives to power down and cutting energy consumption by up to 43% in SMR configurations. The write-grouping approach eliminates the staging phase of PDC, enabling immediate partitioning and placement of data onto specific groups upon arrival. Additionally, we will introduce the "diagonal" algorithm as an innovative method to rebalance write-group data during SDS capacity expansion and disaster recovery (DR) events. Join us to explore the challenges of power-saving, scaling, and performance in SDS environments.
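As a rough illustration of the write-grouping idea, the sketch below routes incoming writes to a single active group of drives so the remaining groups can stay powered down. The group size, rotation trigger, and class names are hypothetical and are not taken from the presented system; erasure-coded striping within a group is also omitted for brevity.

```python
# Minimal sketch of selective write-grouping: new writes land only on the
# active group of drives, letting the other groups spin down. All sizing,
# naming, and the rotation policy here are illustrative assumptions.

class WriteGroupPlacer:
    def __init__(self, drives, group_size):
        # Partition the drive pool into fixed-size write groups.
        self.groups = [drives[i:i + group_size]
                       for i in range(0, len(drives), group_size)]
        self.active = 0                              # currently active group
        self.bytes_written = {d: 0 for d in drives}

    def place(self, obj_id: str, size_bytes: int) -> str:
        """Place an object on a drive within the active group."""
        group = self.groups[self.active]
        drive = group[hash(obj_id) % len(group)]     # simple static mapping
        self.bytes_written[drive] += size_bytes
        return drive

    def rotate_if_full(self, group_capacity_bytes: int) -> None:
        """When the active group fills, advance so it can power down."""
        used = sum(self.bytes_written[d] for d in self.groups[self.active])
        if used >= group_capacity_bytes:
            self.active = (self.active + 1) % len(self.groups)


placer = WriteGroupPlacer([f"drive{i}" for i in range(12)], group_size=4)
print(placer.place("object-42", 8 * 2**20))          # lands in group 0
placer.rotate_if_full(group_capacity_bytes=16 * 2**40)
```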
UCIe™ (Universal Chiplet Interconnect Express™) is an open industry standard that defines an interconnect offering high-bandwidth, low-latency, power-efficient, and cost-effective on-package connectivity between chiplets.
The UCIe 1.1 Specification delivers valuable improvements that extend reliability mechanisms, provide enhancements for the automotive industry, enable lower-cost implementations, and define compliance and interoperability testing specifications to establish a vibrant chiplet ecosystem.
During this presentation, Debendra Das Sharma, UCIe Consortium Chairman, will provide an update on the UCIe Consortium and highlight progress in the specification.
Thinking of CXL® as “just a bus” does a disservice to its true value as an opportunity to virtualize system resources. NVMe Over CXL™ is an abstraction of memory resources that combines bulk storage and memory, along with persistence, to provide a highly efficient combined resource available over multiple APIs including NVMe, DAX, HDM, BAEBI, etc. Data center power is already at crisis levels, largely due to the inefficiency of data movement, in which far less than 1% of the data moved is actually used. NVMe Over CXL allows for a 97% or greater reduction in data movement through the fabric, saving power and enhancing system performance. An optional data persistence mode brings NVDIMM-N style backup to CXL, improved over traditional approaches by allowing the host to define regions of persistence.
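As a hedged illustration of where a figure like “97% or greater” can come from, the short calculation below compares pulling a full block through the fabric against moving only the cachelines a host actually touches. The 4 KiB block and 64-byte cacheline granularities are assumptions for the example, not numbers from the talk.

```python
# Illustrative arithmetic only: bytes moved when fetching a whole block
# versus fetching just the cachelines that are actually used.

BLOCK_BYTES = 4096        # assumed block-granular transfer size
CACHELINE_BYTES = 64      # assumed cacheline-granular (CXL.mem style) access

def traffic_reduction(used_cachelines: int) -> float:
    """Fraction of fabric traffic avoided by moving only the used cachelines."""
    return 1.0 - (used_cachelines * CACHELINE_BYTES) / BLOCK_BYTES

for used in (1, 2, 4):
    print(f"{used} cacheline(s) used per 4 KiB block -> "
          f"~{traffic_reduction(used):.1%} less data moved")
# 1 cacheline -> ~98.4% less, 2 -> ~96.9% less, 4 -> ~93.8% less
```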
Infrastructure Processing Units (IPUs) have traditionally been deployed as PCIe endpoints in servers and used to offload networking/storage/security duties from the CPU. An alternative deployment mode for IPUs is a stand-alone mode in which the IPU acts as a server, presenting PCIe root-port interfaces to NVMe SSDs either directly or via a PCIe switch. Along with the IPU’s high-speed networking capabilities and the ability to run a Linux OS on its embedded Arm cores, the IPU has all the necessary ingredients to be a highly capable low-cost, low-power, and compact server. As with traditional servers, IPU servers may be clustered together to form an extensible platform on which scale-out applications can run.
The Apache Cassandra NoSQL database is one example application. Cassandra can scale to thousands of nodes, so any per-server reduction in cost, power, and size has a significant magnifying effect on data center efficiency. Another characteristic of Cassandra that makes it well suited to IPU server clusters is that it works better with many thin nodes (i.e., low-storage-capacity nodes) than with fewer fat ones, minimizing compaction and garbage collection overhead. The low storage capacity requirement negates the need for an intervening PCIe switch between the IPU and the SSDs, further reducing cost and complexity. Other scale-out applications, such as ScyllaDB or Ceph block/object/file storage, are also potential candidates to run on the cluster.
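To make the thin-node argument concrete, here is a minimal sizing sketch comparing many low-capacity nodes against fewer high-capacity ones holding the same total dataset. The dataset size, replication factor, and rewrite throughput are illustrative assumptions rather than numbers from the talk.

```python
# Illustrative sketch: with a fixed total dataset, thin nodes each hold less
# data, so any per-node maintenance task that rewrites the node's data
# (compaction, rebuild) touches far less of it. All numbers are assumptions.

DATASET_TB = 200          # assumed total logical data
REPLICATION = 3           # assumed Cassandra replication factor

def per_node_tb(node_count: int) -> float:
    return DATASET_TB * REPLICATION / node_count

def full_rewrite_hours(node_count: int, mb_per_s: float = 300.0) -> float:
    """Time for one node to rewrite all of its data at an assumed rate."""
    return per_node_tb(node_count) * 1e12 / (mb_per_s * 1e6) / 3600

for nodes, label in ((20, "fat"), (100, "thin")):
    print(f"{nodes:3d} {label} nodes: {per_node_tb(nodes):5.1f} TB/node, "
          f"~{full_rewrite_hours(nodes):5.1f} h to rewrite one node's data")
```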
This presentation covers the development and path-finding work required to build and manage a multi-node IPU-based cluster and details performance tuning techniques for lowering database tail latency while keeping throughput high. As AI/ML applications drive ever-increasing storage capacity demands, clustered IPUs could provide a timely solution.