Unlocking CXL's Potential Q&A

SNIA CMS Community

Apr 22, 2025

Compute Express Link® (CXL) is a groundbreaking technology that expands server memory beyond established limits and boosts bandwidth. CXL enables seamless memory sharing, reduces costs by optimizing resource utilization, and supports different memory types. In our Unlocking CXL’s Potential webinar, our speakers Arthur Sainio, SNIA Persistent Memory Special Interest Group Co-Chair, Jim Handy of Objective Analysis, Mahesh Natu of the CXL Consortium, and Torry Steed of SMART Modular Technologies discussed how CXL is transforming computing systems with its economic and performance benefits, and explored its future impact across various market segments.

You can learn more about CXL development by attending CXL DevCon, April 29-30, 2025, in Santa Clara, CA. And, as discussed in our webinar, you can access and program CXL memory modules located in the SNIA Innovation Center using the training materials provided in the virtual SNIA Programming Workshop and Hackathon at www.snia.org/pmhackathon.

The audience was highly engaged and asked many interesting questions. Our Q&A below provides the answers. Feel free to reach out to us at askcms@snia.org if you have more questions.

Q: How will the connections of a CXL switch be made physically possible with multiple hosts and multiple endpoints in real time?

A: It’s similar to PCI Express, in that you can have switches with multiple upstream ports that connect to multiple CPUs and multiple downstream ports that connect to multiple devices. This is a well-established approach that PCIe has championed, so we don't see any challenges in making those connections in the architecture.

Q: Is there standards-based work being done for CXL/PCIe over co-packaged optics for disaggregated computing? 

A: That’s a really good question. There is work being done in PCI Express on transporting PCIe over optics (an optical interface), and once that happens we will probably just leverage it. CXL leverages what PCIe does for the physical layer and form factors whenever possible, so I expect the same will happen here; that is where we are heading.

Q: Will the Trusted Execution Environment (TEE) Security Protocol (TSP) be used with Intel Trust Domain Extensions (TDX) and AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP)?

A: Both technologies work with TSP. We had great participation from Intel and AMD when defining TSP. What TSP does is define the interface between the CPU side of a technology like TDX and the device. That piece is not part of the CPU architecture definition because it sits outside of it, and that is the piece TSP covers. Device vendors can build devices that follow the TSP specification and will be compatible with both of these technologies. There are other CPU vendors with similar confidential computing technologies, and I think they can also be compatible with TSP.

Q: What is TDISP and what is the difference from TSP? 

A: TDISP, or the TEE Device Interface Security Protocol, was developed by PCI Express. It solves the same problem for PCI Express, meaning it allows technologies like TDX to inspect a PCIe device, verify the device is in a healthy, good condition, and bring it into the trust boundary of the TD. TSP does something similar for CXL devices. Obviously, with CXL being coherent, it is a different problem to solve, but I think we have solved it, and it's ready to be deployed.

Q: Why are we not seeing CXL Type-1 or -2 adoption by the industry?

A: We think it is coming; it's just not here yet. The big interest initially has been Type-3, which is pure memory expansion, but we expect storage to move to CXL eventually as well. There are definite benefits there, and we are also starting to see memory with processing, so Type-1 and Type-2 are coming in the next wave of CXL adoption.

The whole ecosystem has to come together, and CXL is still quite new. We’re sure a great deal of work that has not been announced is under way, mostly by hardware manufacturers, but an awful lot of software support is also needed to make everything fall into place. All of that comes together slowly, and the forecast shown in the webinar starts with very modest growth simply because the rest of the support network for CXL needs to be put together before CXL can really take off.

Q: We see in-memory databases (IMDBs) as a big use case for CXL. But we have not seen any announcements from SAP HANA, Oracle or MS-SQL, or anybody adopting CXL with in-memory databases. Why is that? 

A: We have not seen anything specific to date. We believe SAP HANA has published a paper and may have some work going on; see https://www.vldb.org/pvldb/vol17/p3827-ahn.pdf. IMDBs want as much memory capacity as possible, so CXL is definitely something they would benefit from.

Q: Do we expect GPUs to support CXL? Without that, the AI use case seems highly limited.

A: We don't really expect GPUs to support CXL and talk to CXL memory directly. Once memory is fully disaggregated, it is possible you could have a layer where data is shared that way. The bigger question is whether memory expansion for the system itself aids AI use cases. We are seeing evidence that it does, but how much that plays out remains to be seen.

We haven't really spoken with GPU vendors about this, but our understanding is that one of the benefits of CXL is that it makes it easy to fill either the Graphics Double Data Rate (GDDR) memory on the GPU board or to bring in data that can go into the High Bandwidth Memory (HBM), so there seems to be an opportunity there even if it's not something people are talking about now.

There are two GPU use cases we see right now. The first is when GPUs want more memory: they could use the unordered I/O (UIO) feature of CXL to reach directly into CXL Type-3 memory and get that memory expansion. The second is the GPU using CXL coherency to communicate with the CPU, so the two are cache coherent and can quickly exchange data back and forth. Again, both require heavy lifting and a lot of software enabling, but I think those use cases do exist. It just takes effort to enable them and get the benefits.

Finally, we hope you will join us for more SNIA webinars. Visit https://www.snia.org/webinars for the complete list of scheduled and on-demand webinars.  And check out the SNIA Educational Library for great content on CXL, memory, and much more.

Emerging Memories Branch Out – a Q&A

SNIA CMS Community

Feb 19, 2024

Our recent SNIA Persistent Memory SIG webinar explored in depth the latest developments and futures of emerging memories – now found in multiple applications both as stand-alone chips and embedded into systems on chips. We got some great questions from our live audience, and our experts Arthur Sainio, Tom Coughlin, and Jim Handy have taken the time to answer them in depth in this blog. And if you missed the original live talk, watch the video and download the PDF here.

Q: Do you expect Persistent Memory to eventually gain the speeds that exist today with DRAM?

A: It appears that this has already happened with the hafnium ferroelectrics that SK Hynix and Micron have shown. Ferroelectric memory is a very fast technology, and with very fast write cycles there should be every reason for it to go that way. With the hooks that are in CXL™, though, that shouldn’t be much of a problem since it’s a transactional protocol. The reads, then, will probably rival DRAM speeds for MRAM and for resistive RAM (MRAM might get up to DRAM speeds with its writes too). In fact, there are technologies like spin-orbit torque and even voltage-controlled magnetic anisotropy that promise higher performance and also low write latency for MRAM technologies. Most applications are read intensive, so the read is where the real focus is, but it does look like we are going to get there.

Q: Are all the new memory technology protocols (electrically) compatible with DRAM interfaces like DDR4 or DDR5? If not, shouldn’t those technologies have lower chances of adoption, since they add a dependency on a custom memory controller?

A: That’s just a logic problem. There’s nothing innate about any memory technology that couples it tightly with any kind of a bus, and because NOR Flash and SRAM are the easy targets so far, most emerging technologies have used a NOR Flash or SRAM type interface. However, in the future they could use DDR. There are some special twists because you don’t have to refresh emerging memory technologies, but in general they could use DDR.

One of the beauties of CXL, though, is that you can put anything you want, with any kind of interface, on the other side of CXL, and CXL erases the differences. It moderates them, so although they may have different performance, that is hidden behind the CXL network. The burden then goes onto the CXL controller designers to make sure that those emerging technologies, whether MRAM or others, can be adopted behind the CXL protocol. Our expectation is that a few companies will provide CXL controllers early on that have some kind of specialty interface on them, whether for MRAM or resistive RAM or something like that, and that these will eventually move their way into the mainstream. Another interesting thing about CXL is that we may even see a hierarchy of different memories within CXL itself, which can also include domain-specific processors or accelerators that operate close to memory, so there are very interesting opportunities there as well. If you can do processing close to memory, you lower the amount of data you are moving around and save a lot of power for the computing system.

Q: Emerging memory technologies have a byte-level direct access programming model, in contrast to block-based NAND Flash. Do you think this new programming model will eventually replace NAND Flash, since it reduces the overhead and the power of transferring data?
A: It’s a question of cost, and that’s something that was discussed very much in our webinar. If you haven’t got a cost that’s comparable to NAND Flash, then you can’t really displace it. But as far as the interface is concerned, the NAND interface is incredibly clumsy. All of these technologies have byte interfaces rather than a block interface, and they can also write in place – they don’t need a pre-erased block to write into. From a technical standpoint that is a huge advantage, and now it’s just a question of whether or not they can get the cost down – which means getting the volume up.

Q: Can you discuss the High Bandwidth Memory (HBM) trends? What about memories used with Graphics Processing Units (GPUs)?

A: That topic isn’t the subject of this webinar, which is about emerging memory technologies. But, to comment, we don’t expect emerging memory technologies to adopt an HBM interface anytime in the near future, because HBM springboards off DRAM and, as we discussed on one of the slides, DRAM faces a transition – we don’t know when – to another emerging memory technology. We’ve put it into the early 2030s in our chart, but it could be much later than that, and HBM won’t convert to an emerging memory technology until long after that. However, HBM involves stacking of chips, and that ultimately could happen. It’s a more expensive process right now – a way of getting a lot of memory very close to a processor – and if you look at some of the NVIDIA applications, for example, this is an example of chiplet technology, and HBM can play a role in those chiplet technologies for GPUs. That’s another area that’s going to be using emerging memories as well – in the chiplets. While we didn’t talk about that much in this webinar, it is another place for emerging memories to play a role.

There’s one other advantage to using an emerging memory that we did not talk about: emerging memories don’t need refresh. As a matter of fact, none of the emerging memory technologies needs refresh. More power is consumed by DRAM refreshing than by actual data accesses. So, if you can cut that out, you might be able to stack more chips on top of each other and get even more performance, but we still wouldn’t see that as a reason for DRAM to be displaced early on in HBM and then later on in the mainstream DRAM market. Although, if you’re doing all those refreshes, there’s a fair amount of potential heat generation, which may have packaging implications as well. So, there may be some niche areas in there which could be some of the first ways in which these emerging memories are used for those kinds of applications, if the performance is good enough.

Q: Why have some memory companies failed? Apart from the cost/speed considerations you mention, what are the other minimum envelope features that a new emerging memory should have? Is capacity (I heard 32Gbit multiple times) one of those criteria?

A: Shipping a product is probably the single most important activity for success. Companies don’t have to make a discrete or standalone SRAM or emerging memory chip, but they do need their technology to be adopted by somebody who is shipping something if they’re not going to ship it themselves. That’s what we see in the embedded market as a good path for emerging memory IP: to get used and to build up volume.
And as the volume of, and comfort with, manufacturing those memories increases, it opens up the possibility down the road of lower costs with higher-volume standalone memory as well.

Q: What are the trends in DRAM interfaces? Would you discuss CXL’s role in enabling composable systems with DRAM pooling?

A: CXL, especially CXL 3.0, is particularly pointed at pooling. Pooling is going to be an extremely important development in memory with CXL, and it’s one of the reasons why CXL will probably proliferate. It allows you to allocate memory that is not attached to particular server CPUs and therefore to make more efficient and effective use of those memories. We mentioned earlier that right now that memory is DRAM, with some NAND Flash products out there too. But this could expand into other memory technologies behind CXL within the CXL pool, as well as accelerators (domain-specific processors) that do some operations closer to where the memory lives. So, we think there are a lot of possibilities in that pooling for the development and growth of emerging memories as well as conventional memories.

Q: Do you think molecular-based technologies (DNA or others) can emerge in the coming years as an alternative to some of the semiconductor-based memories?

A: DNA and other molecular memory technologies are at a relatively early stage, but there are people making fairly aggressive plans for what they can do with those technologies. We think the initial market for those molecular memories is not in this high-performance memory application; especially with DNA, the potential density of storage and the fact that you can make lots of copies of content using genomic processes make them very attractive for archiving applications. The things we’ve seen are mostly in those areas because of the performance characteristics. The potential density they’re looking at is aimed at that lower part of the market, so it has to be very, very cost effective, but the possibilities are there. Again, as with the emerging high-performance memories, you still have economies of scale to deal with – if you can’t scale fast enough, the cost won’t come down enough to compete in those areas. So it faces somewhat similar challenges, though in a different part of the market. Earlier in the webcast, when showing the orb chart, we said that for something to fit into the computing storage hierarchy it has to be cheaper than the next faster technology and faster than the next cheaper technology. DNA is not a very fast technology, so that automatically says it has to be really cheap to catch on, and that puts it in a very different realm than the emerging memories we’re talking about here. On the other hand, you never know what someone’s going to discover, but right now the industry doesn’t know how to make fast molecular memories.

Q: What is your intuition on how tomorrow’s highly dense memories might impact non-load/store processing elements such as AI accelerators? As model sizes continue to grow and energy density becomes more of an issue, it would seem like emerging memories could thrive in this type of environment. Your thoughts?

A: Any memory would thrive in an environment where there is an unbridled thirst for memory, as artificial intelligence (AI) currently has.
But AI is undergoing some pretty rapid changes, not only in the number of parameters that are examined, but also in the models being used. We recently read a paper written by Apple* in which they found ways of winnowing down the data used for a large language model into something that would fit into an Apple MacBook Pro M2, and they were able to get good performance by doing that. They really accelerated things by ignoring data that didn’t make any difference. So, if you take that kind of approach, and those researchers keep working on the problem that way and take it to the extreme, then you might not need all that much memory after all. But still, if memory were free, I’m sure there’d be a ton of it out there, and it’s just a question of whether these memories can get cheaper than DRAM so that they look free compared to what things look like today.

There are three interesting elements to this. First, CXL, in addition to allowing mixing of memory types, again allows you to put domain-specific processors close to the memory. Perhaps those can do some of the processing that’s part of the model, in which case it would lower the energy consumption. Second, CXL supports different computing models than what we traditionally use. Of course there is quantum computing, but there are also neural networks that actually use the memory as a matrix multiplier, and those are using these emerging memories for that technology, which could be used for AI applications. The third element, somewhat hidden behind this, is that spin tunnelling is changing processing itself: right now everything is current-based, but there is work going on in spintronic-based devices that, instead of using current, would use the spin of electrons to move data around, in which case we can avoid resistive heating and our processing could run a lot cooler and use less energy. So, there are a lot of interesting things buried in the different technologies being used for these emerging memories that could have even greater implications for the development of computing beyond just the memory applications themselves. And to elaborate on spintronics, we’re talking about logic, not spin memory – using spin rather than charge, which is current.

Q: Flash has an endurance issue (a maximum number of writes before it fails). In your opinion, what is the minimum acceptable endurance (number of writes) that an emerging memory should support?

A: It’s amazing how many techniques have fallen into place since wear was an issue in flash SSDs. Today’s software understands which loads have high write levels and which don’t, and different SSDs can be used to handle the two different kinds of load. On the SSD side, flash endurance has continually degraded with the adoption of MLC, TLC, and QLC, and is sometimes measured in the hundreds of cycles. What this implies is that any emerging memory can get by with an equally low endurance as long as it’s put behind the right controller. In high-speed environments this isn’t a solution, though, since controllers add latency, so “Near Memory” (the memory tied directly to the processor’s memory bus) will need to have higher endurance.
Still, an area that can help accommodate that is the practice of putting code into memories that have low endurance and data into higher-endurance memory (which today would be DRAM). Since emerging memories can provide more bits at a lower cost and power than DRAM, the write load to the code space should be lower, since pages will be swapped in and out less frequently. The endurance requirements will depend on this swapping, and I would guess that the lowest acceptable level would be in the tens of thousands of cycles.

Q: It seems that persistent memory is more of an enterprise benefit than a consumer benefit, and consumer acceptance helps with advancement and cost scaling. Do you agree? I use SSDs as an example: once consumers started using them, the technology advanced and prices came down greatly.

A: Anything that drives increased volume will help. In most cases any change to large-scale computing works its way down to the PC, so this should happen in time here, too. But today there’s a growing amount of MRAM use in personal fitness monitors, and this will help drive costs down, so initial demand will not come exclusively from enterprise computing. At the same time, the IBM FlashDrive that we mentioned uses MRAM too, so both enterprise and consumer are already working simultaneously to grow consumption.

Q: The CXL diagram (slide 22 in the PDF) has two CXL switches between the CPUs and the memory. How much latency do you expect the switches to add, and how does that change where CXL fits on the array of memory choices from a performance standpoint?

A: The CXL delay goals are very aggressive, but I am not sure that an exact number has been specified. It’s on the order of 70ns per “Hop,” which can be understood as the delay of going through a switch or a controller. Naturally, software will evolve to work with this and will move data that has high bandwidth requirements but is less latency-sensitive to more remote areas, while keeping the more latency-sensitive data in near memory.

Q: Where can I learn more about the topic of emerging memories?

A: Here are some resources to review:

  • LLM in a Flash: Efficient Large Language Model Inference with Limited Memory, Keivan Alizadeh, et al., arXiv:2312.11514 [cs.CL]
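As a back-of-the-envelope illustration of the switch-latency question above, here is a minimal sketch of how the per-Hop figure adds up. It assumes the ~70ns-per-Hop estimate quoted in that answer and a simple model in which a read pays one Hop out and one Hop back for each switch and for the endpoint controller; these are illustrative assumptions, and other estimates discussed in these Q&As run closer to 100ns per Hop.

```python
# Back-of-the-envelope CXL read-latency sketch. Assumptions: ~70 ns per Hop
# (the figure quoted in the answer above), where a Hop is one pass through a
# switch or a controller, and a read pays one Hop out plus one Hop back per
# device in the path.
HOP_NS = 70

def added_read_latency_ns(num_switches: int) -> int:
    """Estimated latency added by `num_switches` CXL switches plus the
    endpoint's CXL memory controller, for one request/response round trip."""
    devices_in_path = num_switches + 1   # switches plus the memory controller
    return 2 * devices_in_path * HOP_NS  # request out + data back

for switches in (0, 1, 2):
    print(f"{switches} switch(es): ~{added_read_latency_ns(switches)} ns added")
# Prints ~140 ns direct-attached, ~280 ns through one switch, and ~420 ns
# through a two-switch topology like the one shown on slide 22.
```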

Your Questions Answered on Persistent Memory, CXL, and Memory Tiering

SNIA CMS Community

Jul 10, 2023

With the persistent memory ecosystem continuing to evolve with new interconnects like CXL™ and applications like memory tiering, our recent Persistent Memory, CXL, and Memory Tiering – Past, Present, and Future webinar was a big success. If you missed it, watch it on demand HERE! Many questions were answered live during the webinar, but we did not get to all of them. Our moderator Jim Handy from Objective Analysis and experts Andy Rudoff and Bhushan Chithur from Intel, David McIntyre from Samsung, and Sudhir Balasubramanian and Arvind Jagannath from VMware have taken the time to answer them in this blog. Happy reading!

Q: What features or support are required from a CXL-capable endpoint, e.g. an accelerator, to support memory pooling? Any references?

A: You will have two interfaces, one for the primary memory accesses and one for the management of the pooling device. The primary memory interface is CXL.mem, and the management interface will be via CXL.io or via a sideband interface. In addition, you will need to implement a robust failure recovery mechanism, since the blast radius is much larger with memory pooling.

Q: How do you recognize weak information security (in CXL)?

A: CXL has multiple features around security, and there is considerable activity around this in the Consortium. For specifics, please see the CXL Specification or send us a more specific question.

Q: If the system (e.g. an x86 host) wants to deploy CXL memory (Type 3) now, is there any OS kernel configuration or BIOS configuration needed to make the hardware run with VMware (ESXi)? How easy or difficult is this setup process?

A: A simple CXL Type 3 Memory Device providing volatile memory is typically configured by the pre-boot environment and reported to the OS along with any other main memory. In this way, a platform that supports CXL Type 3 Memory can use it without any additional setup and can run an OS that contains no CXL support, and the memory will appear as memory belonging to another NUMA node. That said, using an OS that does support CXL enables more complex management, error handling, and more complex CXL devices.

Q: There was a question on "Hop" length. Would you clarify?

A: In the webinar, around minute 48, it was stated that a Hop was 20ns, but this is not correct. A Hop is often spoken of as "around 100ns." The Microsoft Azure Pond paper quantifies it four different ways, which range from 85ns to 280ns.

Q: Do we have any idea how much longer the latency will be?

A: The language CXL folks use is "Hops." An address going into CXL is one Hop, and data coming back is another. In a fabric it would be twice that, or four Hops. The latency for a Hop is somewhere around 100ns, although other latencies are accepted.

Q: For memory semantic SSDs: there appears to be a trend among 2LM device vendors to presume the host system will be capable of providing telemetry data for a device-side tiering mechanism to decide what data should be promoted and demoted. Meanwhile, software vendors seem to be focused on the devices providing telemetry for a host-side tiering mechanism to tell the device where to move the memory. What is your opinion on how and where tiering should be enforced for 2LM devices like a memory semantic SSD?

A: Tiering can be managed both by the host and within computational storage drives that could have an integrated compute function to manage local tiering (think edge applications).

Q: Regarding VM performance in tiering: it appears you're comparing the performance of 2 VMs against 1.
It looked like the performance of each individual VM on the tiering system was slower than the DRAM-only VM. Can you explain why we should take the performance of 2 VMs against the 1 VM? Is the proposal that we otherwise would have required those 2 VMs to run on separate NUMA nodes, and now they're running on the same NUMA node?

A: Here the use case was lower TCO and increased memory capacity, along with aggregate VM performance, versus running fewer VMs on DRAM only. In this use case, the DRAM per NUMA node was 384GB, the Tier2 memory per NUMA node was 768GB, and the VM RAM was 256GB. In the DRAM-only case, if we have to run business-critical workloads, e.g. Oracle, with VM RAM = 256GB, we could only run 1 VM (256GB) per NUMA node (DRAM = 384GB); we cannot over-provision memory in the DRAM-only case, as every NUMA node has only 384GB. So potentially we could run 4 such VMs (VM RAM = 256GB) with NUMA node affinity set as we did in this use case, or, if we don't set NUMA node affinity, maybe 5 such VMs without completely maxing out the server RAM. Remember, we used NUMA node affinity in this use case to eliminate any cross-NUMA latency. Now with Tier2 memory in the mix, each NUMA node has 384GB DRAM and 768GB Tier2 memory, so theoretically one could run 16-17 such VMs (VM RAM = 256GB). Hence we are able to increase resource utilization, run more workloads, and increase transactions, giving lower TCO, increased capacity, and an aggregate performance improvement.

Q: CXL is changing very fast; we have had three protocol versions in two years. As a new consumer of CXL, what are the top three advantages of adopting CXL right away versus waiting a couple more years?

A: All versions of CXL are backward compatible. Users should have no problem using today's CXL devices with newer versions of CXL, although they won't be able to take advantage of any new features that are introduced after the hardware is deployed.

Q: What is ideal when using Agilex FPGAs as accelerators?

A: CXL 3.0 supports multiple accelerators via the CXL switching fabric. This is good for memory sharing across heterogeneous compute accelerators, including FPGAs.

Thanks again for your support of SNIA education, and we invite you to write askcmsi@snia.org for your ideas for future webinars and blogs!
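To make the arithmetic behind that tiering answer easier to follow, here is a minimal sketch that recomputes the VM counts from the figures quoted above (384GB of DRAM and 768GB of Tier2 memory per NUMA node, 256GB per VM). The four-node host size is an assumption inferred from the 16-17 VM figure; the exact server configuration was not stated in the answer.

```python
# Sketch of the capacity math behind the DRAM-only vs. tiered comparison.
# Figures are from the answer above; NUM_NODES = 4 is an inference, not a
# quoted value.
DRAM_PER_NODE_GB = 384
TIER2_PER_NODE_GB = 768
VM_RAM_GB = 256
NUM_NODES = 4  # assumed host size

def vms_per_node(capacity_gb: int) -> int:
    # With NUMA affinity and no memory over-provisioning, each VM must fit
    # entirely within one node's capacity.
    return capacity_gb // VM_RAM_GB

dram_only = vms_per_node(DRAM_PER_NODE_GB)                   # 1 VM per node
tiered = vms_per_node(DRAM_PER_NODE_GB + TIER2_PER_NODE_GB)  # 4 VMs per node

print(f"DRAM only : {dram_only} VM/node, {dram_only * NUM_NODES} VMs total")
print(f"With Tier2: {tiered} VM/node, up to {tiered * NUM_NODES} VMs total")
# Matches the answer: ~4 VMs host-wide with DRAM only (with affinity set),
# versus 16-17 VMs once 768GB of Tier2 memory is added per node.
```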

Scaling Management of Storage and Fabrics

Richelle Ahlvers

Apr 19, 2023

Composable disaggregated infrastructures (CDI) provide a promising solution to address the provisioning and computational efficiency limitations, as well as the hardware and operating costs, of integrated, siloed systems. But how do we solve these problems in an open, standards-based way?

DMTF, SNIA, the OFA, and the CXL Consortium are working together to provide elements of the overall solution, with Redfish® and SNIA Swordfish manageability providing the standards-based interface.

The OpenFabrics Alliance (OFA) is developing an OpenFabrics Management Framework (OFMF) designed for configuring fabric interconnects and managing composable disaggregated resources in dynamic HPC infrastructures using client-friendly abstractions.

Want to learn more? On Wednesday, May 17, 2023, SNIA Storage Management Initiative (SMI) and the OFA are hosting a live webinar entitled “Casting the Net: Scaling Management of Storage and Fabrics” to share use cases for scaling management of storage and fabrics and beyond.

They’ll dive into:

  • What is CDI? Why should you care? How will it help you?
  • Why does standards-based management help?
  • What does a management framework for CDI look like?
  • How can you get involved? How can your engagement accelerate solutions?

In under an hour, this webinar will give you a solid understanding of how SMI and OFA, along with other alliance partners, are creating the approaches and standards to solve the puzzle of how to effectively address computational efficiency limitations.

Register here to join us on May 17th.

David McIntyre

Jan 19, 2023

How are Compute Express Link™ (CXL™) and the SNIA Smart Data Accelerator Interface (SDXI) related? It’s a topic we covered in detail at our recent SNIA Networking Storage Forum webcast, “What’s in a Name? Memory Semantics and Data Movement with CXL and SDXI,” where our experts, Rita Gupta and Shyam Iyer, introduced both SDXI and CXL, highlighted the benefits of each, discussed data movement needs in a CXL ecosystem, and covered SDXI advantages in a CXL interconnect. If you missed the live session, it is available in the SNIA Educational Library along with the presentation slides. The session was highly rated by the live audience, who asked several interesting questions. Here are answers to them from our presenters Rita and Shyam.

Q. Now that SDXI v1.0 is out, can application implementations use SDXI today?

A. Yes. Now that SDXI v1.0 is out, implementations can start building to the v1.0 SNIA standard. If you are looking to influence a future version of the specification, please consider joining the SDXI Technical Working Group (TWG) in SNIA. We are now in the planning process for post-v1.0 features, so we welcome all new members and implementors to come participate in this new phase of development. Additionally, you can use the SNIA feedback portal to provide your comments.

Q. You mentioned SDXI is interconnect-agnostic, and yet we are talking about SDXI and a specific interconnect here, i.e. CXL. Is SDXI architected to work on CXL?

A. SDXI is designed to be interconnect-agnostic. It standardizes the memory structures, function setup, control, etc. to make sure that a standardized mover can have an architected global state. It does not preclude an implementation from taking advantage of the features of an underlying interconnect. CXL will be an important instance, which is why it was a big part of this presentation.

Q. I think you covered it in the talk, but can you highlight some specific advantages for SDXI in a CXL environment and some ways CXL can benefit from an SDXI standardized data mover?

A. A CXL-enabled architecture expands the targetable system memory space for an architected memory data mover like SDXI. Also, as explained in the webcast, SDXI implementors have a few unique implementation choices in a CXL-based architecture that can further improve and optimize data movement. So, while SDXI is interconnect-agnostic, SDXI and CXL can be great buddies :-). With CXL concepts like “shared memory” and “pooled memory,” SDXI can now become a multi-host data mover. This is huge because it eliminates a lot of software stack layers needed to perform both intra-host and inter-host bulk data transfers.

Q. CXL is termed low latency; what are the latency targets for CXL devices?

A. While overall CXL device latency targets may depend on the media, the guidance is for CXL access latency to be within one NUMA hop. In other words, CXL memory access should have latency similar to that of remote-socket DRAM access.

Q. How are SNIA and CXL collaborating on this?

A. SNIA and CXL have a marketing alliance agreement that allows SNIA and CXL to work on joint marketing activities, such as this webcast, to promote collaborative work. In addition, many of the contributing companies are members of both CXL and the SNIA SDXI TWG. This helps ensure the two groups stay connected.

Q. What is the difference between memory pooling and memory sharing? What are the advantages of each?

A.
Memory pooling (also referred to as memory disaggregation) is an approach where multiple hosts dynamically allocate dedicated memory resources from a pool of CXL memory devices as needed. The memory resources are allocated to one host at any given time. The technique ensures optimum and efficient usage of expensive memory resources, providing a TCO advantage. In a memory sharing usage model, allocated blocks of memory can be used by multiple hosts at the same time. Memory sharing provides optimum usage of memory resources and also efficiency in memory allocation and management.

Q. Can SDXI enable data movement across CXL devices in a peer-to-peer fashion?

A. Yes, indeed. SDXI devices can target all memory regions accessible to the host and, among other usage models, perform data movement across CXL devices in a peer-to-peer fashion. Of course, this has some implications for platform support, but SDXI is designed for such data movement use cases as well.

Q. Trying to look for equivalent terms… can you think of SDXI as what NVMe® is for NVMe-oF™, with CXL as the underlying transport fabric like TCP?

A. There are some similarities, but the use cases are very different, and therefore I suspect the implementations would drive the development of these standards very differently. Like NVMe, which defines various opcodes to perform storage operations, SDXI defines various opcodes to perform memory operations. It is also true that SDXI opcodes/descriptors can be used to move data using PCIe and CXL as the I/O interconnect, and a future expansion to Ethernet-based interconnects can be envisioned. Having said that, memory operations have different SLAs, performance characteristics, byte-addressability concerns, and ordering requirements, among other things. SDXI is enabling a new class of such devices.

Q. Is there a limitation on the granularity of a transfer? Is SDXI limited to bulk transfers only, or does it also address small granular transfers?

A. As a standard specification, SDXI allows implementations to process descriptors for data transfer sizes ranging from 1 byte to 4GB. That said, software may use size thresholds to decide when to offload data transfers to SDXI devices, depending on implementation quality.

Q. Will there be a standard SDXI driver available from SNIA, or is each company responsible for building a driver compatible with the SDXI-compatible hardware it builds?

A. The SDXI TWG is not developing a common open-source driver because of license considerations in SNIA. The SDXI TWG is beginning to work on a common user-space open-source library for applications. The SDXI spec enables the development of a common class-level driver by reserving a class code with PCI-SIG for PCIe-based implementations. Driver implementations are being enabled and influenced by discussions in the SDXI TWG and other forums.

Q. Software development is throttled by the availability of standard CXL host platforms. When will those be available, and for what versions?

A. We cannot comment on specific product or platform availability and would advise connecting with the vendors on this. A CXL 1.1-based host platform is available in the market and has been publicly announced.

Q. Does a PCIe-based data mover with an SDXI interface actually DMA data across the PCIe link? If so, isn't this higher latency and less power efficient than a memcpy operation?

A.
There is quite a bit of prior-art research in academia and industry indicating that, above certain data transfer size thresholds, an offloaded data movement device like an SDXI device can be more performant than employing a CPU thread. While software can employ more CPU threads to do the same operation via memcpy, it comes at a cost. By offloading those transfers to SDXI devices, expensive CPU threads can be used for other computational tasks, helping improve overall TCO. Certainly, this will depend on implementation quality, but SDXI is enabling such innovations with a standardized framework.

Q. Will SDXI impact/change/unify NVMe?

A. SDXI is expected to complement the data movement and acceleration needs of systems comprising NVMe devices, as well as needs within an NVMe subsystem, to improve storage performance. In fact, SNIA has created a subgroup, the "CS+SDXI" subgroup, comprised of members of SNIA's Computational Storage TWG and SDXI TWG, to think about these kinds of use cases. Many computational storage use cases can be enhanced with a combination of NVMe and SDXI-enabled technologies.
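To make the size-threshold point in these answers concrete, here is a minimal sketch of how host software might choose between a plain CPU copy and an offloaded transfer. The 64 KiB threshold and the sdxi_copy() helper are hypothetical placeholders used only for illustration; they are not names or values defined by the SDXI specification, which instead defines the descriptors, descriptor rings, and context structures a real driver or library would use.

```python
# Hypothetical sketch only: illustrates the "offload above a size threshold"
# idea discussed above. sdxi_copy() and the 64 KiB threshold are assumptions,
# not SDXI-defined names or values.
OFFLOAD_THRESHOLD = 64 * 1024  # bytes; real tuning depends on implementation quality

def copy_buffer(dst: bytearray, src: bytes) -> None:
    if len(src) < OFFLOAD_THRESHOLD:
        # Small transfers: a CPU copy avoids descriptor-submission overhead.
        dst[:len(src)] = src
    else:
        # Large transfers: hand the copy to an SDXI-style data mover so the
        # CPU thread is free for other work (hypothetical helper below).
        sdxi_copy(dst, src)

def sdxi_copy(dst: bytearray, src: bytes) -> None:
    # Placeholder standing in for building an SDXI descriptor, posting it to
    # a descriptor ring, and waiting on its completion signal.
    dst[:len(src)] = src
```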

Memory Semantics and Data Movement with CXL and SDXI

David McIntyre

Nov 11, 2022

Using software to perform memory copies has been the gold standard for applications performing memory-to-memory data movement or system memory operations. With new accelerators and memory types enriching the system architecture, accelerator-assisted memory data movement and transformation need standardization. At the forefront of this standardization movement is the SNIA Smart Data Accelerator Interface (SDXI), which is designed as an industry-open standard that is Extensible, Forward-compatible, and Independent of I/O interconnect technology.

Adjacently, Compute Express Link™ (CXL™) is an industry-supported Cache-Coherent Interconnect for Processors, Memory Expansion, and Accelerators. CXL is designed to be an industry-open standard interface for high-speed communications, as accelerators are increasingly used to complement CPUs in support of emerging applications such as Artificial Intelligence and Machine Learning.

How are these two standards related? What are the unique advantages of each? Find out on November 30, 2022 in our next SNIA Networking Storage Forum webcast, “What’s in a Name? Memory Semantics and Data Movement with CXL and SDXI,” where SNIA and CXL experts working to develop these standards will:
  • Introduce SDXI and CXL
  • Discuss data movement needs in a CXL ecosystem
  • Cover SDXI advantages in a CXL interconnect
Please join us on November 30th to learn more about these exciting technologies.
