With a topic like Emerging Memory Poised to Explode, no wonder this SNIA Solid State Storage Initiative webcast generated so much interest! Our audience had some great questions, and, as promised, our experts Tom Coughlin and Jim Handy provide the answers in this blog. Read on, and join SNIA at the Persistent Memory Summit January 24, 2019 in Santa Clara CA. Details and complimentary registration are at www.snia.org/pm-summit.
Q. Can you mention one or two key applications leading the effort to leverage Persistent Memory?
A: Right now the main applications for Persistent Memory are in Storage Area Networks (SANs), where NVDIMM-Ns (Non-Volatile Dual In-line Memory Modules) are being used for journaling. SAP HANA, SQL Server, Apache Ignite, Oracle RDBMS, eXtremeDB, Aerospike, and other in-memory databases are undergoing early deployment with NVDIMM-N and with Intel’s Optane DIMMs in hyperscale datacenters. IBM is using Everspin Magnetoresistive Random-Access Memory (MRAM) chips for higher-speed functions (write cache, data buffer, streams, journaling, and logs) in certain Solid State Drives (SSDs), following a lead taken by Mangstor. Everspin’s STT MRAM DIMM is also seeing some success, but the company’s not disclosing a lot of specifics.
Q. I believe that anyone who can ditch the batteries for NVDIMM support will happily pay a mark-up on 3DXP DIMMs should Micron offer them.
A: Perhaps that’s true. I think that Micron, though, is looking for higher-volume applications. Micron is well aware of the size of the NVDIMM-N market, since the company is an important NVDIMM supplier. Everspin is probably also working on this opportunity, since its STT MRAM DIMM is similar, although at a significantly higher price than Dynamic Random Access Memory (DRAM).
Volume is the key to more applications for 3D XPoint DIMMs and any other memory technology. It may be that the rise of Artificial Intelligence (AI) applications will help drive greater use of many of these fast Non-Volatile Memories.
Q. Any comments on HPE's Memristor?
A: HPE went very silent on the Memristor at about the same time that the 3D XPoint Memory was introduced. The company explained in 2016 that the first generation of “The Machine” would use DRAM instead of the Memristor. This leads us to suspect that 3D XPoint turned some heads at HPE. One likely explanation is that HPE by itself would have a very difficult time reaching the scale required to bring the Memristor’s cost to the necessary level to justify its use.
Q. Do you expect NVDIMM-N will co-exist into the future with other storage class memories because of its DRAM speed and essentially unlimited endurance?
A: Yes. The NVDIMM-N should continue to appeal to certain applications, especially those that value its technical attributes enough to offset its higher-than-DRAM price.
Q. What are the write/erase endurance limitations of PCM and STT MRAM (vis-à-vis DRAM’s infinite endurance)?
A: Intel and Micron have never publicly disclosed their endurance figures for 3D XPoint, although Jim Handy has backed out numbers in his Memory Guy blog (http://TheMemoryGuy.com/examining-3d-xpoints-1000-times-endurance-benefit/). His calculations indicate an endurance of more than 30K erase/write cycles, but the number could be significantly lower than this, since SSD controllers do a good job of reducing the number of writes that the memory chip actually sees. The SSD Guy blog has a series on this (http://thessdguy.com/how-controllers-maximize-ssd-life/), also available as a SNIA SSSI TechNote. Everspin’s EMD3D256M STT MRAM specification lists an endurance of 10^10 cycles.
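To give a feel for how a figure like that can be backed out (the inputs here are made up for illustration and are not the actual numbers from that post): a drive rated for 30 drive writes per day over a 5-year warranty is specified to absorb roughly 30 × 365 × 5 ≈ 55,000 full-drive writes, so if writes were spread perfectly evenly, each cell would need on the order of 55,000 program/erase cycles to meet the spec. Because real controllers coalesce, cache, and otherwise reduce the writes the media actually sees, the underlying chips can get by with less endurance than that drive-level arithmetic suggests.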
Q. Your thoughts on Nanotube RAM (NRAM)?
A: Although nanotube memory is very interesting, it is only one member in a sea of contenders for the Persistent Memory crown. It’s very difficult to project the outcome of a device that’s not already in volume production.
Q. Will Micron commercialize 3D XPoint? I do not see them in the market as much as Intel on Optane.
A: Micron needs a clear path to profitability to rationalize entering the 3D XPoint market whereas Intel can justify losing money on the technology. Learn why in an upcoming post on The Memory Guy blog.
Thanks again to the bearded duo and their moderator, Alex McDonald, SNIA Solid State Storage Initiative Co-Chair! Bookmark the SNIA Brighttalk webcast link for more great webcasts in 2019!
Don't miss your chance to attend the SNIA's 7th Annual Persistent Memory Summit, co-located with the SNIA Annual Members’ Meeting on January 24, 2019 at a new location – Hyatt Regency Santa Clara CA. This innovative one-day event brings together industry leaders, solution providers, and users of technology to understand the ecosystem driving system memory and storage into a single, unified “persistent memory” entity. Agenda topics include Enabling Persistent Memory through the Operating System and Interpreted Languages; PM Solutions, Interfaces, and Media; and the NVM Programming Model in the Real World. The final agenda will be live later this month so stay tuned!
Many thanks to SNIA member Intel Corporation and the SNIA Solid State Storage Initiative for underwriting the Summit. New to the Summit in 2019 is an evening networking reception and a new, expanded demonstration area. Gold and Demonstration sponsor opportunities are now available. Complimentary registration is now open - visit www.snia.org/pm-summit to sign up, check out videos of 2018 sessions, and learn how to showcase your PM solutions at the event.
For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here Persistent Memory over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA data reads or writes from/to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target.
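To make that gap concrete, here is a minimal sketch in C of an RDMA Write to a remote PM buffer using the standard libibverbs API. Everything about the connection setup (the queue pair, memory registration, and the remote address and key) is assumed to exist already; the point of the sketch is the comment at the end, which marks exactly where today's protocol stops short of a persistence guarantee.

```c
#include <stdint.h>
#include <infiniband/verbs.h>

/* Sketch only: qp, local_mr, remote_addr, and remote_rkey are assumed to have
 * been set up by the usual connection-establishment code. */
int write_to_remote_pm(struct ibv_qp *qp, struct ibv_mr *local_mr,
                       void *local_buf, uint32_t len,
                       uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uint64_t)(uintptr_t)local_buf,
        .length = len,
        .lkey   = local_mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = remote_rkey;

    struct ibv_send_wr *bad_wr = NULL;
    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* When the completion for this work request arrives, it only tells us
     * that the local buffer can be reused. It does NOT tell us the data has
     * reached persistence at the target; it may still be sitting in the
     * target NIC or in a volatile CPU cache. The protocol extensions
     * discussed in this webcast add an explicit flush/commit step to close
     * that gap. */
    return 0;
}
```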
Join the Networking Storage Forum (NSF) on October 25, 2018 for our next live webcast, Extending RDMA for Persistent Memory over Fabrics. In this webcast, we will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system. Learn:
Why we can't treat PM just like traditional storage or volatile memory
What happens when you write to memory over RDMA
Which programming model and protocol changes are required for PMoF
How proposed RDMA extensions for PM would work
We believe this webcast will appeal to developers of low-latency and/or high-availability datacenter storage applications and be of interest to datacenter developers, administrators and users. I encourage you to register today. Our NSF experts will be on hand to answer your questions. We look forward to your joining us on October 25th.
SNIA thanks and celebrates the many hardworking SNIA member volunteers whose technical work was awarded Best of Show at the recent Flash Memory Summit.
[Photo: Jennifer Dietz and Eden Kim accept the FMS award from Jay Kramer]
SNIA won the FMS Most Innovative Flash Memory Technology Award, recognizing innovations that will change the way flash memory works and is used in products, for the SNIA Technical Position Real World Storage Workloads Performance Test Specification (RWSW PTS), developed by the SNIA Solid State Storage Technical Work Group (SSS TWG). “Real World Workloads are important for Data Center, IT, and Storage professionals,” said Eden Kim, Chair of the SSS TWG and CEO of SNIA member company Calypso Systems, “because real world workloads are very different from synthetic lab workloads and are key determinants in datacenter server and storage performance, optimization and qualification.” Eden Kim and Jennifer Dietz of SNIA member company Intel, Co-Chair of the SNIA Solid State Storage Initiative Marketing Committee, accepted the award from Jay Kramer of Flash Memory Summit.
[Photo: Mark Carlson and Bill Martin accept the award on behalf of SNIA from Jay Kramer]
SNIA also won the FMS Best of Show Technology Innovation Award, which recognizes that cloud and other large data centers typically prioritize storage solutions that achieve the highest possible performance while avoiding proprietary vendor lock-in. SNIA and EXTEN HyperDynamic™ NVMe over Fabrics high-performance storage software were recognized for being the first in the industry to provide a solution that combines an open storage management specification with EXTEN’s storage software, based on the SNIA Swordfish™ and DMTF Redfish® specifications. “We congratulate EXTEN Technologies for its innovation and well-deserved accolade,” said Don Deel, SNIA Storage Management Initiative Governing Board Chair. “By integrating SNIA Swordfish into its solution, EXTEN Technologies’ customers will benefit from a standards-based API that does not require learning the intricacies of storage infrastructure to handle day-to-day storage needs.” Accepting the award for SNIA at FMS were Mark Carlson of SNIA member company Toshiba Memory Systems and Bill Martin of SNIA member company Samsung Electronics, Co-Chairs of the SNIA Technical Council.
Congratulations to all the SNIA volunteers who participated in the development of these award-winning specifications.
SNIA Sessions at FMS Now Available for Viewing and Download
Also at Flash Memory Summit, SNIA work and volunteers were on display in sessions on persistent memory (PM), solid state storage, form factors, and testing. A two-day PM track featured talks on advances in PM, PM hardware, PM software and applications, and remote persistent memory (PMEM-101-1; PMEM-102-1; PMEM-201-1; and PMEM-202-1).
SNIA is now partnering with the Enterprise and Datacenter SSD Form Factor Working Group (EDSFF) on form factors and a Wednesday session outlined their advances (SSD-201-1). SNIA also presented a preconference seminar (G) on bringing your SSD testing up to date, and a SNIA Education afternoon with sessions on flash storage, programming and networking, buffers, queues, and caches; and a BoF on PM futures. Check out all these sessions and more on the Flash Memory Summit proceedings page.
SNIA Executive Director Michael Oros shared SNIA strategic directions and areas of focus in an FMS main stage presentation, available here.
SNIA also presented updates on its work in Persistent Memory, Solid State Storage, and alliances at a well-attended reception on Monday evening. The SSSI honored Doug Voigt, co-chair of the NVM Programming Technical Work Group, for his contributions to SNIA and the NVM Programming Model. We continued our discussions on the exhibit floor, featuring JEDEC-compliant NVDIMM-Ns from SNIA Persistent Memory and NVDIMM SIG members AgigA Tech, Micron, Netlist, SMART Modular Technologies, and Viking in a Supermicro box running an open source performance demonstration. If you missed it, the SIG will showcase a similar demonstration at the upcoming SNIA Storage Developer Conference September 24-27, 2018, and the SNIA Persistent Memory Summit January 24, 2019 at the Hyatt Santa Clara. Register now for both events!
By Paul Grun, Chair, OpenFabrics Alliance and Senior Technologist, Cray, Inc.
Remote Persistent Memory (RPM) is rapidly emerging as an important new technology. But understanding a new technology, and grasping its significance, requires engagement across a wide range of industry organizations, companies, and individuals. It takes a village, as they say.
Technologies that are capable of bending the arc of server architecture come along only rarely. It’s sometimes hard to see one coming because it can be tough to distinguish between a shiny new thing, an insignificant evolution in a minor technology, and a serious contender for the Technical Disrupter of the Year award. Remote Persistent Memory is one such technology, the ultimate impact of which is only now coming into view. Two relatively recent technologies serve to illustrate the point: the emergence of dedicated, high performance networks beginning in the early 2000s and, more recently, the arrival of non-volatile memory technologies, both of which are leaving a significant mark on the evolution of computer systems. But what happens when those two technologies are combined to deliver access to persistent memory over a fabric? It seems likely that such a development will positively impact the well-understood memory hierarchies that are the basis of all computer systems today. And that, in turn, could cause system architects and application programmers to re-think the way that information is accessed, shared, and stored. To help us bring the subject of RPM into sharp focus, there is currently a concerted effort underway to put some clear definition around what is shaping up to be a significant disrupter.
For those who aren’t familiar, Remote Persistent Memory refers to a persistent memory service that is accessed over a fabric or network. It may be a service shared among multiple users, or dedicated to one user or application. It’s distinguished from local Persistent Memory, which refers to a memory device attached locally to the processor via a memory or I/O bus, in that RPM is accessed via a high performance switched fabric. For our purposes, we’ll further refine our discussion to local fabrics, neglecting any discussion of accessing memory over the wide area.
Most important of all, Persistent Memory, including RPM, is definitely distinct from storage, whether that is file, object or block storage. That’s why we label this as a ‘memory’ service – to distinguish it from storage. The key distinction is that the consumer of the service recognizes and uses it as it would any other level in the memory hierarchy. Even though the service could be implemented using block or file-oriented non-volatile memory devices, the key is in the way that an application accesses and uses the service. This isn’t faster or better storage; it’s a whole different kettle of fish.
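One way to see the difference is in how an application touches the data. The sketch below (plain C, heavily simplified, with error handling omitted) contrasts a storage-semantics update, which goes through write() and fsync(), with a memory-semantics update to a persistent memory region mapped directly into the address space, where ordinary CPU stores followed by a sync of the affected range do the job. The file path and sizes are hypothetical; a real application would typically sit on a DAX-enabled file system and might use a library such as PMDK's libpmem instead of raw msync().

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096            /* illustrative size only */

/* Storage semantics: the update travels through the kernel I/O stack. */
static void update_via_storage(int fd, const char *msg)
{
    pwrite(fd, msg, strlen(msg) + 1, 0);
    fsync(fd);                       /* durable only once the sync completes */
}

/* Memory semantics: the application stores directly into the mapped
 * persistent memory and then makes the affected range persistent.
 * msync() is used here for portability; on a DAX mapping, a user-space
 * cache flush (e.g. libpmem's pmem_persist()) plays the same role. */
static void update_via_memory(char *pm, const char *msg)
{
    strcpy(pm, msg);                          /* ordinary CPU stores        */
    msync(pm, strlen(msg) + 1, MS_SYNC);      /* make the stores persistent */
}

int main(void)
{
    /* "/mnt/pmem/example" is a hypothetical file on a PM-aware file system. */
    int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
    ftruncate(fd, REGION_SIZE);

    char *pm = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);

    update_via_storage(fd, "hello via the storage stack");
    update_via_memory(pm, "hello via loads and stores");

    munmap(pm, REGION_SIZE);
    close(fd);
    return 0;
}
```

Both updates land in the same file, but only the second is a 'memory' access in the sense used above: no system call sits on the data path between the application and the bytes, only on the persistence step.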
So how do we go about discovering the ultimate value of a new technology like RPM? So far, a lively discussion has been taking place across multiple venues and industry events. These aren’t ad hoc discussions, nor are they tightly scripted events; they are taking place in a loosely organized fashion designed to encourage lots of participation and keep the ball moving forward. Key discussions on the topic have hopscotched from SNIA’s Storage Developer Conference, to SNIA/SSSI’s Persistent Memory Summit, to the OpenFabrics Alliance (OFA) Workshop and others. Each of these industry events has given the community at large an opportunity to discuss and develop the essential ideas surrounding RPM. The next installment will occur at the upcoming Flash Memory Summit in August, where there will be four sessions all devoted to discussing Remote Persistent Memory.
Having frequent industry gatherings is a good thing, naturally, but that by itself doesn’t answer the question of how we go about progressing a discussion of Remote Persistent Memory in an orderly way. A pretty clear consensus has emerged that RPM represents a new layer in the memory hierarchy and therefore the best way to approach it is to take a top-down perspective. That means starting with an examination of the various ways that an application could leverage this new player in the memory hierarchy. The idea is to identify and explore several key use cases. Of course, the technology is in its early infancy, so we’re relying on the best instincts of the industry at large to guide the discussion.
Once there is a clear idea of the ways that RPM could be applied to improve application performance, efficiency or resiliency, it’ll be time to describe how the features of an RPM service are exposed to an application. That means taking a hard look at network APIs to be sure they export the functions and features that applications will need to access the service. The API is key, because it defines the ways that an application actually accesses a new network service. Keep in mind that such a service may or may not be a natural fit to existing applications; in some cases, it will fit naturally meaning that an existing application can easily begin to utilize the service to improve performance or efficiency. For other applications, more work will be needed to fully exploit the new service.
Notice that the development of the API is being driven from the top down by application requirements. This is a clear break from traditional network design, where the underlying network and its associated API are defined roughly in tandem. Contrast that to the approach being taken with RPM, where the set of desired network characteristics is described in terms of how an application will actually use the network. Interesting!
Armed with a clear sense of how an application might use Remote Persistent Memory and the APIs needed to access it, now’s the time for network architects and protocol designers to deliver enhanced network protocols and semantics that are best able to deliver the features defined by the new network APIs. And it’s time for hardware and software designers to get to work implementing the service and integrating it into server systems.
With all that in mind, here’s the current state of affairs for those who may be interested in participating. SNIA, through its NVM Programming Technical Working Group, has published a public document describing one very important use case for RPM – High Availability. The document describes the requirements that the SNIA NVM Programming Model – first released in December 2013 – might place on a high-speed network. That document is available online. In keeping with the ‘top-down’ theme, SNIA’s work begins with an examination of the programming models that might leverage a Remote Persistent Memory service, and then explores the resulting impacts on network design. It is being used today to describe enhancements to existing APIs including both the Verbs API and the libfabric API.
In addition, SNIA and the OFA have established a collaboration to explore other use cases, with the idea that those use cases will drive additional API enhancements. That collaboration is just now getting underway and is taking place during open, bi-weekly meetings of the OFA’s OpenFabrics Interfaces Working Group (OFIWG). There is also a dedicated mailing list; to subscribe, go to www.lists.openfabrics.org and sign up for the Ofa_remotepm mailing list.
And finally, we’ll be discussing the topic at the upcoming Flash Memory Summit, August 7-9, 2018. Just go to the program section and click on the Persistent Memory major topic, and you’ll find a link to PMEM-202-1: Remote Persistent Memory.
See you in Santa Clara!
Persistent Memory (PM) has made tremendous strides since SNIA’s first Non-Volatile Memory Summit in 2013. With a name change to Persistent Memory Summit in 2017, that event continued the buzz with 350+ attendees and a focus turning to applications.
Now in 2018, the agenda for the SNIA Persistent Memory Summit, upcoming January 24 at the Westin San Jose, reflects the integration of PM in a number of organizations. Zvonimir Bandic of Western Digital Corporation will kick off the day exploring the “exabyte challenge” of persistent memory centric architectures and memory fabrics. The fairly new frontier of Persistent Memory over Fabrics (PMoF) returns as a topic with speakers from the OpenFabrics Alliance, Cray, Eideticom, and Mellanox. Performance is always evolving, and Micron Technology, Enmotus, and Calypso Systems will give their perspectives. And the day will dive into the future of media with speakers from Nantero and Spin Transfer Technologies, and a panel led by HPE will review new interfaces and how they relate to PM.
A highlight of the Summit will be a panel on applications and cloud with a PM twist, featuring Dreamworks, GridGain, Oracle, and Aerospike. Leading up to that will be a commentary on file systems and persistent memory from NetApp and Microsoft, and a discussion of virtualization of persistent memory presented by VMware. SNIA found a number of users interested in persistent memory support in both Windows Server 2016 and Linux at recent events, so Microsoft and Linux will update us on the latest developments. Finally, you will want to know where the analysts weigh in on PM, so Coughlin Associates, Evaluator Group, Objective Analysis, and WebFeet Research will add their commentary.
During breaks and the complimentary lunch, you can tour Persistent Memory demos from the SNIA NVDIMM SIG, SMART Modular, AgigA Tech, Netlist, and Viking Technology.
Make your plans to attend this complimentary event by registering here: http://www.snia.org/pm-summit. See you in San Jose!
Guest Columnist: Paul Grun, Advanced Technology Development, Cray, Inc. and Vice-Chair, Open Fabrics Alliance (OFA)
Earlier this year, SNIA hosted its one-day Persistent Memory Summit in San Jose; it was my pleasure to be invited to participate by delivering a presentation on behalf of the OpenFabrics Alliance. Check it out here.
The day-long Summit program was chock-full of deeply technical, detailed information about the state of the art in persistent memory technology, coupled with previews of some possible future directions this exciting technology could conceivably take. The Summit played to a completely packed house, including an auxiliary room equipped with a remote video feed. Quite the event!
But why would the OpenFabrics Alliance (the OFA) be offering a presentation at a Persistent Memory (PM) Summit, you ask? Fabrics! Which just happens to be the OFA’s forte.
For several years now, SNIA’s NVM Programming Model Technical Working Group (NVMP TWG) has been describing programming models designed to deliver high availability, the primary thesis for which is simply stated – data isn’t truly ‘highly available’ until it is stored persistently in at least two places. Hence the need to access remote persistent memory via a fabric in a highly efficient, and performant, manner. And that’s where the OFA comes in.
For those unfamiliar with us, the OFA concerns itself with developing open source network software to allow applications to get the most performance possible from the network. Historically, that has meant that the OFA has developed libraries and kernel modules that conform to the Verbs specification as defined in the InfiniBand Architecture specifications. Over time, the suite has expanded to include software for derivative specifications such as RoCE (RDMA over Converged Ethernet) and iWARP (RDMA over TCP/IP). In today’s world, much of the work of maintaining the Verbs API has been assumed by the open source community itself. Success!
Several years ago, the OFA began an effort called the OpenFabrics Interfaces project to define a network API now known as ‘libfabric’. This API complements the Verbs API; Verbs continues into the foreseeable future as the API of choice for verbs-based fabrics such as InfiniBand. The idea was that the libfabric API would be driven mainly by the unique requirements of the consumers of network services. The result would be networking solutions that are transport independent and that meet the needs of application and middleware developers through a freely available open source API.
So, what does all this have to do with persistent memory? A great deal!
By now, we have come to the realization that remote, fabric-attached persistent memory, while very much like local memory in many respects, has some unique characteristics. To state the obvious, it has persistence akin to that found in classical file systems, but it has the potential to be accessed using fast memory semantics instead of conventional file-based POSIX semantics. Accomplishing this implies a need for new features exposed to consumers through the API, giving the consumer greater control over the persistence of data written to the remote memory. Fortunately, the libfabric framework was designed from the outset for flexibility, which should make it straightforward to define the new structures needed to support Persistent Memory accesses over a high performance, switched fabric.
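As a purely illustrative example of what "greater control over persistence" could look like to a libfabric consumer, the fragment below posts an RMA write and asks for the completion to be withheld until the data has been committed at the target. fi_writemsg() and the fi_msg_rma structure are part of today's libfabric API; treat the FI_COMMIT_COMPLETE completion level as an assumption here, since it reflects the kind of persistence-aware semantics this work is still defining rather than settled behavior across providers.

```c
#include <sys/uio.h>
#include <rdma/fabric.h>
#include <rdma/fi_rma.h>

/* Sketch only: the endpoint, memory descriptor, destination address, and
 * remote key are assumed to come from the usual libfabric setup code. */
ssize_t write_with_commit(struct fid_ep *ep, void *buf, size_t len, void *desc,
                          fi_addr_t dest, uint64_t remote_addr, uint64_t rkey,
                          void *context)
{
    struct iovec iov      = { .iov_base = buf, .iov_len = len };
    struct fi_rma_iov rma = { .addr = remote_addr, .len = len, .key = rkey };
    struct fi_msg_rma msg = {
        .msg_iov       = &iov,
        .desc          = &desc,
        .iov_count     = 1,
        .addr          = dest,
        .rma_iov       = &rma,
        .rma_iov_count = 1,
        .context       = context,
    };

    /* FI_COMMIT_COMPLETE (assumed here) asks the provider to delay the
     * completion until the data has been committed at the target, rather
     * than merely delivered to the remote NIC. How a provider achieves
     * that for persistent memory is exactly what this work is defining. */
    return fi_writemsg(ep, &msg, FI_COMPLETION | FI_COMMIT_COMPLETE);
}
```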
My presentation at the Persistent Memory Summit had two main goals: the first was to introduce the OpenFabrics Alliance’s approach to API development, and the second was to begin the discussion of API requirements to support Persistent Memory. For example, during the talk we drew a distinction between ‘Data Storage’ (what it means to access conventional storage) and ‘Data Access’ (accessing persistent memory over a fabric). As the slides in the presentation make clear, these are two very different use cases, and yet the enhanced libfabric API must support both equally well. At the end of the talk, we presented some ideas for what a converged I/O stack designed to support both use cases might look like. Naturally, this is just the beginning of the story.
There is much work to do. The work on the libfabric API is now underway in earnest in the OpenFabrics Interfaces Working Group (OFIWG), and as with the original libfabric work, we are beginning with a requirement gathering phase. We want to be sure that the resulting enhancements to the libfabric API meet the needs of applications accessing remote persistent memory.
The OFA is looking forward to presenting at the next Persistent Memory Summit, to be held January 24, 2018 at the Westin in San Jose, CA, where we will provide updates on OFA activities. Details on the Summit can be found here.
Being an open source organization, the OFA welcomes input from all interested parties in our efforts to support the emergence of this exciting new technology. Please go to the OpenFabrics website (https://openfabrics.org) to find information about regular working group meetings and how you can get involved. Or, feel free to reach out to me directly – grun@cray.com.
As promised during the live event, here are answers to all the questions we received.
Q. Is SRAM still used today?
A. SRAM is still in use today as embedded cache (Level 1/2/3) within a CPU, but it is very limited in external standalone packaging due to cost and size/capacity.
Q. Does 3D NAND use multiple voltage levels? Or does each layer use just two voltages?
A. 3D NAND is much like planar NAND in operation, supporting all the versions (SLC, MLC, TLC, and, in the future, even QLC). Other challenges exist going vertical, but they are unrelated to the voltage levels being supported.
Q. How does Symbolic IO work with the NVDIMM-P?
A. SNIA does not comment on individual companies. Please contact Symbolic IO directly.
Q. When do you see NVMe over Fibre Channel becoming mainstream? Just a "guesstimate"
A. At the time of this writing, FC-NVMe (the standardized form of NVMe over Fabrics using Fibre Channel) is in the final ratification phase and is technically stable. By the time you read this it will likely already be completed. The standard itself is already a mainstream form of NVMe-oF, and has been a part of the NVMe-oF standard since the beginning. Market usage of NVMe-oF will ramp up as vendors, products, and ecosystem developments continue to announce innovations. Different transport mechanisms solve different problems, and the uses for Fibre Channel do not overlap 100% with those for Ethernet. Having said that, it would not be surprising if both FC- and Ethernet-based NVMe-oF grew at a somewhat similar pace over the next couple of years.
Q. How are networked NVMe SSDs addressed?
A. Each NVMe-oF transport layer has an addressing scheme that is used for discovery. NVMe SSDs actually connect to the Fabric transport through a port connected with the NVMe controller. A thorough description of how this works can be found at the SNIA ESF webcast: "Under the Hood with NVMe over Fabrics." You can also check out the Q&A blog from that webcast.
Q. Does NVMe have any specific connectors, like SATA or SAS do?
A. When looking at the physical drive connector, the industry came up with a connector called "U.2" that supports NVMe, SAS, and SATA drives. However, the backplane in the host system must be connected correctly.
Q. Other than a real-estate savings, what advantage does the 3D NAND offer? Speed?
A. 3D NAND gives us added space for the floating gate. When planar NAND scales down to 20nm and 16nm (the measured width of that floating gate), only a few electrons, yes, actual electrons, separate the states. With 3D NAND we have room to grow the gate, allowing more electrons per level and making things like TLC and beyond a reality.
Don't forget, you can check out the recorded version of the webcast at your convenience, and you can download the webcast slides as well if you'd like to follow along. Remember, this webcast was part of a series. I encourage you to register today for our next one, which will be on September 28, 2017 at 10:00 am PT – Part Cyan – Storage Management. And please visit the SNIA ESF website for our full library of ESF webcasts.
Containers and persistent memory are both very hot topics these days. Containers are making it easier for developers to know that their software will run, no matter where it is deployed and no matter what the underlying OS is, as both Linux and Windows are now fully supported. Persistent memory, a revolutionary data storage technology, will boost the performance of the next generation of applications and libraries packaged into containers. On July 27th, SNIA is hosting a live webcast “Containers and Persistent Memory.”
In this webcast you’ll learn:
What SNIA is doing to advance persistent memory technologies
What the ecosystem enablement efforts are around persistent memory solutions and their relationship to containerized applications
How NVDIMMs are paving the way for plug-n-play adoption into containers environments for applications demanding extreme performance
How next-generation applications (often referred to as cloud-native or web-scale) can take advantage of both NVDIMMs and Containers to achieve both high performance and hyperscale
I hope you will join me, together with my colleagues Arthur Sainio, SNIA NVDIMM SIG Co-chair, and Alex McDonald, Co-chair of SNIA Solid State Storage and SNIA Cloud Storage Initiatives, to find out what application developers, storage administrators and the industry want to see to fully unlock the potential of persistent memory in a containerized environment. I encourage you to register today. And please bring your questions. We’ll be on-hand to answer them on the spot. I hope to see you there.
By now, we at the SNIA Ethernet Storage Forum (ESF) hope you are familiar with (perhaps even a loyal fan of) the popular "Everything You Wanted To Know About Storage But Were Too Proud To Ask" webcast series. On August 1st, the "Too Proud to Ask" train will make another stop. In this seventh session, "Everything You Wanted to Know About Storage But Were Too Proud To Ask: Turquoise - Where Does My Data Go?", we will take a look into the mysticism and magic of what happens when you send your data off into the wilderness. Once you click "save," for example, where does it actually go?
When we start to dig deeper beyond the application layer, we often don't understand what happens behind the scenes. It's important to understand multiple aspects of the type of storage our data goes to, along with the associated benefits and drawbacks, as well as some of the protocols used to transport it.
In this webcast we will explain:
Volatile v Non-Volatile v Persistent Memory
NVDIMM v RAM v DRAM v SLC v MLC v TLC v NAND v 3D NAND v Flash v SSDs v NVMe
NVMe (the protocol)
Many people get nervous when they see that many acronyms, but all too often they come up in conversation, and you're expected to know all of them. Worse, you're expected to know the differences between them, and the consequences of using them. Even worse, you're expected to know what happens when you use the wrong one.
We're here to help.
It's an ambitious project, but these terms and concepts are at the heart of where compute, networking and storage intersect. Having a good grasp of these concepts ties in with which type of storage networking to use, and how data is actually stored behind the scenes.
Register today to join us for this edition of the "Too Proud To Ask" series, as we work towards making you feel more comfortable in the strange, mystical world of storage. And don't let pride get in the way of asking any and all questions on this great topic. We will be there on August 1st to answer them!
Update: If you missed the live event, it's now available on-demand. You can also download the webcast slides.