Author: Marty Foltyn
Company: Channel Chargers, LLC
Title: Vice President

Exceptional Agenda on Tap for 2018 Persistent Memory Summit

Marty Foltyn

Jan 18, 2018

Persistent Memory (PM) has made tremendous strides since SNIA’s first Non-Volatile Memory Summit in 2013. Renamed the Persistent Memory Summit in 2017, the event continued the buzz with more than 350 attendees and a focus turning to applications. Now in 2018, the agenda for the SNIA Persistent Memory Summit, coming up January 24 at the Westin San Jose, reflects the integration of PM across a range of organizations.

Zvonimir Bandic of Western Digital Corporation will kick off the day exploring the “exabyte challenge” of persistent memory centric architectures and memory fabrics. The fairly new frontier of Persistent Memory over Fabrics (PMoF) returns as a topic, with speakers from the OpenFabrics Alliance, Cray, Eideticom, and Mellanox. Performance is always evolving, and Micron Technologies, Enmotus, and Calypso Systems will give their perspectives. The day will also dive into the future of media with speakers from Nantero and Spin Transfer Technologies, and a panel led by HPE will review new interfaces and how they relate to PM.

A highlight of the Summit will be a panel on applications and cloud with a PM twist, featuring DreamWorks, GridGain, Oracle, and Aerospike. Leading up to that will be commentary on file systems and persistent memory from NetApp and Microsoft, and a discussion of persistent memory virtualization presented by VMware. SNIA found many users at recent events interested in persistent memory support in both Windows Server 2016 and Linux, so Microsoft and the Linux community will update us on the latest developments. Finally, you will want to know where the analysts weigh in on PM, so Coughlin Associates, Evaluator Group, Objective Analysis, and WebFeet Research will add their commentary.

During breaks and the complimentary lunch, you can tour Persistent Memory demos from the SNIA NVDIMM SIG, SMART Modular, AgigA Tech, Netlist, and Viking Technology. Make your plans to attend this complimentary event by registering here: http://www.snia.org/pm-summit. See you in San Jose!


The OpenFabrics Alliance and the Pursuit of Efficient Access to Persistent Memory over Fabrics

Marty Foltyn

Nov 13, 2017

Guest Columnist: Paul Grun, Advanced Technology Development, Cray, Inc. and Vice-Chair, OpenFabrics Alliance (OFA)

Earlier this year, SNIA hosted its one-day Persistent Memory Summit in San Jose; it was my pleasure to be invited to participate by delivering a presentation on behalf of the OpenFabrics Alliance. Check it out here. The day-long Summit program was chock full of deeply technical, detailed information about the state of the art in persistent memory technology, coupled with previews of some possible future directions this exciting technology could take. The Summit played to a completely packed house, including an auxiliary room equipped with a remote video feed. Quite the event!

But why would the OpenFabrics Alliance (OFA) be offering a presentation at a Persistent Memory (PM) Summit, you ask? Fabrics! Which just happen to be the OFA’s forte. For several years now, SNIA’s NVM Programming Model Technical Working Group (NVMP TWG) has been describing programming models designed to deliver high availability. The primary thesis is simply stated: data isn’t truly ‘highly available’ until it is stored persistently in at least two places. Hence the need to access remote persistent memory over a fabric in a highly efficient, performant manner. And that’s where the OFA comes in.

For those unfamiliar with us, the OFA develops open source network software that lets applications get the most performance possible from the network. Historically, that has meant developing libraries and kernel modules that conform to the Verbs specification as defined in the InfiniBand Architecture specifications. Over time, the suite has expanded to include software for derivative specifications such as RoCE (RDMA over Converged Ethernet) and iWARP (RDMA over TCP/IP). In today’s world, much of the work of maintaining the Verbs API has been assumed by the open source community itself. Success!

Several years ago, the OFA began an effort called the OpenFabrics Interfaces project to define a network API now known as ‘libfabric’. This API complements the Verbs API; Verbs continues into the foreseeable future as the API of choice for verbs-based fabrics such as InfiniBand. The idea was that the libfabric API would be driven mainly by the unique requirements of the consumers of network services. The result would be networking solutions that are transport independent and that meet the needs of application and middleware developers through a freely available open source API.

So, what does all this have to do with persistent memory? A great deal! By now, we have come to the realization that remote, fabric-attached persistent memory, while very much like local memory in many respects, has some unique characteristics. To state the obvious, it has persistence characteristics akin to those found in classical file systems, but it has the potential to be accessed using fast memory semantics instead of conventional file-based POSIX semantics. Accomplishing this implies a need for new features exposed to consumers through the API, giving the consumer greater control over the persistence of data written to the remote memory. Fortunately, the libfabric framework was designed from the outset for flexibility, which should make it straightforward to define the new structures needed to support persistent memory accesses over a high-performance, switched fabric.

My presentation at the Persistent Memory Summit had two main goals. The first was to introduce the OpenFabrics Alliance’s approach to API development; the second was to begin the discussion of API requirements to support persistent memory. For example, during the talk we drew a distinction between ‘Data Storage’ (what it means to access conventional storage) and ‘Data Access’ (accessing persistent memory over a fabric). As the slides in the presentation make clear, these are two very different use cases, and yet the enhanced libfabric API must support both equally well. At the end of the presentation, we shared some ideas for what a converged I/O stack designed to support both use cases might look like.

Naturally, this is just the beginning of the story, and there is much work to do. Work on the libfabric API is now underway in earnest in the OpenFabrics Interfaces Working Group (OFIWG), and as with the original libfabric work, we are beginning with a requirements-gathering phase. We want to be sure that the resulting enhancements to the libfabric API meet the needs of applications accessing remote persistent memory.

The OFA is looking forward to its presentation at the next Persistent Memory Summit, to be held January 24, 2018 at the Westin in San Jose, CA, where we will provide updates on OFA activities. Details on the Summit can be found here. Being an open source organization, the OFA welcomes input from all interested parties in our efforts to support the emergence of this exciting new technology. For more information about regular working group meetings and how to get involved, please visit the OpenFabrics website (https://openfabrics.org), or feel free to reach out to me directly: grun@cray.com.
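For readers who want a concrete feel for what accessing remote memory through libfabric looks like today, here is a minimal, hypothetical C sketch of a one-sided RMA write using the existing libfabric API. Error handling is omitted, and the peer’s raw address plus the remote region’s address and key are assumed to have been exchanged out of band; compile against libfabric (-lfabric). It deliberately stops at the initiator-side completion, because the persistence guarantees discussed above are exactly what OFIWG is still defining, so nothing here should be read as the finished PM extension.

```c
/* Hypothetical sketch: one-sided RMA write with libfabric.
 * Peer address and the remote region's (address, key) are assumed to have
 * been exchanged out of band; error handling is omitted for brevity. */
#include <stdint.h>
#include <stddef.h>
#include <rdma/fabric.h>
#include <rdma/fi_domain.h>
#include <rdma/fi_endpoint.h>
#include <rdma/fi_rma.h>

int write_to_remote_region(const void *peer_raw_addr,
                           uint64_t remote_addr, uint64_t remote_key,
                           const void *buf, size_t len)
{
    struct fi_info *hints = fi_allocinfo(), *info;
    struct fid_fabric *fabric;
    struct fid_domain *domain;
    struct fid_av *av;
    struct fid_cq *cq;
    struct fid_ep *ep;
    struct fid_mr *mr;
    struct fi_av_attr av_attr = { .type = FI_AV_MAP };
    struct fi_cq_attr cq_attr = { .format = FI_CQ_FORMAT_CONTEXT };
    struct fi_cq_entry comp;
    fi_addr_t peer;

    /* Ask for a provider offering reliable, unconnected endpoints with RMA. */
    hints->caps = FI_RMA | FI_MSG;
    hints->ep_attr->type = FI_EP_RDM;
    fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, hints, &info);

    /* Standard libfabric object hierarchy: fabric -> domain -> endpoint. */
    fi_fabric(info->fabric_attr, &fabric, NULL);
    fi_domain(fabric, info, &domain, NULL);
    fi_endpoint(domain, info, &ep, NULL);

    /* Address vector resolves the peer; completion queue reports results. */
    fi_av_open(domain, &av_attr, &av, NULL);
    fi_cq_open(domain, &cq_attr, &cq, NULL);
    fi_ep_bind(ep, &av->fid, 0);
    fi_ep_bind(ep, &cq->fid, FI_TRANSMIT);
    fi_enable(ep);
    fi_av_insert(av, peer_raw_addr, 1, &peer, 0, NULL);

    /* Register the local source buffer for use in RMA writes. */
    fi_mr_reg(domain, buf, len, FI_WRITE, 0, 0, 0, &mr, NULL);

    /* One-sided write into the remote region. */
    fi_write(ep, buf, len, fi_mr_desc(mr), peer, remote_addr, remote_key, NULL);

    /* Wait for the initiator-side completion. Note: this only says the local
     * buffer may be reused; it does NOT yet guarantee the data is persistent
     * at the target. Defining that guarantee is the requirements work above. */
    while (fi_cq_read(cq, &comp, 1) < 1)
        ;

    fi_close(&mr->fid);
    fi_close(&ep->fid);
    fi_close(&cq->fid);
    fi_close(&av->fid);
    fi_close(&domain->fid);
    fi_close(&fabric->fid);
    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}
```

The point of the sketch is the gap it leaves open: today a transmit completion tells you the initiator is done, not that the bytes are durable at the target, and closing that gap is what the persistent memory extensions to libfabric are meant to address.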


Around the World, It’s a Persistent Memory Summer

Marty Foltyn

Jun 19, 2017

This summer, join SNIA as it showcases members’ industry activity to advance the convergence of storage and memory.

SNIA is participating in the first annual European In-Memory Computing Summit, June 20-21, 2017 at the Movenpick Hotel in Amsterdam. SNIA Europe Vice-Chair and SNIA Solid State Storage Initiative (SSSI) Co-Chair Alex McDonald of NetApp keynotes a session on SNIA and Persistent Memory, highlighting SNIA work on the NVM Programming Model and on persistent memory solutions available today, and SNIA is a sponsor in the exhibit hall.

Alex’s presentation and SNIA’s booth presence are just one part of SNIA’s many outreach and education activities on persistent memory taking place this summer. Rob Peglar, SNIA Board of Directors member, was a highlight of Storage Field Day earlier this month, engaging with tech’s leading bloggers on persistent memory advances; watch the day’s video on demand. SNIA’s NVDIMM Special Interest Group exhibited at the JEDEC Server Forum, presenting an application demonstration using multiple member companies’ JEDEC-compliant NVDIMM-Ns. Eden Kim, chair of SNIA’s Solid State Storage Technical Work Group, speaks later this week at the China Flash Summit on SNIA’s work in persistent memory and solid state storage performance.

In August, SNIA will have a major presence at Flash Memory Summit, with a dedicated persistent memory conference track, an NVDIMM Forum, and a persistent memory demonstration area. Stay tuned for all the details coming in July.

Finally, SNIA will continue its interest in containers and persistent memory with a SNIA BrightTALK webcast on July 27 at 10:00 am PT/1:00 pm ET. Registration is now open to join SNIA experts Arthur Sainio, SNIA NVDIMM SIG Co-Chair, Chad Thibodeau, SNIA Cloud Storage member, and Alex McDonald, Co-Chair of the SNIA Solid State Storage and Cloud Storage Initiatives, to find out what customers, storage developers, and the industry want to see to fully unlock the potential of persistent memory in a container environment.


Your Questions Answered on Non-Volatile DIMMs

Marty Foltyn

Apr 3, 2017

by Arthur Sainio, SNIA NVDIMM SIG Co-Chair, SMART Modular

SNIA’s Non-Volatile DIMM (NVDIMM) Special Interest Group (SIG) had a tremendous response to its most recent webcast, NVDIMM: Applications are Here!, which you can view on demand. Viewers asked many questions during the webcast. In this blog, the NVDIMM SIG answers those questions and shares the SIG’s knowledge of NVDIMM technology. Have a question? Send it to nvdimmsigchair@snia.org.

1. What about 3D XPoint? How will this technology impact the market?

3D XPoint DIMMs will likely have a significant impact on the market. They are fast enough to use as a slower tier of memory between NAND and DRAM. It is still too early to tell, though.

2. What are good benchmark tools for DAX, and what are the differences between NVML applications and DAX-aware applications?

For benchmark tools, please see the answer to question 11. NVML applications are written specifically for NVM (Non-Volatile Memory) and may use the open source NVML libraries (http://pmem.io/nvml); a minimal usage sketch appears after this Q&A. DAX is a file system feature that avoids the page cache. DAX-aware applications know that their reads and writes go directly to the underlying NVM without being cached.

3. On the slide about NUMA, there was a mention of accessing NVDIMMs from a CPU on a different memory bus. The part about larger access times was clear enough, but I came away with the impression that there is a correctness issue with handling of the ADR signal as well. Please clarify.

If the question is whether the NUMA-remote CPU will successfully flush ADR-protected buffers to memory connected to the NUMA-near CPU, then yes, there is the potential for a problem in this area. However, ADR is an Intel feature that is not specified in the JEDEC NVDIMM standard, so this is an Intel-specific implementation question that needs to be posed to Intel.

4. How common is an NVDIMM-compatible BIOS? How would one check?

They are becoming more common all the time. There are at least 8 server/storage systems from Intel and 22 from Supermicro that support NVDIMMs, and several other motherboard vendors have systems that support NVDIMMs. Most of the NVDIMM vendors post these lists on their websites.

5. How does a system go into save? What exactly does the BIOS have to do before SAVE is asserted?

The BIOS does the initial check that the NVDIMM has a backup power supply for power loss before it ARMs the module. The BIOS also makes sure that any RESTORE of previously saved data is done properly. This involves setting the appropriate registers in the NVDIMM module, all of which happens during boot-up initialization. On AC power loss, the PCH (Platform Controller Hub) detects the condition and initiates what is called the ADR (Asynchronous DRAM Refresh) sequence, terminating in the assertion of the SAVE signal by the CPLD. Without the BIOS ARM-ing the NVDIMM module, the module will not respond to the SAVE signal in a power loss situation.

6. Could you paint the picture of hardware costs at this point? How soon will NVDIMM-enabled systems be available to "the rest of us"?

NVDIMMs use DRAM, NAND flash, a controller, and many other parts beyond what is used on standard RDIMMs. On that basis, the cost of an NVDIMM-N is higher than that of a standard RDIMM. NVDIMM-enabled systems have been available for several years and are shipping now.

7. Does RHEL 7.3 easily support Linux Kernel 4.4?
RHEL 7.3 is still using the 3.10 version of the Linux kernel. For RHEL-related information, please check with Red Hat; you can also refer to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.3_Release_Notes/index.html. The distribution has drivers to support persistent memory and also packages the persistent memory libraries.

8. What are the usual sizes for NVDIMMs available today?

4GB, 8GB, 16GB, and 32GB.

9. Are there any case studies of each of the NVDIMM-N applications mentioned?

You can find some examples of case studies at these websites: https://channel9.msdn.com/events/build/2016/p466 and https://msdn.com/events/build/2016/p470

10. What is the difference between pmem lib/PMFS in Linux and a DAX-enabled file system (like ext4 with DAX)?

A DAX-based file system avoids the kernel page cache layer for its write data, so all of its write operations go directly to the underlying storage unit. One important thing to understand is that a DAX file system can still use block drivers to access its underlying storage. PMFS is a file system optimized for persistent memory: it completely avoids both the page cache and the block drivers and is designed to provide efficient access to persistent memory that is directly accessible via CPU load/store instructions. Refer to https://github.com/linux-pmfs/pmfs for more details. PMFS, as of now, is only in experimental stages.

11. What tool is used to measure performance?

The performance measurement depends on what kind of application workload is to be characterized. This is a very complex topic, and no single benchmarking tool is good for all workload characteristics. For file system performance, SPEC SFS, Bonnie++, IOzone, FFSB, FileBench, and similar tools work well. SysBench is good for a variety of performance measurements. The Phoronix Test Suite (http://www.phoronix.com/scan.php?page=home) has a variety of tools for Linux-based performance measurements.

12. How similar do you expect the OS support for -P to be to the support for -N? I don’t see a lot of need for differences at this level (though there certainly will be differences in the BIOS).

As of now, the open source libraries (http://pmem.io) are designed to be agnostic about the underlying memory type; it is simply classified as persistent memory, meaning it could be "-N", "-P", or something else. The libraries are written for user space and assume that any underlying kernel support is transparent. The "-P" type is intended to support both DRAM and persistent access at the same time, which might require a separate set of drivers in the kernel.

13. Does the PM-based file system appear to be block addressable from the application?

A file system creates a layer of virtualization to support logical entities such as volumes, directories, and files. Typically, an application running in user space has no knowledge of the underlying mechanisms a file system uses to access its storage units, such as persistent memory. The access a file system provides to an application is typically a POSIX file system interface: open, close, read, write, seek, and so on.

14. Is ADR a pin?

ADR stands for Asynchronous DRAM Refresh. ADR is a feature supported on Intel chipsets that triggers a hardware interrupt to the memory controller, which flushes the write-protected data buffers and places the DRAM in self-refresh. This process is critical during a power loss event or system crash to ensure the data is in a "safe" state when the NVDIMM takes control of the DRAM to back it up to flash. Note that ADR does not flush the processor cache; to do so, an NMI routine would need to be executed prior to ADR.
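Several of the answers above (questions 2, 10, and 14) mention the open-source NVML libraries at pmem.io, DAX mappings, and the fact that ADR does not flush the processor caches. As a rough, hypothetical illustration of how those pieces fit together, here is a minimal libpmem program; the file path and size are placeholders for whatever DAX-mounted file system you have, and you would link with -lpmem.

```c
/* Minimal sketch: writing data durably through a DAX-mounted file using
 * the NVML/PMDK libpmem library.  Path and size are placeholders. */
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

#define POOL_PATH "/mnt/pmem0/example"   /* hypothetical DAX-mounted file */
#define POOL_SIZE (4 * 1024 * 1024)      /* 4 MiB */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (if needed) and memory-map the file.  On a DAX file system
     * this maps the NVDIMM directly, with no page cache in the path. */
    char *addr = pmem_map_file(POOL_PATH, POOL_SIZE, PMEM_FILE_CREATE,
                               0666, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    const char msg[] = "hello, persistent memory";
    memcpy(addr, msg, sizeof(msg));

    /* Make the store durable.  On real persistent memory this flushes the
     * CPU caches for the range (the processor-cache flush that ADR alone
     * does not cover); on a non-pmem mapping, fall back to msync(). */
    if (is_pmem)
        pmem_persist(addr, sizeof(msg));
    else
        pmem_msync(addr, sizeof(msg));

    pmem_unmap(addr, mapped_len);
    return 0;
}
```

The is_pmem check is the key design point: pmem_persist() is only valid on a true persistent memory mapping, where a user-space cache flush is enough to reach the persistence domain, while any other mapping still needs the traditional msync() path.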


How Many IOPS? Users Share Their 2017 Storage Performance Needs

Marty Foltyn

Mar 24, 2017

New on the Solid State Storage website is a whitepaper from analysts Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis detailing IT managers’ requirements for storage performance. The paper examines how requirements have changed over a four-year period for a range of applications, including databases, online transaction processing, cloud and storage services, and scientific and engineering computing. Users disclose how many IOPS are needed, how much storage capacity is required, and what system bottlenecks prevent them from getting the performance they need. You’ll want to read this report before signing up for a SNIA BrightTALK webcast at 2:00 pm ET/11:00 am PT on May 3, 2017, where Tom and Jim will discuss their research and answer questions like:
  • Does a certain application really need the performance of an SSD?
  • How much should a performance SSD cost?
  • What have other IT managers found to be the right balance of performance and cost?
Register for the “How Many IOPS? Users Share Their 2017 Storage Performance Needs” webcast at https://www.brighttalk.com/webcast/663/252723


Cast Your Vote on November 8 for the Magic and Mystery of In-Memory Apps!

Marty Foltyn

Nov 2, 2016

It's an easy "Yes" vote for this great webcast from the SNIA Solid State Storage Initiative on the Magic and Mystery of In-Memory Apps! Join us on Election Day, November 8, at 1:00 pm ET/10:00 am PT to learn about today's market and the disruptions that happen when combining big data (petabytes) with in-memory/real-time requirements. You'll understand the interactions with Hadoop/Spark, Tachyon, SAP HANA, NoSQL, and the related infrastructure of DRAM, NAND, 3D XPoint, NVDIMMs, and high-speed networking, and learn what happens to infrastructure design and operations when "tiered memory" replaces "tiered storage".

Presenter Shaun Walsh of G2M Communications is an expert in memory technology, and a great speaker! He'll share what you need to know about evaluating, planning, and implementing in-memory computing applications, and give you the framework to evaluate and plan for your own adoption of in-memory computing.

Register at: https://www.brighttalk.com/webcast/663/230103
