Achieving 10-Million IOPS from a single VM on Windows Hyper-V

Many server workloads, for example OLTP database workloads, require high I/O throughput and low latency. As the industry moves high-end scale-up workloads into virtualized environments, it is essential for cloud providers and on-premises servers to achieve near-native performance by reducing I/O virtualization overhead, which comes mainly from two sources: DMA operations and the interrupt delivery mechanism for I/O completions. Direct PCIe NVMe device assignment techniques allow a VM to interact with the hardware directly and avoid the traditional Hyper-V para-virtualized I/O path.
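A rough way to gauge how close an assigned NVMe device gets to bare metal from inside the guest is a small synchronous random-read probe against the raw namespace. The sketch below is illustrative only, not part of the talk: the device path, block size, span and run time are assumptions, and a real measurement would use a tool such as fio with deep queues and many jobs. At queue depth 1 the result mainly exposes per-IO latency, including any residual virtualization overhead.

```c
/* Minimal synchronous random-read IOPS probe against an assigned NVMe
 * namespace. Sketch only: device path, block size and duration are
 * assumptions; real measurements would use fio with deep queues. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define DEV_PATH "/dev/nvme0n1"   /* assumed device node inside the VM   */
#define BLOCK    4096             /* 4 KiB aligned reads for O_DIRECT    */
#define SPAN     (1ULL << 30)     /* probe the first 1 GiB of the device */
#define SECONDS  5

int main(void)
{
    int fd = open(DEV_PATH, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK)) { perror("posix_memalign"); return 1; }

    srand(42);
    long long ios = 0;
    time_t end = time(NULL) + SECONDS;
    while (time(NULL) < end) {
        off_t off = ((off_t)rand() % (SPAN / BLOCK)) * BLOCK;  /* aligned offset */
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); break; }
        ios++;
    }
    printf("~%lld synchronous 4KiB random reads/s\n", ios / SECONDS);

    free(buf);
    close(fd);
    return 0;
}
```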

Accelerating Storage with RDMA

File, Block and Object storage can already take advantage of current NAND flash for better performance, but much more is possible as RDMA-based storage technology, originally developed for the HPC industry, moves into the mainstream. By enhancing a storage system's network stack with RDMA, users can see an even more dramatic improvement than by simply adding flash to their storage. The technology raises the performance of the entire storage system, allowing File, Block and Object based applications to take fuller advantage of much faster solid-state storage.
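The core of RDMA's benefit to storage is that data buffers are registered with the NIC once and then moved without CPU copies on the data path. The hedged sketch below shows only that registration step with libibverbs; device selection, buffer size and access flags are illustrative assumptions, and a real storage target would also create completion queues and queue pairs and exchange keys with its initiators.

```c
/* Sketch: register a buffer for zero-copy RDMA access with libibverbs.
 * Buffer size and access flags are illustrative; a real target would also
 * set up CQs/QPs and exchange rkeys with peers. Build with -libverbs. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);  /* first device, for illustration */
    if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);               /* protection domain */

    size_t len = 1 << 20;                                /* 1 MiB data buffer */
    void *buf = malloc(len);

    /* Pin and register the buffer so the NIC can DMA to/from it directly,
     * bypassing CPU copies on the data path. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```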

Accelerating Storage with NVM Express SSDs and P2PDMA

PCIe devices continue to get faster and more powerful. At this point, building systems in which all DMA traffic must pass through the system memory of the host CPU is becoming problematic. For this reason, considerable work has been done on both hardware standardization and software frameworks to enable Peer-to-Peer (P2P) DMA between PCIe End Points (EPs). In this talk we will present an update on the P2PDMA ecosystem, including performance results gathered from systems designed to take advantage of P2PDMA.
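On the software side, the Linux P2PDMA framework lets a driver expose part of a device BAR as peer-to-peer memory that other endpoints (for example NVMe SSDs) can DMA into directly. The fragment below is a hedged sketch of that registration path, not code from the talk: the vendor/device IDs and BAR index are placeholders, it assumes a kernel built with CONFIG_PCI_P2PDMA, and a production driver would size and manage the region according to its hardware.

```c
/* Sketch of a PCI driver probe exposing BAR memory through the Linux
 * P2PDMA framework. Vendor/device IDs and the BAR index are placeholders. */
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

#define P2P_BAR 4   /* assumed BAR carrying the device's exposed memory */

static int p2p_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    int rc = pci_enable_device(pdev);
    if (rc)
        return rc;

    /* Hand the whole BAR to the P2PDMA allocator... */
    rc = pci_p2pdma_add_resource(pdev, P2P_BAR,
                                 pci_resource_len(pdev, P2P_BAR), 0);
    if (rc)
        return rc;

    /* ...and advertise it so other subsystems (e.g. the NVMe fabrics
     * target) can allocate peer-to-peer buffers from it. */
    pci_p2pmem_publish(pdev, true);
    return 0;
}

static const struct pci_device_id p2p_ids[] = {
    { PCI_DEVICE(0x1234, 0xabcd) },   /* placeholder IDs */
    { }
};
MODULE_DEVICE_TABLE(pci, p2p_ids);

static struct pci_driver p2p_driver = {
    .name     = "p2pmem_sketch",
    .id_table = p2p_ids,
    .probe    = p2p_probe,
};
module_pci_driver(p2p_driver);

MODULE_LICENSE("GPL");
```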

Accelerating OLTP performance with NVMe SSDs

We compare the performance of multiple SSDs running OLTP applications on both MySQL Server and Percona Server. We discuss the configuration tuning needed for the server to benefit from faster storage, and present results from an implementation of the TPC-C standard. We show a paradigm shift: a typical OLTP workload running on HDDs is I/O bound, but replacing the storage with NVMe SSDs lets the same workload on an otherwise identical server deliver two orders of magnitude more throughput and become CPU bound.
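The shift from I/O bound to CPU bound follows from a back-of-the-envelope calculation. The figures below are illustrative assumptions, not measured results from the talk: a spinning disk sustains on the order of a couple hundred random IOPS, while a single NVMe SSD sustains hundreds of thousands, so the storage ceiling on transactions per second rises by roughly the same factor until the CPU becomes the limit.

```c
/* Back-of-the-envelope throughput ceiling for an I/O-bound OLTP workload.
 * All figures are illustrative assumptions, not results from the talk. */
#include <stdio.h>

int main(void)
{
    const double hdd_iops    = 200.0;      /* assumed random IOPS of one HDD     */
    const double nvme_iops   = 400000.0;   /* assumed random IOPS of one NVMe SSD */
    const double ios_per_txn = 10.0;       /* assumed random IOs per transaction  */

    printf("HDD-limited:  ~%.0f transactions/s\n", hdd_iops / ios_per_txn);
    printf("NVMe-limited: ~%.0f transactions/s\n", nvme_iops / ios_per_txn);
    printf("ratio:        ~%.0fx\n", nvme_iops / hdd_iops);
    return 0;
}
```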

A large Cluster Architecture with Efficient Caching Coherency, Intelligent Management, and High Performance for a Low-Cost Storage Node

Using a data cache coherency model based on logical-unit ownership within a cluster of storage nodes allows performance to be optimized for ultra-low latency, even on low-cost storage hardware that lacks a high-speed interconnect between nodes. This is accomplished by limiting optimal I/O access for any one logical unit to a single storage node with a local high-speed data cache. However, this also implies the use of Asymmetric Logical Unit Access (ALUA).
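To make the ownership model concrete, the sketch below shows the kind of lookup such a cluster performs when reporting path states: I/O arriving at the node that owns a logical unit is served from the local cache as active/optimized, while other nodes report active/non-optimized so initiators prefer the owner. The names and structures are illustrative assumptions, not the presented design.

```c
/* Illustrative LUN-ownership lookup for ALUA path-state reporting.
 * Structures and names are assumptions, not the presented design. */
#include <stdio.h>

enum alua_state { ACTIVE_OPTIMIZED, ACTIVE_NON_OPTIMIZED };

struct lun {
    unsigned id;
    unsigned owner_node;   /* node holding the LUN's local high-speed cache */
};

/* ALUA state a given node should advertise for a LUN it can reach. */
static enum alua_state path_state(const struct lun *l, unsigned node)
{
    return (node == l->owner_node) ? ACTIVE_OPTIMIZED : ACTIVE_NON_OPTIMIZED;
}

int main(void)
{
    struct lun lun7 = { .id = 7, .owner_node = 2 };

    for (unsigned node = 0; node < 4; node++)
        printf("node %u -> LUN %u: %s\n", node, lun7.id,
               path_state(&lun7, node) == ACTIVE_OPTIMIZED
                   ? "active/optimized (local cache)"
                   : "active/non-optimized (forwarded to owner)");
    return 0;
}
```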

A Cost Effective, High Performance, Highly Scalable, Non-RDMA NVMe Fabric

Scale-out cluster applications with large server counts require non-volatile storage performance well beyond the capabilities of legacy storage networking technologies. Until now, the only solution has been to load SSDs directly into the cluster servers. This approach delivers excellent raw storage performance, but it introduces many disadvantages, including single points of failure, severely limited configuration and provisioning flexibility, and added solution cost.

5 Ways to Convince Your CEO It’s Time for a Storage Refresh

Storage has historically stayed in its “box.” Developments in the industry were confined to a few dimensions such as cost, feeds and speeds. But in today’s digital world, it’s not enough to just deliver data: all parts of your infrastructure must be able to answer ever-present data security questions, extract valuable insights, identify sensitive information and help protect critical data from incoming threats.

NVDIMM Panel

The IT industry has made tremendous progress innovating up and down the computing stack to enable, and take advantage of, non-volatile memory (NVM). New media types are joining NAND flash, enhanced controllers and networking are being developed to unlock the latency and throughput advantages of NVM, CPU architectures are evolving, and OSes are being enhanced. All of this innovation is necessary to realize the full potential of NVM. But is it sufficient? Where are the weakest links in fully unlocking the potential of NVM?

Bridging the Gap Between NVMe SSD Performance and Scale Out Software

NVMe SSDs are an increasingly popular choice in scale-out storage for latency-sensitive workloads such as databases, real-time analytics, and video streaming. They provide significantly higher throughput and lower latency than SATA and SAS SSDs, and it is not unrealistic to expect a single device to deliver close to a million random IOs per second. However, scale-out software stacks carry a significant amount of software overhead that limits the immense potential of NVMe SSDs.
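Much of that overhead is per-IO software cost: system calls, context switches, and lock contention in the stack above the device. One common way to shrink it, shown below as a hedged sketch rather than the approach of any particular scale-out stack, is to batch submissions and completions through an asynchronous interface such as io_uring, so a single syscall drives many in-flight 4 KiB reads. The device path, queue depth and block size are illustrative assumptions.

```c
/* Sketch: batch 32 4 KiB reads through io_uring so one submission syscall
 * drives many in-flight IOs. Device path, depth and block size are
 * illustrative assumptions. Build with -luring. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define DEPTH 32
#define BLOCK 4096

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);   /* assumed device */
    if (fd < 0) { perror("open"); return 1; }

    struct io_uring ring;
    if (io_uring_queue_init(DEPTH, &ring, 0) < 0) {
        fprintf(stderr, "io_uring_queue_init failed\n");
        return 1;
    }

    void *bufs[DEPTH];
    for (int i = 0; i < DEPTH; i++) {
        if (posix_memalign(&bufs[i], BLOCK, BLOCK)) return 1;
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, bufs[i], BLOCK, (unsigned long long)i * BLOCK);
    }

    io_uring_submit(&ring);            /* one syscall submits all 32 reads */

    for (int i = 0; i < DEPTH; i++) {  /* reap completions */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        if (cqe->res < 0)
            fprintf(stderr, "read failed: %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    close(fd);
    return 0;
}
```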

Breaking the limitations of captive NVMe storage – 18M IOPs in 2u

NVMe is quickly replacing SATA and SAS as the native interface of choice for direct-attached server storage. This presentation will discuss a new, innovative storage networking solution for the next generation of NVM technologies. The talk will provide an overview of a highly scalable, Ethernet-based storage networking architecture designed for next-generation NVM SSDs with latencies of
