Next generation of ecosystem storage management

With today's hyperscale datacenters, managing multi-vendor storage hardware with one simple, user-friendly tool is every datacenter administrator's desire. The server and storage industries are trying to solve this common problem by providing a standard way of managing storage. DMTF and SNIA spent a decade attempting to standardize storage management with the CIM and SMI-S standards. Having reviewed the lessons learned over that decade, DMTF and SNIA have now come up with Redfish and Swordfish: simplified standards, easy to implement and use, for the next generation of storage management.
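
As a rough illustration of the model, the sketch below walks a Redfish/Swordfish service with plain HTTP and JSON. The host name and credentials are placeholders; only the /redfish/v1 service root and the @odata.id links between resources are taken from the published schemas.

    # A minimal sketch of walking a Redfish/Swordfish service with plain HTTP.
    # The host name and credentials below are hypothetical; the /redfish/v1
    # service root and the @odata.id links come from the published schemas,
    # and the same link-walking pattern reaches the Swordfish storage resources.
    import requests

    BASE = "https://array.example.com"            # hypothetical management endpoint
    AUTH = ("admin", "password")                  # hypothetical credentials

    def get(path):
        """GET a Redfish resource and return its JSON body."""
        resp = requests.get(BASE + path, auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    root = get("/redfish/v1/")                    # service root advertises the collections
    systems = get(root["Systems"]["@odata.id"])   # follow links rather than hard-coding paths
    for member in systems["Members"]:
        system = get(member["@odata.id"])
        print(system.get("Name"), system.get("Status", {}).get("Health"))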

Next Generation Data Center: Hyperconverged Architectures' Impact on Storage

A modern data center typically contains a number of specialized storage systems that provide centralized storage for a large collection of data center applications. These specialized systems were designed and implemented to solve the problems of scalable storage, 24x7 data access, centralized data protection, centralized disaster protection strategies, and more.

New Fresh Storage Approach for New IT Challenges

With a design started in 2006, OpenIO is a new flavor in the dynamic object storage market segment. Beyond Ceph and OpenStack Swift, OpenIO is the latest player to arrive in that space. The product relies on an open-source core object storage engine with several object APIs, file sharing protocols, and application extensions. The inventors of the solution took a radically new approach to the challenges of large-scale environments. Among other things, the product avoids the data rebalancing that consistent-hashing-based systems trigger whenever the cluster changes.
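
To make the rebalancing point concrete, here is a toy sketch (not OpenIO code) that hashes object keys onto a ring of nodes and counts how many objects change owner when one node is added; this is the kind of data movement the OpenIO approach is meant to avoid.

    # Toy illustration of why placement based on consistent hashing moves data
    # when the cluster changes: adding one node changes the owner of a fraction
    # of existing objects, which then have to be rebalanced onto the new node.
    import hashlib

    def owner(key, nodes):
        """Map an object key onto one node with a simple hash ring."""
        ring = sorted((int(hashlib.md5(n.encode()).hexdigest(), 16), n) for n in nodes)
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        for point, node in ring:
            if h <= point:
                return node
        return ring[0][1]                          # wrap around the ring

    keys = [f"object-{i}" for i in range(10000)]
    before = {k: owner(k, ["node1", "node2", "node3"]) for k in keys}
    after = {k: owner(k, ["node1", "node2", "node3", "node4"]) for k in keys}
    moved = sum(1 for k in keys if before[k] != after[k])
    print(f"{moved} of {len(keys)} objects change owner when node4 joins")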

Providing Efficient Storage Operations for Both Data Centers and Hyperscale Applications

The industry needs a protocol that provides efficient storage operations for both data centers and hyperscale applications. Data centers are willing to pay more for more complex interfaces that provide reliable data response times, while hyperscalers want the lowest cost and the greatest flexibility to optimize their cost/performance ratios. Two paths are under consideration to provide deterministic read latency. One is I/O determinism, in which the host controls the timing of reads vs. writes to storage elements specified by the controller.
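
As a toy model of that first path, the sketch below shows a host that tracks which storage sets are currently absorbing writes and keeps reads away from them; the set identifiers and methods are invented for illustration and are not an NVMe interface.

    # Toy model (not an NVMe implementation) of the I/O-determinism idea: the
    # host knows which storage "set" each region belongs to and avoids issuing
    # reads to a set while that set is absorbing writes, so read latency is not
    # disturbed by background write activity.
    from collections import defaultdict

    class Host:
        def __init__(self):
            self.writing = defaultdict(bool)       # set id -> currently writing?

        def start_writes(self, set_id):
            self.writing[set_id] = True

        def finish_writes(self, set_id):
            self.writing[set_id] = False

        def read(self, set_id):
            if self.writing[set_id]:
                return "deferred"                  # steer the read elsewhere or wait
            return "served with deterministic latency"

    host = Host()
    host.start_writes(0)
    print("read from set 0:", host.read(0))        # deferred: writes in flight
    print("read from set 1:", host.read(1))        # served: set 1 is quiet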

Preparing your storage for handling even more capacity - again

While capacity planning and management is an old problem, several new challenges require significant enhancements to the storage controller. One example is the growing amount of storage relative to the available amount of RAM. This session will cover several challenges and solutions that will help keep your controller prepared for tomorrow’s capacity.
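
A back-of-the-envelope sketch of that RAM problem, with all numbers hypothetical: if the controller keeps even a small in-memory record per block, per-block metadata grows linearly with capacity while RAM does not.

    # Back-of-the-envelope arithmetic (all numbers hypothetical) showing how
    # per-block metadata scales with managed capacity.
    TIB = 2**40

    capacity_tib = 1024                            # hypothetical: 1 PiB behind one controller
    block_size = 4 * 2**10                         # hypothetical: 4 KiB blocks
    bytes_per_entry = 16                           # hypothetical: fingerprint + location per block

    blocks = capacity_tib * TIB // block_size
    metadata_gib = blocks * bytes_per_entry / 2**30
    print(f"{blocks:,} blocks -> {metadata_gib:,.0f} GiB of per-block metadata")
    # With these numbers the table alone needs ~4 TiB of RAM, which is why
    # controllers page, compress, or restructure such metadata instead.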

Persistent Memory over Fabrics – an Application-centered View

This session begins with a short discussion between SNIA and the OpenFabrics Alliance exploring the relationship between the two organizations from the perspective of the consumer of fabric-attached persistent memory services. The session will touch on the role of SNIA’s NVM Programming Model TWG in defining the set of services exposed to the consumer; the OFA portion of the talk will explore possible directions for network APIs that deliver those services to the consumer.
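
As a rough local analogy of the map/modify/sync usage style the NVM Programming Model describes, the sketch below memory-maps an ordinary file and flushes stores to it. Real persistent memory, and fabric-attached persistent memory in particular, sits behind different plumbing, so this shows only the shape of the consumer-visible model.

    # Rough local analogy (ordinary file mmap, not real persistent memory or an
    # RDMA fabric) of the map/modify/sync style of byte-addressable persistence.
    import mmap

    path = "/tmp/pmem_demo.bin"                    # hypothetical backing file
    with open(path, "wb") as f:
        f.truncate(4096)                           # reserve one page

    f = open(path, "r+b")
    buf = mmap.mmap(f.fileno(), 4096)              # "map": get load/store access
    buf[0:13] = b"hello, pmem!\n"                  # "modify": plain byte stores
    buf.flush()                                    # "sync": make the stores durable
    buf.close(); f.close()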

Persistent Memory Media

Persistent memory technology continues to develop. There are today several ways of providing memory that is both byte addressable and persistent. This panel of technologists will discuss several promising emerging media technologies. They will explore how new applications such as artificial intelligence and machine learning demand higher-density media at lower power and lower cost, and how memory technologies being implemented today, as well as forthcoming products, will address the cost/performance balance.

Performance Analysis of the Peer Fusion File System (PFFS)

The PFFS is a POSIX-compliant parallel file system capable of high resiliency and scalability. User data is dispersed across a cluster of peers with no replication. This is an analysis of the performance metrics obtained as we ramped up the number of peers in the cluster. For each cluster configuration we adjusted the allowable number of peer failures (the FEC, or Forward Error Correction, ratio) from 14% to 77% of the cluster and measured the I/O performance of the cluster. Write operations consistently exceeded 700 MB/s even with 77% of the peers faulted.
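
The arithmetic below is a generic erasure-coding overhead sketch rather than PFFS internals, and the 22-peer cluster size is hypothetical; it shows how the tolerated-failure fraction translates into raw capacity consumed per user byte.

    # Generic erasure-coding overhead sketch (numbers illustrative, not PFFS
    # internals): with k data peers and m parity peers, the cluster survives m
    # peer failures and stores (k + m) / k raw bytes per user byte.
    def overhead(total_peers, failure_fraction):
        m = round(total_peers * failure_fraction)  # parity peers (tolerated failures)
        k = total_peers - m                        # data peers
        return m, (k + m) / k

    for frac in (0.14, 0.50, 0.77):
        m, factor = overhead(22, frac)             # 22-peer cluster is hypothetical
        print(f"tolerate {m} failures of 22 peers -> {factor:.2f}x raw per user byte")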

Parallelizing a Distributed Testing Environment

Ensuring software correctness is important in all development environments, but it is critical when developing systems that store mission-critical data. A common bottleneck in the development cycle is the turn-around time for automated regression tests. Yet as products mature, lines of code increase, and features are added, the complexity and number of tests required tend to grow dramatically.
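
A minimal sketch of the parallelization idea, with hypothetical suite names and runner script: independent suites are fanned out across worker processes instead of being run serially, so wall-clock turn-around approaches the longest single suite rather than the sum of all of them.

    # Minimal sketch of fanning independent regression suites out across worker
    # processes. The suite names and the ./run_tests.sh runner are hypothetical.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor, as_completed

    SUITES = ["snapshots", "replication", "failover", "quotas"]   # hypothetical suites

    def run_suite(name):
        """Run one self-contained suite and report whether it passed."""
        result = subprocess.run(["./run_tests.sh", name], capture_output=True)
        return name, result.returncode == 0

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(run_suite, s) for s in SUITES]
            for f in as_completed(futures):
                name, ok = f.result()
                print(f"{name}: {'PASS' if ok else 'FAIL'}")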
