Digital storage is a critical technology for professional Media and Entertainment (M&E). With the COVID-19 pandemic, much M&E work went remote, enabled by cloud-based services and private and public cloud storage. NVMe SSDs and emerging memories are seeing increased use in high-resolution, high-frame-rate, high-dynamic-range video content workflows. Between 2019 and 2025, the industry’s required storage capacity is expected to grow about 3X, and the storage capacity shipped per year about 3.4X. Cloud storage capacity for the M&E industry will increase 13X over the same period.
This webinar looks at the trends driving demand for digital storage in all parts of the M&E industry, with data from the 2020 Digital Storage in Media and Entertainment report from Coughlin Associates presented by Tom Coughlin, who also serves as the volunteer Education Chair for the SNIA Compute, Memory, and Storage Initiative.

The Cloud Data Management Interface (CDMI™) International Standard is intended for application developers who are implementing cloud storage systems, and who are developing applications to manage and consume cloud storage. It documents how to access cloud storage namespaces and how to manage the data stored in these namespaces. In this webcast we’ll provide an overview of the CDMI standard and cover CDMI 2.0:
- Support for encrypted objects
- Delegated access control
- General clarifications
- Errata contributed by vendors implementing the CDMI standard
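To make the namespace access concrete, here is a minimal sketch of reading a data object through CDMI’s HTTP binding. The host, container and object names are illustrative; the `Accept` media type and version header follow the CDMI specification, but check your server’s documentation for the exact values it accepts.

```python
# Sketch of a CDMI data-object read over HTTP. The server address and
# /my_container/my_object path are hypothetical; the Accept media type
# and X-CDMI-Specification-Version header come from the CDMI standard.
def cdmi_get_request(host: str, path: str, version: str = "2.0.0") -> str:
    """Build the raw HTTP request text for reading a CDMI data object."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Accept: application/cdmi-object\r\n"
        f"X-CDMI-Specification-Version: {version}\r\n"
        f"\r\n"
    )

req = cdmi_get_request("cloud.example.com", "/my_container/my_object")
print(req)
```

The response body would be a JSON representation of the object, including its metadata and value, which is what lets generic clients manage data stored in any conforming cloud storage namespace.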

There is a new wave of cognitive services based on video and image analytics, leveraging the latest in machine learning and deep learning. In this webcast, we will look at some of the benefits and factors driving this adoption, as well as explore compelling projects and the components required for a successful video-based cognitive service. This includes work in the open source community providing methods and frameworks, and standards in development to unify the ecosystem and enable interoperability of models and architectures. Finally, we’ll cover the data required to train such models, where it comes from, and how it needs to be prepared.

In modern analytics deployments, latency is the fatal flaw that limits the efficacy of the overall system. Solutions move at the speed of decision, and microseconds could mean the difference between success and failure against competitive offerings. Artificial Intelligence, Machine Learning, and In-Memory Analytics solutions have significantly reduced latency, but the sheer volume of data and its potential broad distribution across the globe prevents a single analytics node from efficiently harvesting and processing data.
This panel discussion will feature industry experts discussing the different approaches to distributed analytics in the network and storage nodes.

Organizations inevitably store multiple copies of the same data. Users and applications store the same files over and over, intentionally or inadvertently. Developers, testers and analysts keep many similar copies of the same data. And backup programs copy the same or similar files daily, often to multiple locations or storage devices. It’s not unusual to end up with some data replicated thousands of times.
So how do we stop the duplication madness? Join this webcast where we’ll discuss how to reduce the number of copies of data that get stored, mirrored, or backed up.
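As a concrete illustration of the idea, here is a minimal sketch of content-based deduplication, assuming a toy in-memory store (the `DedupStore` class and its method names are hypothetical): identical content is detected by hashing, so a blob is kept only once no matter how many file names reference it.

```python
# Toy content-addressed store: blobs are indexed by their SHA-256
# digest, so identical content is stored exactly once even when many
# names (originals, copies, backups) point to it.
import hashlib

class DedupStore:
    def __init__(self):
        self._blocks = {}   # digest -> content, stored once
        self._names = {}    # file name -> digest

    def put(self, name: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._blocks.setdefault(digest, data)  # no-op if already stored
        self._names[name] = digest
        return digest

    def get(self, name: str) -> bytes:
        return self._blocks[self._names[name]]

    def unique_copies(self) -> int:
        return len(self._blocks)

store = DedupStore()
store.put("report.doc", b"quarterly numbers")
store.put("backup/report.doc", b"quarterly numbers")  # same content
store.put("notes.txt", b"meeting notes")
print(store.unique_copies())  # 2: the duplicate is stored only once
```

Production deduplication systems apply the same principle at the block or chunk level rather than per file, which also catches near-duplicate files that share most of their content.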

Whether traveling by car, plane or train, it is critical to get from here to there safely and securely. Just like you, your data must be safe and sound as it makes its journey across an internal network or to an external cloud storage device. In this webcast, we'll cover what the threats are to your data as it's transmitted, how attackers can interfere with data along its journey, and methods of putting effective protection measures in place for data in transit.
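For a taste of the protection-measures side, here is a minimal sketch using Python’s standard `ssl` module to configure TLS for data in transit. The settings shown are stdlib defaults or documented options; no real connection is made.

```python
# Sketch: preparing a TLS client context for protecting data in transit.
# ssl.create_default_context() already enables the two core defenses:
# certificate verification (authenticating the far end) and host name
# checking (blocking simple man-in-the-middle redirection).
import ssl

ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate chain must validate
print(ctx.check_hostname)                    # server name must match the cert

# Refuse legacy protocol versions with known weaknesses.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The same context can then be handed to `http.client`, `urllib`, or a raw socket wrap, so every byte leaving the application travels encrypted and authenticated.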

The broad adoption of 5G, the Internet of Things (IoT) and edge computing will reshape the nature and role of enterprise and cloud storage over the next several years. What building blocks, capabilities and integration methods are needed to make this happen?
Join this webcast for a discussion on:
- With 5G, IoT and edge computing - how much data are we talking about?
- What will be the first applications leading to collaborative data-intelligence streaming?
- How can low latency microservices and AI quickly extract insights from large amounts of data?
- What are the emerging requirements for scalable stream storage - from petabytes to zettabytes?
- How can yesterday’s object-based batch analytic processing (Hadoop) and today’s streaming messaging capabilities (Apache Kafka and RabbitMQ) work together?
- What are the best approaches for getting data from the Edge to the Cloud?

Electronic payments, once the purview of a few companies, have expanded to include a variety of financial and technology companies. Internet of Payment (IoP) enables payment processing over many kinds of IoT devices and has also led to the emergence of the micro-transaction. The growth of independent payment services offering e-commerce solutions, such as Square, and the entry of new ways to pay, such as Apple Pay, mean that a variety of devices and technologies have also come into wide use.
In this talk we look at the impact of these new developments across multiple use cases and how they affect not only the consumers driving this behavior but also the underlying infrastructure that supports and enables it.

The pandemic has taught data professionals one essential thing: data is like water; when it escapes, it reaches every aspect of the community it inhabits. This fact became apparent when the general public gained access to statistics, assessments, analyses and even medical journals related to the pandemic, at a scale never seen before.
Insight is understanding information in context well enough to go beyond the facts presented and make reasonable predictions and suppositions about new instances of that data.
Having access to data does not automatically grant the reader the ability to interpret that data or derive insight from it. It is challenging even to judge the accuracy or value of that data.
The skill required is known as data literacy, and in this presentation, we will look at how access to one data source will inevitably drive the need to access more.

NVMe over Fabrics technology is gaining momentum and traction in data centers, but there are three kinds of Ethernet-based NVMe over Fabrics transports: iWARP, RoCEv2 and TCP. How do we optimize NVMe over Fabrics performance with different Ethernet transports?
This discussion won’t tell you which transport is the best. Instead, we unfold the performance of each transport and explain what it takes for each to achieve its best performance, so that you can choose the right transport for your NVMe over Fabrics solutions.
