SNIA Developer Conference September 15-17, 2025 | Santa Clara, CA
Understanding the diverse mount options in the Linux SMB client can be challenging for both developers and users. This presentation aims to demystify this complexity by explaining the rationale behind the extensive list of options, their pros and cons, and how to apply them to common use cases and workloads, from data caching and metadata caching options to security settings, standard protocol-related configurations, and the many performance enhancements.
Attendees will gain a comprehensive understanding of how each option impacts SMB file system behavior. Through practical examples and discussions, this talk will equip participants with the knowledge needed to navigate and optimize Linux SMB client mount options effectively.
List and describe the mount options available in the Linux SMB client, the rationale behind them, and their typical use cases.
Choose SMB mount option configurations that optimize performance for specific workloads, and tune Linux SMB client settings for enhanced data and metadata caching.
Evaluate the advantages and disadvantages of different mount options and apply practical examples, such as the sketch below, to configure SMB client mounts effectively in real-world scenarios.
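To make the option families above concrete, here is a minimal sketch assuming a Kerberos-ready client; the server name, share, mount point, and the specific values chosen are illustrative assumptions, while the option names themselves (vers, sec, cache, actimeo, mfsymlinks) are documented in mount.cifs(8):

    #!/usr/bin/env python3
    """Minimal sketch: assemble and run a Linux SMB (cifs.ko) mount command.
    Share path, mount point, and option values are illustrative assumptions."""
    import subprocess

    SHARE = "//fileserver.example.com/projects"   # assumed share path
    MOUNTPOINT = "/mnt/projects"                  # assumed mount point

    options = ",".join([
        "vers=3.1.1",     # negotiate the SMB3.1.1 dialect
        "sec=krb5",       # Kerberos authentication
        "cache=strict",   # strict data caching, honoring oplocks/leases
        "actimeo=30",     # cache attributes (metadata) for up to 30 seconds
        "mfsymlinks",     # Minshall+French symlink emulation
    ])

    # Equivalent to: mount -t cifs //server/share /mnt/projects -o <options>
    subprocess.run(["mount", "-t", "cifs", SHARE, MOUNTPOINT, "-o", options],
                   check=True)

Here cache= controls data caching behavior while actimeo bounds how long attribute (metadata) results are cached; workloads with heavy metadata churn often tune these two families first.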
For over nine years, Microsoft Azure has provided fully managed file shares in the cloud.
Azure Files provides SMB3, NFS 4.1 and REST-based access to file shares.
This talk will present the evolution of the Azure Files architecture to serve applications with higher performance and scale needs, built on the Azure Storage architecture under the hood rather than on a conventional file system -- let alone NTFS. We will focus on the features that provide availability and reliability guarantees despite the constant churn of underlying hardware failing and needing replacement. By leveraging the Continuous Availability features of SMB3, users can access always-available, highly reliable file shares. We will also take a deep dive into our approach to achieving security at cloud scale.
This talk will highlight recently added features and the engineering challenges of making significant changes and additions to data schemas and the code that manipulates them, while still allowing access to those many petabytes of data and without breaking the semantics that applications depend on. Additionally, this talk will discuss the significance of receiving an acknowledgment for write requests from the Azure file server, how we rebuild state to continue serving data when a client reconnects, and customer-focused challenges and how we overcame them.
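As a purely illustrative sketch of the client-side discipline the abstract alludes to (this is not Azure Files code; the class, the pending queue, and the send callback are assumptions for illustration), a write is treated as durable only once the server acknowledges it, and anything un-acked is replayed after session state is rebuilt on reconnect:

    """Illustrative sketch only: buffer writes until the server acks them,
    and replay un-acked writes once the connection is re-established."""
    from collections import OrderedDict

    class ResilientWriter:
        def __init__(self, send_fn):
            self.send = send_fn            # callable(offset, data) -> True once acked
            self.pending = OrderedDict()   # offset -> data, awaiting a server ack

        def write(self, offset, data):
            self.pending[offset] = data    # not durable until the server says so
            if self.send(offset, data):    # an ack means the write is persisted
                self.pending.pop(offset, None)

        def on_reconnect(self):
            # After session and handle state are rebuilt, replay anything the
            # server never acknowledged before the connection dropped.
            for offset, data in list(self.pending.items()):
                if self.send(offset, data):
                    self.pending.pop(offset, None)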
Discover the latest advancements in Linux file access, and explore recent enhancements to the Linux SMB3.1.1 client. It has been another great year of progress in improving access to remote storage from Linux, whether in the cloud or to the wide variety of SMB3 file servers (Azure, Windows, Samba, ksmbd, NetApp, Macs and many others). Many improvements have been made to access remote data securely, reliably and efficiently. The Linux SMB3.1.1 client, cifs.ko, continues to be one of the most active filesystems in Linux.
This presentation will cover new security, performance and reliability features recently added to the Linux client, as well as new features you can expect to see over the coming year. Whether accessing data from the smallest devices or the largest (and even the cloud), these improvements are very exciting. Over the past year, new security features have been added and performance has improved, as has support for folios and the new Linux memory management model. Directory caching is now faster and more flexible, more POSIX/Linux compatibility features have been added, and support for swapfiles over SMB3.1.1 mounts has improved, as has support for special Linux file types.
As we add support for new authentication options for Linux (not just Kerberos and NTLMSSP), for SMB3.1.1 over QUIC (using the exciting new Linux kernel QUIC driver), and for stronger and faster packet signing, SMB3.1.1 will be able to better address evolving security requirements, not just on-prem but also in the cloud. The protocol already supports extremely strong encryption options, as well as man-in-the-middle attack prevention. This presentation will describe many of these SMB3.1.1 features and improvements, and how best to use them to improve your workloads.
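As one hedged example of how several of these security and performance features surface at mount time, the sketch below uses an assumed share path and mount point; the options themselves (sec=krb5i for Kerberos with packet signing, seal for SMB3 encryption, multichannel and max_channels) are documented cifs.ko options, and /proc/fs/cifs/DebugData is where the client reports its negotiated state:

    """Minimal sketch, assuming a Kerberos-ready client: mount with
    security-oriented cifs.ko options, then dump the negotiated state."""
    import subprocess

    SHARE = "//fileserver.example.com/secure"   # assumed share path
    MOUNTPOINT = "/mnt/secure"                  # assumed mount point

    options = ",".join([
        "vers=3.1.1",      # SMB3.1.1 dialect
        "sec=krb5i",       # Kerberos authentication with packet signing
        "seal",            # encrypt all traffic on this mount (SMB3 encryption)
        "multichannel",    # spread I/O across multiple connections
        "max_channels=4",  # cap the number of channels used
    ])
    subprocess.run(["mount", "-t", "cifs", SHARE, MOUNTPOINT, "-o", options],
                   check=True)

    # cifs.ko reports the negotiated dialect, signing and encryption state here.
    with open("/proc/fs/cifs/DebugData") as f:
        print(f.read())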
NFS 4.2 introduces significant advancements tailored for high-performance workloads, GPU computing, and distributed storage environments, elevating the capabilities of standards-based modern data centers, clouds, and computational tasks.
In this session we will cover the latest additions to NFS 4.2 and discuss upcoming advancements expected to be released in the coming year.
The Ceph storage ecosystem currently covers the full range of File, Object and Block access with Ceph-specific protocols. Standard NAS protocols such as SMB and NFS were recently added and expanded to allow client-native protocols to consume this scale-out file storage.
The team at Red Hat and then IBM has been looking into ways to enhance our storage stack with an SMB3 gateway to our Ceph File System. Based on the feedback we have received from the business side as well as from the Open Source community, IBM plans to add SMB as a fully integrated component of Ceph - upstream and downstream.
In our talk we want to highlight some of the implementation aspects that need to be addressed to realize this plan. We will share the progress we have already made toward easier management of SMB using Samba, discuss the challenges of SMB clustering and High Availability, and finally give an outlook on the planned upcoming features. Our goal is to make SMB a natural choice when connecting clients to the Ceph storage system.
The performance requirements of GPU-based computing use cases for AI/DL and other high-performance workflows run up against the limitations of legacy file and object storage systems. Typically, such use cases have had to deploy parallel file systems such as Lustre, which require networking and skill sets not typically available in standard enterprise data centers.
Standards-based parallel file systems such as pNFS v4.2 provide the high performance needed for such workloads, and do so with commodity hardware and standard Ethernet infrastructure. They also provide the multi-protocol file and object access not typically supported by HPC parallel file systems. pNFS v4.2 architectures used in this way are often called Hyperscale NAS, since they merge very high-throughput parallel file system performance with the standard capabilities of enterprise NAS solutions. It is this architecture that is deployed at Meta to feed 24,000 GPUs in its AI Research SuperCluster at 12.5 TB per second on commodity hardware and standard Ethernet to power its Llama 2 & 3 large language models (LLMs).
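As a minimal sketch of how a client joins such an environment (the server, export and mount point are illustrative assumptions; vers=4.2 and the /proc/self/mountstats counters are standard Linux NFS client facilities), pNFS layouts are negotiated automatically when the server offers them, and the per-mount operation counters can confirm that the client is fetching layouts and driving I/O toward the data servers:

    """Minimal sketch: mount an NFS 4.2 export and check pNFS layout activity.
    Server, export path, and mount point are illustrative assumptions."""
    import subprocess

    SERVER_EXPORT = "mds.example.com:/export/ai-data"   # assumed NFS 4.2 export
    MOUNTPOINT = "/mnt/ai-data"                          # assumed mount point

    subprocess.run(["mount", "-t", "nfs", "-o", "vers=4.2",
                    SERVER_EXPORT, MOUNTPOINT], check=True)

    # /proc/self/mountstats keeps per-operation counters for NFS mounts; a
    # growing LAYOUTGET count indicates the client obtained pNFS layouts.
    with open("/proc/self/mountstats") as f:
        for line in f:
            if "LAYOUT" in line:
                print(line.strip())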
AI/DL data sets, however, are often distributed across multiple incompatible storage types in one or more locations, including S3 storage at edge locations. Traditionally, pulling S3 data from the edge into such workflows has required deploying file gateways or other methods to bridge protocols.
This session will look at an architecture that enables data on S3 storage to be integrated seamlessly into a multi-platform, multi-protocol, multi-site Hyperscale NAS environment. Drawing on real-world implementations, the session will highlight how this standards-based approach enables organizations to leverage conventional enterprise infrastructure, with data in place on existing storage of any type, to feed GPU-based AI and other high-performance workflows.