Complexities of Object Storage Compatibility Q&A

Gregory Touretsky

Jan 26, 2024

72% of organizations have encountered incompatibility issues between various object storage implementations, according to a poll at our recent SNIA Cloud Storage Technologies Initiative webinar, “Navigating the Complexities of Object Storage Compatibility.” If you missed the live presentation, or would like to see the answers to the other poll questions we asked the audience, you can view it on-demand in the SNIA Educational Library. The audience was highly engaged during the live event and asked several great questions. Here are answers to them all.

Q. Do you see the need for fast object storage for AI workloads?

A. Yes, the demand for fast object storage in AI workloads is growing. Initially, object storage was mainly used for backup or archival purposes. However, its evolution into data lakes and the introduction of features like the S3 SELECT API have made it more suitable for data analytics. The launch of Amazon's S3 Express, a faster yet more expensive tier, is a clear indication of this trend. Other vendors are following suit, suggesting a shift towards object storage as a primary data storage platform for specific workloads.

Q. As object storage becomes more prevalent in the primary storage space, could you talk about data protection, especially functionalities like synchronous replication and multi-site deployments? Or is your view that this is not needed for object storage deployments?

A. Data protection, including functionalities like synchronous replication and multi-site deployments, is essential for object storage, especially as it becomes more prevalent in primary storage. Various object storage implementations address this differently. For instance, Amazon S3 supports asynchronous replication. Azure ZRS (zone-redundant storage) offers something akin to synchronous replication within a specific geographical area. Many on-premises solutions provide multi-site deployment and replication capabilities.
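The S3 SELECT API mentioned above lets a SQL filter run inside the object store, so only matching rows cross the network. As a hedged sketch (the helper name and the CSV/JSON serialization settings are illustrative assumptions, not SNIA guidance), this is roughly what a request to boto3's `select_object_content` call looks like:

```python
# Illustrative sketch of building parameters for an S3 Select request.
# The helper name and serialization choices are assumptions for
# illustration; check your SDK's documentation for the full option set.

def build_s3_select_request(bucket: str, key: str, expression: str) -> dict:
    """Return keyword arguments for boto3's select_object_content call."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Expression": expression,          # SQL pushed down to the store
        "ExpressionType": "SQL",
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"JSON": {}},
    }

# With boto3 installed, the call would look roughly like:
#   s3 = boto3.client("s3")
#   resp = s3.select_object_content(**build_s3_select_request(
#       "my-bucket", "sales.csv", "SELECT s.region FROM s3object s"))
```

Because the filter executes server-side, this is one of the features whose presence (or absence) varies most between S3-compatible implementations.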
It's crucial for vendors to offer distinct features and value additions, giving customers a range of choices to best meet their specific requirements. Ultimately, customers must define their data availability and durability needs and select the solution that aligns with their use case.

Q. Regarding polling question #3 during the webinar, why did the question only ask “above 10PB”? We look for multi-PB, like 100PB. Does this mean object storage is not suitable for multi-PB?

A. Object storage is inherently scalable and can support deployments ranging from petabyte to exabyte scale. However, scalability can vary based on specific implementations. Each object storage solution may have its own limits in terms of capacity. It's important to review the details of each solution to ensure it meets your specific needs for multi-petabyte scale deployments.

Q. Is Wasabi 100% compatible with Amazon S3?

A. While we typically avoid discussing specific vendors in a general forum, it's important to note that most “S3-compatible” object storage implementations have some discrepancies when compared to Amazon S3. These differences can vary in significance. Therefore, we always recommend testing your actual workload against the specific object storage solution to identify any critical issues or incompatibilities.

Q. What are the best ways to see a unified view of different types of storage, including object, file and block? This may be most relevant for enterprise-wide data tracking and multi-cloud deployments.

A. There are various solutions available from different vendors that offer visibility into multiple types of storage, including object, file, and block storage. These solutions are particularly useful for enterprise-wide data management and multi-cloud deployments. However, this topic extends beyond the scope of our current discussion. SNIA might consider addressing this in a separate, dedicated webinar in the future.

Q. Is there any standard object storage implementation against which S3 compatibility would be defined?

A. Amazon S3 serves as a de facto standard for object storage implementation. Independent software vendors (ISVs) can decide the degree of compatibility they want to achieve with Amazon S3, including which features to implement and to what extent. The objective isn't necessarily to achieve identical functionality across all implementations, but rather for each ISV to be cognizant of the specific differences and potential incompatibilities in their own solutions. Being aware of these discrepancies is key, even if complete compatibility isn't achieved.

Q. With the introduction of directory buckets, do you anticipate vendors picking up compatibility there as well, or maintaining a strictly flat namespace?

A. That's an intriguing question. We are putting together an ongoing object storage forum, which will delve into these topics in follow-up calls and serve as a platform for these kinds of discussions. We anticipate addressing not only the concept of directory buckets versus a flat namespace, but also exploring other ideas like performance enhancements and alternate transport layers for S3. This forum is intended to be a collaborative space for discussing future directions in object storage. If you're interested, contact cloudtwg@snia.org.

Q. How would an incompatibility be categorized as something that is important for clients vs. something that simply doesn't meet the AWS spec/behavior?

A. Incompatibilities should be assessed based on the specific needs and priorities of each implementor. While we don't set universal compatibility goals, it's up to every implementor to determine how closely they align with S3 or other protocols. They must decide whether to address any discrepancies in behavior or functionality based on their own objectives and their clients' requirements.
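One practical way to surface the discrepancies discussed above is to issue the same request against Amazon S3 and the candidate store and compare what comes back. The sketch below flags commonly returned S3 response headers that a candidate endpoint omits; the expected-header list is an illustrative assumption, not a compatibility specification:

```python
# Minimal compatibility-probe sketch: given the headers an S3-compatible
# endpoint returned for a GET/HEAD request, report which commonly expected
# Amazon S3 response headers are absent. The expected set below is an
# illustrative assumption, not an exhaustive list.

EXPECTED_GET_HEADERS = {"etag", "last-modified", "content-length", "x-amz-request-id"}

def missing_s3_headers(headers: dict) -> set:
    """Return expected header names (lowercased) absent from a response."""
    present = {name.lower() for name in headers}  # HTTP header names are case-insensitive
    return EXPECTED_GET_HEADERS - present
```

Running such a probe against both the reference implementation and the candidate, with your actual workload's request mix, turns vague "S3-compatible" claims into a concrete discrepancy list.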
Essentially, the significance of an incompatibility is determined by its impact on the implementor's goals and client needs.

Q. Have customers experienced incompatibilities around different SDKs with regard to HA behaviors? Load balancers vs. round-robin DNS vs. other HA techniques, on-premises and in the cloud?

A. Yes, customers do encounter incompatibilities related to different SDKs, particularly concerning high availability (HA) behaviors. Object storage encompasses more than just APIs; it also involves implementation choices like load balancing decisions and HA techniques. Discrepancies often arise due to these differences, especially when object storage solutions are deployed within a customer's data center and need to integrate with the existing networking infrastructure. These incompatibilities can be due to various factors, including whether load balancing is handled through round-robin DNS, dedicated load balancers, or other HA techniques, either on-premises or in the cloud.

Q. Any thoughts on keeping pace with AWS as they evolve the S3 API? I'm specifically thinking about the new Directory Bucket type and the associated API changes to support hierarchy.

A. We at the SNIA Cloud Storage Technical Work Group are in dialogue with Amazon and are encouraging their participation in our planned Plugfest at SDC'24. Their involvement would be invaluable in helping us anticipate upcoming changes and understand new developments, such as the Directory Bucket type and its associated API changes. This new variation of S3 from Amazon, which differs from the original implementation, underscores the importance of compatibility testing. While complete compatibility may not always be achievable, it's crucial for ISVs to be fully aware of how their implementations differ from S3's evolving standards.

Q. When it comes to object store data protection with backup software, do you also see some incompatibilities with recovered data?

A. When data is backed up to an object storage system, there's a fundamental expectation that it can be reliably retrieved later. This reliability is a cornerstone of any storage platform. However, issues can arise when data is initially stored in one specific object storage implementation and later transferred to a different one. If this transfer isn't executed in accordance with the backup software provider's requirements, it could lead to difficulties in accessing the data in the future. Therefore, careful planning and adherence to recommended practices are crucial during any data migration process to prevent such compatibility issues.

The SNIA Cloud Storage Technical Work Group is actively working on this topic. If you want to get involved, reach out at cloudtwg@snia.org and follow us @sniacloud_com.
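On the HA question above, the difference between round-robin DNS and a dedicated load balancer often surfaces in how a client SDK rotates across endpoints. A minimal sketch of client-side rotation, assuming a plain endpoint list (real SDKs layer retries and health checks on top of something like this):

```python
# Sketch of client-side round-robin endpoint selection, one of the HA
# techniques discussed above. This minimal rotation is an illustrative
# assumption; production SDKs add retries, backoff, and health checks.
import itertools

class RoundRobinEndpoints:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def next(self) -> str:
        """Return the next endpoint in rotation."""
        return next(self._cycle)
```

Whether rotation happens in DNS, in a load balancer, or in the client like this changes failure behavior, which is exactly where SDK-level incompatibilities tend to appear on-premises.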


It’s All About Cloud Object Storage Interoperability

Michael Hoard

Dec 11, 2023

Object storage has firmly established itself as a cornerstone of modern data centers and cloud infrastructure. Ensuring API compatibility has become crucial for object storage developers who want to benefit from the wide ecosystem of existing applications. However, achieving compatibility can be challenging due to the complexity and variety of the APIs, access control mechanisms, and performance and scalability requirements. The SNIA Cloud Storage Technologies Initiative, together with the SNIA Cloud Storage Technical Work Group, is working to address the issues of cloud object storage complexity and interoperability. We're kicking off 2024 with two exciting initiatives: 1) a webinar on January 9, 2024, and 2) a Plugfest in September 2024. Here are the details:

Webinar: Navigating the Complexities of Object Storage Compatibility
In this webinar, we'll highlight real-world incompatibilities found in various object storage implementations. We'll discuss specific examples of existing discrepancies, such as missing or incorrect response headers, unsupported API calls, and unexpected behavior. We'll also describe the implications these have on actual client applications. This analysis is based on years of experience with the implementation, deployment, and evaluation of a wide range of object storage systems on the market. Attendees will leave with a deeper understanding of the challenges around compatibility and how to address them in their own applications. Register here to join us on January 9, 2024.

Plugfest: Cloud Object Storage Plugfest
SNIA is planning an open, collaborative Cloud Object Storage Plugfest, co-located at the SNIA Storage Developer Conference (SDC) scheduled for September 2024, to work on improving cross-implementation compatibility for client and/or server implementations of private and public cloud object storage solutions.
This endeavor is designed to be an independent, vendor-neutral effort with broad industry support, focused on a variety of solutions, including on-premises and in the cloud. This Plugfest aims to reduce compatibility issues, thus improving customer experience and increasing the adoption rate of object storage solutions. Click here to let us know if you're interested. We hope you will consider participating in both of these initiatives!    
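When probing for the unsupported API calls and unexpected behavior the webinar covers, it helps to triage error responses before filing them as incompatibilities. The grouping below is an illustrative assumption, not a SNIA taxonomy: S3-style error codes such as `NotImplemented` usually signal a genuine feature gap, while codes like `NoSuchKey` are spec-conforming behavior:

```python
# Illustrative triage of S3-style error codes returned by a candidate
# object store. The grouping is an assumption for illustration only;
# real compatibility assessment depends on each implementor's goals.

POSSIBLE_GAPS = {"NotImplemented", "MethodNotAllowed"}   # feature likely unsupported
SPEC_CONFORMING = {"NoSuchKey", "NoSuchBucket", "AccessDenied"}  # normal S3 behavior

def classify_s3_error(code: str) -> str:
    """Sort an error code into a rough compatibility-triage bucket."""
    if code in POSSIBLE_GAPS:
        return "possible-incompatibility"
    if code in SPEC_CONFORMING:
        return "spec-conforming"
    return "needs-investigation"
```

A Plugfest setting makes exactly this kind of triage cheaper: implementors can confirm on the spot whether a surprising response is a gap, a deliberate divergence, or a client-side bug.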


Multi-cloud Use Has Become the Norm

Alex McDonald

Feb 15, 2022

Multiple clouds within an organization have become the norm. This strategy enables organizations to reduce risk and dependence on a single cloud platform. The SNIA Cloud Storage Technologies Initiative (CSTI) discussed this topic at length at our live webcast last month, “Why Use Multiple Clouds?” We polled our webcast attendees on their use of multiple clouds and learned which cloud platforms comprise their multi-cloud environments.
Our expert presenters, Mark Carlson and Gregory Touretsky, also discussed the benefits of a storage abstraction layer that insulates the application from the underlying cloud provider's interfaces, something the SNIA Cloud Data Management Interface (CDMI™) 2.0 enables. Cost is always an issue with cloud. One of our session attendees asked: do you have an example of a cloud vendor who does not charge for egress? There may be a few vendors that don't charge, but one we know of that is toll-free on egress is Seagate's Lyve Cloud; they only charge for used capacity. We were also challenged on the economics and increased cost due to the perceived complexity of multi-cloud, specifically security. While it's true that there's no standard security model for multi-cloud, there are third-party security solutions that can simplify its management, something we covered in the webinar. If you missed this webinar, you can access it on-demand and get a copy of the presentation slides in the SNIA Educational Library.


Has Hybrid Cloud Reached a Tipping Point?

Michelle Tidwell

Mar 13, 2019


According to research from the Enterprise Strategy Group (ESG), IT organizations today are struggling to strike the right balance between public cloud and their on-premises infrastructure. Has hybrid cloud reached a tipping point? Find out on April 23, 2019 at our live webcast “The Hybrid Cloud Tipping Point” when the SNIA CSTI welcomes ESG senior analyst, Scott Sinclair, who will share research on current cloud trends, covering:

  • Key drivers behind IT complexity
  • IT spending priorities
  • Multi-cloud & hybrid cloud adoption drivers
  • When businesses are moving workloads from the cloud back on-premises
  • Top security and cost challenges
  • Future cloud projections

The research presentation will be followed by a panel discussion with Scott Sinclair and my SNIA cloud colleagues, Alex McDonald, Mike Jochimsen and Eric Lakin. We will be on-hand on the 23rd to answer questions.

Register today. We hope to see you there.


What the “T” Means in SNIA Cloud Storage Technologies

Alex McDonald

Aug 15, 2018

The SNIA Cloud Storage Initiative (CSI) has had a rebrand; we've added a "T" for Technologies into our name, and we're now officially the Cloud Storage Technologies Initiative (CSTI). That doesn't seem like a significant change, but there's a good reason. Our old name reflected the push to get acceptance of cloud storage, and that specific cloud storage debate has been won, and big time. One relatively small cloud service provider is currently storing 400PB of clients' data. Twitter alone consumes 300PB of data on Google's cloud offering. Facebook, Amazon, Alibaba, Tencent: all have huge data storage numbers. Enterprises of every size are storing data in the cloud.

That's why we added the word "technologies." The expanded charter and new name reflect the need to support evolving cloud business models and architectures such as OpenStack, software-defined storage, Kubernetes and object storage. It includes data services, orchestration and management, understanding hyperscale requirements, and the role standards play.

So what do we do? The CSTI is an active group that publishes articles and white papers, speaks at industry conferences and presents highly-rated webcasts that have been viewed by thousands. You can learn more about the CSTI and check out the infographic for highlights on cloud storage trends and CSTI activities. If you're interested in cloud storage technologies, I encourage you to consider joining our group. We have multiple membership options for established vendors, startups, educational institutions, even individuals. Learn more about CSTI membership here.


Simplifying the Movement of Data from Cloud to Cloud

Alex McDonald

Jul 5, 2018

We are increasingly living in a multi-cloud world, with potentially multiple private, public and hybrid cloud implementations supporting a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another. That means simplifying the movement of data from cloud to cloud. Data movement and data liberation – the seamless transfer of data from one cloud to another – has become a major requirement. On August 7, 2018, the SNIA Cloud Storage Technologies Initiative will tackle this issue in a live webcast, “Cloud Mobility and Data Movement.” We will explore some of these data movement and mobility issues and include real-world examples from the University of Michigan. We'll discuss:
  • How do we secure data both at-rest and in-transit?
  • What steps can be followed to import data securely? What cloud processes and interfaces should we use to make data movement easier?
  • How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
  • Should the application of the data influence how (and even if) we move the data?
  • How can data in the cloud be leveraged for multiple use cases?
Register now for this live webcast. Our SNIA experts will be on-hand to answer your questions.
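On the question of importing data securely, one simple safeguard during any cloud-to-cloud move is an end-to-end integrity check: hash the object before upload and again after retrieval from the destination. The sketch below uses SHA-256; note that Amazon S3's ETag matches the body's MD5 only for single-part, unencrypted uploads, so computing your own digest is the safer cross-cloud check (the function names here are illustrative):

```python
# End-to-end integrity check sketch for cloud-to-cloud data moves:
# hash the object bytes before upload and again after retrieval.
# Function names are illustrative assumptions, not a standard API.
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of the object body."""
    return hashlib.sha256(data).hexdigest()

def verify_roundtrip(original: bytes, retrieved: bytes) -> bool:
    """True if the retrieved copy matches the original byte-for-byte."""
    return digest(original) == digest(retrieved)
```

For large datasets, the same idea extends to per-object digest manifests recorded at the source and re-verified at the destination before the source copy is retired.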

