Last month, the SNIA
Networking Storage Forum made sense of the “wild west” of programming
frameworks, covering xPUs, GPUs and computational storage devices at our live
webcast, “You’ve
Been Framed! An Overview of xPU, GPU & Computational Storage Programming
Frameworks.” It was an excellent overview of what’s happening in this space.
There was a lot to digest, so our stellar panel of experts has
taken the time to answer the questions from our live audience in this blog.
Q. Why is it important to have open-source programming frameworks?
A. Open-source frameworks enable
community support and partnerships beyond what proprietary frameworks allow.
In many cases they let ISVs and end users write one integration that works
with multiple vendors.
Q. Will different accelerators require different frameworks or can
one framework eventually cover them all?
A. Different frameworks are tailored to
specific accelerator attributes and applications. Trying to build a single
framework that does the job of all the existing frameworks and covers every
possible use case would be extremely complex and time-consuming. In the end it
would not produce the best results. Having separate frameworks that can
co-exist is a more efficient and effective approach. That said, providing a well-defined
hardware abstraction layer does complement different programming frameworks and
can allow one framework to support
different types of accelerators.
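The hardware-abstraction idea can be sketched as follows. This is a minimal illustration only; all class and method names here (`AcceleratorBackend`, `Framework`, `offload`, and so on) are hypothetical and do not come from any particular framework:

```python
from abc import ABC, abstractmethod

# Hypothetical hardware abstraction layer (HAL): the framework codes
# against one interface, and each accelerator type supplies a backend.
class AcceleratorBackend(ABC):
    @abstractmethod
    def run(self, kernel: str, data: bytes) -> bytes:
        """Execute a named kernel on the device and return the result."""

class GpuBackend(AcceleratorBackend):
    def run(self, kernel: str, data: bytes) -> bytes:
        # A real backend would launch a GPU kernel; here we just tag the data.
        return b"gpu:" + data

class CsdBackend(AcceleratorBackend):
    def run(self, kernel: str, data: bytes) -> bytes:
        # A real backend would offload to a computational storage device.
        return b"csd:" + data

class Framework:
    """One framework supporting many accelerator types via the HAL."""
    def __init__(self, backend: AcceleratorBackend):
        self.backend = backend

    def offload(self, kernel: str, data: bytes) -> bytes:
        return self.backend.run(kernel, data)

fw = Framework(GpuBackend())
print(fw.offload("compress", b"payload"))  # the same call works with CsdBackend
```

The point of the design is that the application-facing call (`offload`) stays the same no matter which accelerator sits underneath.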
Q. Is there a benefit to standardization at the edge?
A. “Edge” is a broad term with
many different definitions, but in this context it refers to a network
of endpoints where data is generated, collected and processed. Standardization
helps with developing a common foundation that can be referenced across
application domains, and this can make it easier to deploy different types of
accelerators at the edge.
Q. Does adding a new programming framework in computational
storage help to alleviate current infrastructure bottlenecks?
A. The SNIA Computational Storage API and TP4091
programming framework enable a standard programming approach in place of proprietary
methods that may be vendor-limited. The computational storage value proposition
significantly reduces resource constraints while the programming framework
supports improved resource access at the application layer. By making it easier
to deploy computational storage, these frameworks may relieve some types of
infrastructure bottlenecks.
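The general shape of such an offload can be sketched as below. This is a loose illustration of the programming model only; the class and method names are hypothetical stand-ins, not the actual SNIA Computational Storage API, so consult the spec for the real interface:

```python
# Hypothetical sketch of a computational-storage offload flow:
# stage data into device memory, queue a compute request near the
# data, and read back the result. All names are illustrative only.

class MockCSD:
    """Stand-in for a computational storage device."""
    def __init__(self):
        self.mem = {}

    def alloc(self, nbytes):
        # Allocate a device-memory region and return a handle to it.
        handle = len(self.mem)
        self.mem[handle] = bytearray(nbytes)
        return handle

    def write(self, handle, data):
        # Stage host data into device memory.
        self.mem[handle][: len(data)] = data

    def compute(self, function, handle):
        # A real CSD would run a function (e.g. filter, compress)
        # near the data; here upper-casing stands in for that work.
        if function == "upper":
            self.mem[handle] = bytearray(bytes(self.mem[handle]).upper())

    def read(self, handle):
        # Return only the (possibly reduced) result to the host.
        return bytes(self.mem[handle])

csd = MockCSD()
buf = csd.alloc(5)
csd.write(buf, b"hello")
csd.compute("upper", buf)
print(csd.read(buf))  # b'HELLO'
```

The bottleneck relief comes from the middle step: the compute runs where the data lives, so only the result crosses the storage interconnect.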
Q. Do these programming frameworks typically operate
at a low level or high level?
A. They operate at both levels. The
goal of programming frameworks is to operate at the application resource
management level with high level command calls that can initiate underlying
hardware resources. They typically engage the underlying hardware resources using
lower-level APIs or drivers.
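That two-layer split can be sketched as follows. This is a hypothetical stack for illustration, not any particular framework's API; the opcode and function names are invented:

```python
# Low level: a driver-style interface with opcodes and raw buffers --
# the kind of call a framework makes on the application's behalf.
OP_CHECKSUM = 0x01

def driver_submit(opcode: int, buf: bytes) -> int:
    """Hypothetical low-level driver entry point."""
    if opcode == OP_CHECKSUM:
        # Stand-in for work an accelerator would perform in hardware.
        return sum(buf) & 0xFFFF
    raise ValueError(f"unknown opcode {opcode:#x}")

# High level: the command call an application actually writes.
def checksum(data: bytes) -> int:
    """High-level framework call; hides opcodes and resource setup."""
    return driver_submit(OP_CHECKSUM, data)

print(checksum(b"abc"))  # 97 + 98 + 99 = 294
```

The application sees only the high-level call; the framework owns the translation to opcodes, buffers, and driver submissions.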
Q. How does one determine which framework is best for
a particular task?
A. Framework selection
should be driven by which accelerator type is best suited to run the workload.
Additionally, when multiple frameworks could apply, the decision on which to
use would depend on the implementation details of the workload components.
Multiple frameworks have been created and evolve because of this fact. There is
not always a single answer to the question. The key idea motivating this
webinar was to increase awareness about the frameworks available so that people
can answer this question for themselves.
Q. Does using an open-source framework generally give
you better or worse performance than using other programming options?
A. There is usually no
significant performance difference between open-source and proprietary
frameworks; however, the former is more readily adapted and scaled by the
interested open-source community. A proprietary framework might offer
better performance or access to a few more features, but usually works only
with accelerators from one vendor.
Q. I would like to hear more on accelerators to replace vSwitches.
How are these different from NPUs?
A. Many of these accelerators include the ability to accelerate a
virtual network switch (vSwitch) using purpose-built silicon as one of several tasks
they can accelerate, and these accelerators are usually deployed inside a
server to accelerate the networking instead of running the vSwitch on the
server’s general-purpose CPU. A Network Processing Unit (NPU) is also an
accelerator chip with purpose-built silicon but it typically accelerates only
networking tasks and is usually deployed inside a switch, router, load balancer
or other networking appliance instead of inside a server.
Q. I would have liked to have seen a slide defining GPU and DPU
for those new to the technology.
A. SNIA has been working hard to help educate on this topic. A
good starting point is our “What is an xPU”
definition. There are additional resources on that page including the first
webcast we did on this topic “SmartNICs
to xPUs: Why is the Use of Accelerators Accelerating.” We encourage you to check them out.
Q. How do computational storage devices (CSDs) deal with “data
visibility” issues when the drives are abstracted behind a RAID stripe
(e.g. RAID 0, 5, 6)? Is it expected that a CSD will never live behind such an
abstraction?
A. The CSD can operate as a standard
drive under RAID, as well as a drive with a complementary CSP (computational
storage processor; see the SNIA CS
Architecture Spec 1.0). If it is deployed under a RAID controller, then the
RAID hardware or software would need to understand the computational
capabilities of the CSD in order to take full advantage of them.
Q. Are any of the major OEM storage vendors (NetApp / Dell EMC /
HPE / IBM, etc.) currently offering Computational Storage capable arrays?
A. A number of OEMs are offering arrays with
compute resources that reside close to the data. The computational storage initiative
that is promoted by SNIA provides a common
reference architecture and programming model that may be referenced by developers
and end customers.