Basic concepts

[Figure: A framework bridge connects the framework to the nGraph graph construction API]

To understand how a data science framework (TensorFlow, PyTorch, PaddlePaddle*, and others) can unlock the acceleration available in the nGraph Compiler, it helps to be familiar with a few basic concepts.

We use the term bridge to describe code that connects a framework's programmatic or user interface to one or more nGraph device backends, without changing how the framework is used. We have a bridge for the TensorFlow framework and a bridge for PaddlePaddle*. Intel previously contributed work to an MXNet bridge; however, support for the MXNet bridge is no longer active.

ONNX on its own is not a framework; it can be used with nGraph’s Python API to import and execute ONNX models.
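As a sketch of what this looks like in practice, the snippet below imports and executes an ONNX model. It is based on the historical nGraph Python API (the `ngraph` and `ngraph-onnx` packages, which are now archived); the model path and input array are placeholders, and the function names follow that package's documented interface rather than a currently maintained release.

```python
import numpy as np
import onnx
import ngraph as ng
from ngraph_onnx.onnx_importer.importer import import_onnx_model

# Load a serialized ONNX model (placeholder path).
onnx_protobuf = onnx.load('model.onnx')

# Convert the ONNX graph into an nGraph Function.
ng_function = import_onnx_model(onnx_protobuf)

# Select a backend and compile the function into a callable computation.
runtime = ng.runtime(backend_name='CPU')
model = runtime.computation(ng_function)

# Execute with input data shaped to match the model's inputs (placeholder).
input_data = np.zeros([1, 3, 224, 224], dtype=np.float32)
result = model(input_data)
```

The same imported function can be compiled for any registered nGraph backend by changing `backend_name`, which is the point of the bridge-and-backend separation described above.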

Because nGraph is framework agnostic, it can optimize at the graph level and do the heavy lifting required by many popular workloads without additional effort from the framework user. Optimizations that were previously available only after careful integration of a kernel or hardware-specific library are exposed via the Core graph construction API.
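To make the graph construction API concrete, here is a minimal sketch that builds and runs a small computation graph directly, with no framework involved. It uses the historical `ngraph` Python package (now archived); shapes and values are illustrative placeholders.

```python
import numpy as np
import ngraph as ng

# Declare graph inputs as parameter nodes with explicit shapes and types.
A = ng.parameter(shape=[2, 2], name='A', dtype=np.float32)
B = ng.parameter(shape=[2, 2], name='B', dtype=np.float32)

# Compose ops into a graph; operators on nodes build new nodes.
model = (A + B) * (A - B)

# Compile the graph for a backend and execute it.
runtime = ng.runtime(backend_name='CPU')
computation = runtime.computation(model, A, B)

a = np.ones([2, 2], dtype=np.float32)
b = np.full([2, 2], 2.0, dtype=np.float32)
result = computation(a, b)
```

Because the graph is built from backend-neutral Core ops, graph-level optimizations apply before any device-specific code is selected.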

The illustration above shows how this works.

While a deep learning framework is ultimately meant for end use by data scientists, or for deployment in cloud container environments, nGraph's Core ops are designed for framework builders themselves. We invite anyone working on new and novel frameworks or neural network designs to explore our highly modularized stack of components.

See the other/index section for other framework-agnostic configurations available to users of the nGraph Compiler stack.

Translation flow to an nGraph function graph