Framework & runtime support
One of nGraph’s key features is framework neutrality. We currently support popular deep learning frameworks such as TensorFlow* with stable bridges that pass computational graphs to nGraph. Additionally, the nGraph Compiler has a functional bridge to PaddlePaddle*. For these frameworks, we have successfully tested functionality with a few deep learning workloads, and we plan to bring stable support for them in upcoming releases.
To further promote framework neutrality, the nGraph team has been actively contributing to the ONNX project. Developers who already have a trained DNN (deep neural network) model can use nGraph to bypass significant framework-based complexity: they can import the model and test or run it on efficient target backends with our user-friendly Python-based API.
nGraph is also integrated as an execution provider for ONNX Runtime, which is the first publicly available inference engine for ONNX.
The table below summarizes our current progress on supported frameworks. If you are an architect of a framework and wish to take advantage of the speed and multi-device support of the nGraph Compiler, please refer to the Framework integration guide section.
| Framework & Runtime | Supported |
|---------------------|-----------|
| TensorFlow*         | ✔️ |
| PaddlePaddle*       | Functional |
| ONNX Runtime 1.0    | ✔️ |
Hardware & backend support
The current release of nGraph primarily provides inference acceleration on CPUs. However, we also have functional support for additional hardware and backends, including training support. As with frameworks, we believe in giving AI developers the freedom to deploy their deep learning workloads to the hardware of their choice without lock-in. We currently have functioning backends for Intel, Nvidia*, and AMD* GPUs that use PlaidML for code generation, emitting OpenCL, OpenGL, LLVM, CUDA, and Metal. Please refer to the Architecture and Features section to learn more about how we plan to take advantage of both solutions using the hybrid transformer and multi-node support.
Additionally, we are excited to provide support for Intel deep learning accelerators, such as the Intel® Nervana™ Neural Network Processor, via the nGraph compiler stack.
| Backend | Supported |
|---------|-----------|
| Intel® Architecture CPU | ✔️ |
| Intel® Architecture GPUs | Functional via PlaidML |
| AMD* GPUs | Functional via PlaidML |
| Nvidia* GPUs | Functional via PlaidML |
| Intel® Nervana™ Neural Network Processor for Training (NNP-T) | Functional |
| Intel® Nervana™ Neural Network Processor for Inference (NNP-I) | Functional |
| Upcoming DL accelerators | In progress |