Overview

What is a backend?

In the nGraph Compiler stack, what we call a backend is responsible for function execution and value allocation. A backend can be used to carry out computations from a framework on a CPU, GPU, or ASIC; it can also be used with an Interpreter mode (primarily intended for testing) to analyze a program or to help a framework developer customize targeted solutions.

[Figure: nGraph hardware backends]

nGraph also provides a way to use the advanced tensor compiler PlaidML as a backend; you can learn more about this backend and how to build it from source in our documentation: Building nGraph-PlaidML from source.

Backend                                           Current nGraph support   Future nGraph support
------------------------------------------------  -----------------------  ---------------------
Intel® Architecture Processors (CPUs)             Yes                      Yes
Intel® Nervana™ Neural Network Processor (NNPs)   Yes                      Yes
AMD GPUs                                          Yes                      Some

How to use?

  1. Create a Backend; think of it as a compiler.

  2. A Backend can then produce an Executable by calling compile.

  3. A single iteration of the executable is executed by calling the call method on the Executable object.
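The three steps above can be sketched in C++ against nGraph's runtime API. This is a minimal sketch, not a definitive implementation: the trivial add function is only an illustration, and details such as the `op::Add` construction vary between nGraph releases.

```cpp
#include <memory>

#include "ngraph/ngraph.hpp"

using namespace ngraph;

int main()
{
    // A trivial nGraph function to run: f(a, b) = a + b.
    Shape shape{2, 2};
    auto a = std::make_shared<op::Parameter>(element::f32, shape);
    auto b = std::make_shared<op::Parameter>(element::f32, shape);
    auto f = std::make_shared<Function>(std::make_shared<op::Add>(a, b),
                                        ParameterVector{a, b});

    // 1. Create a Backend; think of it as a compiler.
    auto backend = runtime::Backend::create("CPU");

    // 2. Compiling the function produces an Executable.
    auto exec = backend->compile(f);

    // 3. One iteration: bind input/output tensors and invoke call().
    auto t_a = backend->create_tensor(element::f32, shape);
    auto t_b = backend->create_tensor(element::f32, shape);
    auto t_r = backend->create_tensor(element::f32, shape);
    exec->call({t_r}, {t_a, t_b});

    return 0;
}
```

Building this requires an nGraph installation; swapping the "CPU" string for another registered backend name retargets the same program.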

[Figure: The execution interface for nGraph]

The nGraph execution API for Executable objects is a simple five-method interface that each backend implements:

  • The create_tensor() method allows the bridge to create tensor objects in host memory or an accelerator’s memory.

  • The write() and read() methods are used to transfer raw data into and out of tensors that reside in off-host memory.

  • The compile() method instructs the backend to prepare an nGraph function for later execution.

  • And, finally, the call() method is used to invoke an nGraph function against a particular set of tensors.
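The data-transfer half of this interface can be sketched as follows. This assumes nGraph's C++ runtime API; the exact write()/read() signatures have varied across nGraph releases (older versions took an additional offset argument).

```cpp
#include <vector>

#include "ngraph/ngraph.hpp"

using namespace ngraph;

// Sketch: move raw data into and out of a backend tensor.
void roundtrip(runtime::Backend& backend)
{
    Shape shape{4};

    // create_tensor() allocates in host or accelerator memory.
    auto tensor = backend.create_tensor(element::f32, shape);

    // write() copies raw bytes into the (possibly off-host) tensor...
    std::vector<float> in{1, 2, 3, 4};
    tensor->write(in.data(), in.size() * sizeof(float));

    // ...and read() copies them back out.
    std::vector<float> out(4);
    tensor->read(out.data(), out.size() * sizeof(float));
}
```

Because write() and read() operate on raw bytes, the caller is responsible for matching the buffer size to the tensor's element type and shape.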

Miscellaneous resources

Additional resources for device or framework-specific configurations:

OpenCL

OpenCL is needed only for the PlaidML backend; it is not needed for the CPU backend.

  1. Install the latest Linux driver for your system. A list of drivers is available at https://software.intel.com/en-us/articles/opencl-drivers. If libOpenCL.so is absent, you may need to install the OpenCL SDK.

  2. Add any user who needs OpenCL access to the "video" group:

    sudo usermod -a -G video <user_id>

    Such a user may, for example, be able to find device details at the /sys/module/[system]/parameters/ location.

nGraph Bridge from TensorFlow

When NGRAPH is specified as the generic backend (either manually or automatically from a framework), it defaults to CPU; it also allows for additional device configuration or selection.

Because nGraph can select backends, specifying the INTELGPU backend as a runtime environment variable also works if an Intel GPU is present in your system:

NGRAPH_TF_BACKEND="INTELGPU"

An axpy.py example is available for testing; outputs will vary depending on the parameters specified.

NGRAPH_TF_BACKEND="INTELGPU" python3 axpy.py

A related debugging variable:

  • NGRAPH_INTELGPU_DUMP_FUNCTION – dumps nGraph functions in dot format.
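The axpy.py script itself ships with the nGraph-TensorFlow bridge. As a point of reference, "axpy" is the classic BLAS operation a·x + y; the following plain-Python function is a hypothetical stand-in for what the example computes, not the bridge script itself:

```python
def axpy(a, x, y):
    """Return a * x + y elementwise (the BLAS 'axpy' operation)."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(axpy(5.0, [1.0, 2.0, 3.0], [1.0, 1.0, 1.0]))
# -> [6.0, 11.0, 16.0]
```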