Overview

Distributed training across multiple nGraph backends

Important

Distributed training is not officially supported in version 0.25; however, in testing, the following configuration options have worked for nGraph devices with mixed or limited success.

In the previous section, Derive a trainable model, we described the steps needed to create a “trainable” nGraph model. Here we demonstrate how to train a data parallel model by distributing the graph to more than one device.

Frameworks can implement distributed training with nGraph versions prior to 0.13:

  • Use -DNGRAPH_DISTRIBUTED_ENABLE=OMPI to enable distributed training with OpenMPI. This flag requires that OpenMPI already be installed on the system; if it is not, install OpenMPI version 2.1.1 or later before building. An example configure command is shown after this list.

  • Use -DNGRAPH_DISTRIBUTED_ENABLE=MLSL to enable distributed training with the Intel® Machine Learning Scaling Library (Intel® MLSL) for Linux* OS:

    Note

    The Intel® MLSL option applies to Intel® Architecture CPUs (CPU) and Interpreter backends only. For all other backends, OpenMPI is presently the only supported option. We recommend the use of Intel MLSL for CPU backends to avoid an extra download step.
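
For example, assuming an out-of-source build directory under the nGraph source tree (the paths below are placeholders), the OpenMPI variant can be configured and built as follows; substituting MLSL for OMPI selects the Intel MLSL build instead:

  cd ngraph/build
  cmake .. -DNGRAPH_DISTRIBUTED_ENABLE=OMPI
  make -j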

To deploy data-parallel training, the AllReduce op should be added after the steps needed to complete Derive a trainable model; the new code appears in the full example referenced below.

See the full code in the examples folder /doc/examples/mnist_mlp/dist_mnist_mlp.cpp.
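
The essence of the change is to wrap each locally computed gradient in an AllReduce op so that gradients are summed across all workers before the weight update is applied. The following is a minimal sketch of that pattern, assuming the op::AllReduce constructor takes the gradient node as its argument; the exact signature can vary between nGraph releases, so consult dist_mnist_mlp.cpp for the authoritative usage:

  #include <memory>
  #include <ngraph/ngraph.hpp>

  using namespace ngraph;

  // Hypothetical helper (not part of the nGraph API): wraps a locally
  // computed gradient in an AllReduce op so that it is summed across all
  // MPI ranks before being consumed by the weight-update step.
  std::shared_ptr<Node> allreduce_gradient(const std::shared_ptr<Node>& local_grad)
  {
      return std::make_shared<op::AllReduce>(local_grad);
  }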

Finally, to run the training using two nGraph devices, invoke mpirun:

mpirun -np 2 dist_mnist_mlp