Running models

With the virtual environment activated, there are two ways to run models through neon. The first is to simply execute the Python script containing the model (with -b mkl to select the MKL backend), as mentioned before:

python examples/mnist_mlp.py  # equivalent to python examples/mnist_mlp.py -b mkl

This will run the multilayer perceptron (MLP) model and print the final misclassification error after 10 training epochs. On the first run, neon will download the MNIST dataset and create a ~/nervana directory where the raw datasets are kept. The data directory can be changed with the -w flag.
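For example, to keep the downloaded data somewhere other than ~/nervana, pass -w with a directory of your choice (the target path below is illustrative, not a required location):

```shell
# -w overrides the default ~/nervana data directory.
# /path/to/data is a placeholder; substitute any writable directory.
python examples/mnist_mlp.py -b mkl -w /path/to/data
```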

The second method is to specify the model in a YAML file. YAML is a widely used, human-readable data serialization language. For examples, see the YAML files in the examples folder. To run the YAML file for the MLP example, enter the following from the neon repository directory:

neon examples/mnist_mlp.yaml

In a YAML file, the mkl backend can be specified by adding backend: mkl.
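For instance, the backend setting would appear as a top-level key in the model's YAML file (a minimal fragment; the other keys a full model file needs are not shown here):

```yaml
# Selects the MKL backend when this file is run with the neon command.
backend: mkl
```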
Both methods accept command-line arguments to configure how the model is run. For a full list, type neon --help at the command line. Some commonly used flags include: