Installation

Let’s get you started using neon to build deep learning models!

Requirements

Neon runs on Python 2.7 or Python 3.4+, and we support Linux and Mac OS X machines. Before installing, please ensure you have recent versions of the following packages (different system names shown):

Ubuntu                 OSX             Description
python-pip             pip             Tool to install python dependencies
python-virtualenv (*)  virtualenv (*)  Allows creation of isolated environments
libhdf5-dev            h5py            Enables loading of hdf5 formats
libyaml-dev            pyaml           Parses YAML format inputs
pkg-config             pkg-config      Retrieves information about installed libraries

(*) Required only for Python 2.7 installs. With Python 3, test for the presence of the venv module with: python3 -m venv -h
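For the Python 3 case, a quick sketch of checking that the built-in venv module is present before you begin (this check is illustrative; package names and fixes vary by distribution):

```shell
# Check whether the Python 3 venv module is available
# (only needed for Python 3 installs; on Ubuntu it ships in python3-venv)
python3 -m venv -h > /dev/null 2>&1 && echo "venv available" || echo "venv missing"
```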

Note

To enable neon’s DataLoader, several optional libraries should be installed. For image processing, install OpenCV. For audio and video data, install ffmpeg. We recommend installing with a package manager (e.g. apt-get or homebrew).

Additionally, there are several other libraries to be aware of:

  • Neon v2.0.0+ comes with Intel Math Kernel Library (MKL) support by default, which enables multi-threaded operations on Intel CPUs. It is the recommended library for best performance on CPUs; MKL support is enabled automatically when you install neon.
  • (optional) To compare the multi-threaded performance of MKL-optimized neon against OpenBLAS, install OpenBLAS, then recompile numpy linked against OpenBLAS (see sample instructions here). While neon will run on the CPU with OpenBLAS, you’ll get better performance using MKL on CPUs or CUDA on GPUs.
  • Enabling neon to use GPUs requires installation of the CUDA SDK and drivers. We support Pascal, Maxwell, and Kepler GPU architectures, but our backend is optimized for Maxwell GPUs.

For GPU users, remember to add the CUDA path. For example, on Ubuntu:

export PATH="/usr/local/cuda/bin:"$PATH
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/cuda/lib:/usr/local/lib:"$LD_LIBRARY_PATH

Or on Mac OS X:

export PATH="/usr/local/cuda/bin:"$PATH
export DYLD_LIBRARY_PATH="/usr/local/cuda/lib:"$DYLD_LIBRARY_PATH

Installation

We recommend installing neon within a virtual environment so the install stays self-contained. To install neon into an already existing virtual environment, see the System-wide Install section. If you use the Anaconda python distribution, please see the Anaconda Install section. Otherwise, to set up neon this way, run the following commands:

git clone https://github.com/NervanaSystems/neon.git
cd neon; make

This will install the files in the neon/.venv/ directory, using the python version found first in your default PATH. Note that neon will automatically download the released MKLML library, which provides the MKL support.

To force a Python 2 or Python 3 install instead, supply the version as a make target:

make python2

Or:

make python3

To activate the virtual environment, type

. .venv/bin/activate

You will see the prompt change to reflect the activated environment. To start neon and run the MNIST multi-layer perceptron example (the “Hello World” of deep learning), enter

examples/mnist_mlp.py

For better performance on Intel CPUs, run the MNIST multi-layer perceptron example with the -b mkl option:

examples/mnist_mlp.py -b mkl

Note

To achieve best performance, we recommend setting KMP_AFFINITY and OMP_NUM_THREADS as follows: export KMP_AFFINITY=compact,1,0,granularity=fine and export OMP_NUM_THREADS=<Number of Physical Cores>. You can set these environment variables in your ~/.bashrc and run source ~/.bashrc to apply them. You may need to activate the virtual environment again after sourcing bashrc. For detailed information about KMP_AFFINITY, see https://software.intel.com/en-us/node/522691. We encourage users to experiment with these thread affinity settings to achieve even better performance.
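As a concrete sketch of the settings above (the core count here is a placeholder; substitute the number of physical cores on your machine):

```shell
# Pin OpenMP threads for the MKL backend.
# compact,1,0 places consecutive threads on neighboring cores,
# skipping hyperthread siblings; granularity=fine pins per logical CPU.
export KMP_AFFINITY=compact,1,0,granularity=fine
# Placeholder: set this to your number of *physical* cores, not logical CPUs
export OMP_NUM_THREADS=16
```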

When you are finished, remember to deactivate the environment

deactivate

Congratulations, you have installed neon! Next, we recommend you learn how to run models in neon and walk through the MNIST multilayer perceptron tutorial.

Virtual Environment

Virtualenv is a Python tool that keeps the dependencies and packages required for different projects in separate environments. By default, our install creates a copy of the python executable files in the neon/.venv directory. To learn more about virtual environments, see the guide at http://docs.python-guide.org/en/latest/dev/virtualenvs/.
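For illustration, here is how you might create and use an isolated environment by hand with Python 3's built-in venv module (the directory name /tmp/demo-venv is arbitrary; neon's make target does the equivalent for you in neon/.venv):

```shell
# Create an isolated environment, confirm the interpreter lives inside it, then leave it
python3 -m venv /tmp/demo-venv
. /tmp/demo-venv/bin/activate
python -c 'import sys; print(sys.prefix)'   # prints a path inside the demo-venv directory
deactivate
```

Packages installed with pip while the environment is active land inside its directory and do not touch the system Python.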

System-wide install

If you would prefer not to use a new virtual environment, neon can be installed system-wide with

git clone https://github.com/NervanaSystems/neon.git
cd neon && make sysinstall

To install neon in a previously existing virtual environment, first activate that environment, then run make sysinstall. Neon will install the dependencies in your virtual environment’s python folder.

Anaconda install

If you have already installed and configured the Anaconda distribution of python, follow these steps:

First, configure and activate a new conda environment for neon:

conda create --name neon pip
source activate neon

Now clone and run a system-wide install. Since the install takes place inside a conda environment, the dependencies will be installed in your environment folder.

git clone https://github.com/NervanaSystems/neon.git
cd neon && make sysinstall

When complete, deactivate the environment:

source deactivate

Docker

If you would prefer having a containerized installation of neon and its dependencies, the open source community has contributed the following Docker images (note that these are not supported/maintained by Intel Nervana):

Support

For any bugs or feature requests, please:

  1. Search the open and closed issues list to see if we’re already working on what you have uncovered.
  2. Check that your issue/request isn’t answered in our Frequently Asked Questions (FAQ) or neon-users Google group.
  3. File a new issue or submit a new pull request if you have some code to contribute. See our contributing guide.
  4. For other questions and discussions please post a message to the neon-users Google group.