neon

Release: 2.0.0+41e746a
Date: Jun 27, 2017

neon is Intel Nervana's reference deep learning framework, committed to the best performance on all hardware. It is designed for ease of use and extensibility.

Features include:

  • Support for commonly used models including convnets, RNNs, LSTMs, and autoencoders. You can find many pre-trained implementations of these in our model zoo.
  • Tight integration with our state-of-the-art GPU kernel library
  • 3 s/macrobatch (3072 images) for AlexNet on a Titan X (a full run on 1 GPU takes ~32 hrs)
  • Basic automatic differentiation support
  • Framework for visualization
  • Swappable hardware backends: write code once and deploy on CPUs, GPUs, or Nervana hardware (see the sketch after this list)
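
As a minimal sketch of the swappable-backend design (assuming the standard neon Python API with gen_backend, Affine, Gaussian, Rectlin, Softmax, and Model; the layer sizes and batch size are illustrative), the model definition stays the same no matter which backend is generated:

    from neon.backends import gen_backend
    from neon.initializers import Gaussian
    from neon.layers import Affine
    from neon.models import Model
    from neon.transforms import Rectlin, Softmax

    # The backend choice ("cpu", "mkl", or "gpu") is the only line that
    # changes per target; everything below is backend-agnostic.
    be = gen_backend(backend='cpu', batch_size=128)

    # A small two-layer MLP built against whichever backend was generated.
    layers = [Affine(nout=100, init=Gaussian(scale=0.01), activation=Rectlin()),
              Affine(nout=10, init=Gaussian(scale=0.01), activation=Softmax())]
    model = Model(layers=layers)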

New features in this release:

  • Added support for an MKL backend (-b mkl) on Linux, which significantly boosts neon CPU performance (see the example after this list)
  • Added WGAN model examples for LSUN and MNIST data
  • Enabled WGAN and DCGAN model examples for Python 3
  • Added a fix (using file locking) to prevent race conditions when running multiple jobs on the same machine with multiple GPUs
  • Added functionality to display information about the hardware, OS, and model used
  • Updated appdirs to 1.4.3 to be compatible with CentOS 7.3 for the appliance
  • See more in the change log.
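
For example, assuming a Linux build of neon with MKL support, the new backend can be selected on the command line (e.g. python examples/mnist_mlp.py -b mkl) or programmatically; the batch size below is illustrative:

    from neon.backends import gen_backend

    # Generate the MKL-accelerated CPU backend (equivalent to passing
    # -b mkl to the example scripts).
    be = gen_backend(backend='mkl', batch_size=128)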

We use neon internally at Intel Nervana to solve our customers’ problems in many domains. Consider joining us. We are hiring across several roles. Apply here!