neon

Release: 1.9.0+04e2f8f
Date: May 03, 2017

neon is Intel Nervana's reference deep learning framework, committed to best performance on all hardware and designed for ease of use and extensibility.

Features include:

  • Support for commonly used models including convnets, RNNs, LSTMs, and autoencoders. You can find many pre-trained implementations of these in our model zoo
  • Tight integration with our state-of-the-art GPU kernel library
  • 3 s per macrobatch (3072 images) for AlexNet on a Titan X (full training run on 1 GPU in ~32 hrs)
  • Basic automatic differentiation support
  • Framework for visualization
  • Swappable hardware backends: write code once and deploy on CPUs, GPUs, or Nervana hardware
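
The backend swap in the last bullet centers on a single factory call (neon exposes this through a backend generator, `gen_backend`). As a rough illustration of the pattern, not neon's actual implementation, model code can target a small tensor-op interface while a registry picks the concrete backend at startup; all class names below are illustrative stand-ins:

```python
class Backend:
    """Minimal tensor-op interface that model code programs against."""
    def dot(self, a, b):
        raise NotImplementedError

class CPUBackend(Backend):
    def dot(self, a, b):
        # Naive pure-Python matrix multiply standing in for optimized kernels.
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

def gen_backend(name="cpu"):
    # A GPU backend would register here the same way; only CPU is sketched.
    registry = {"cpu": CPUBackend}
    return registry[name]()

be = gen_backend("cpu")
print(be.dot([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Because the model only ever calls the interface, switching hardware is a one-line change at backend-creation time.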

New features in this release:

  • Add support for 3D deconvolution
  • Generative Adversarial Networks (GAN) implementation and an MNIST DCGAN example, following Goodfellow et al., 2014 (http://arXiv.org/abs/1406.2661)
  • Implement Wasserstein GAN cost function and make associated API changes for GAN models
  • Add a new benchmarking script with per-layer timings
  • Add weight clipping for GDM, RMSProp, Adagrad, Adadelta and Adam optimizers
  • Make multicost an explicit choice in the mnist_branch.py example
  • Enable NMS kernels to work with normalized boxes and offset
  • Fix missing links in api.rst [#366]
  • Fix docstring for the --datatype option to neon [#367]
  • Fix perl shebang in maxas.py and allow for build with numpy 1.12 [#356]
  • Replace os.path.join for Windows interoperability [#351]
  • Update aeon to 0.2.7 to fix a seg fault on termination
  • See more in the change log.
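
The Wasserstein GAN cost noted above (Arjovsky et al., 2017) replaces the standard GAN log-loss with a difference of critic means. A framework-agnostic sketch of the two losses, in plain Python rather than neon's cost classes:

```python
# Wasserstein GAN costs: the critic maximizes E[D(real)] - E[D(fake)],
# so its minimized loss is the negation; the generator minimizes -E[D(fake)].
# Plain Python floats stand in for tensor ops; not neon's actual API.

def mean(xs):
    return sum(xs) / len(xs)

def critic_loss(d_real, d_fake):
    # d_real / d_fake: critic scores on real and generated samples.
    return mean(d_fake) - mean(d_real)

def generator_loss(d_fake):
    return -mean(d_fake)

print(critic_loss([0.9, 1.1], [0.1, 0.3]))  # -0.8
print(generator_loss([0.1, 0.3]))           # -0.2
```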
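
Weight clipping, added above for the GDM, RMSProp, Adagrad, Adadelta, and Adam optimizers, clamps each parameter into [-c, c] after the update step (the constraint a WGAN critic relies on). A minimal sketch with made-up values; the function name and signature are illustrative, not neon's optimizer API:

```python
def clip_weights(weights, clip_value):
    # Clamp every weight into [-clip_value, clip_value] after an update,
    # e.g. to enforce the WGAN critic's Lipschitz constraint.
    return [max(-clip_value, min(clip_value, w)) for w in weights]

# One plain SGD step followed by clipping (values are made up):
w = [8, -12, 2]
grads = [5, -5, -3]
lr = 1
w = [wi - lr * gi for wi, gi in zip(w, grads)]  # -> [3, -7, 5]
w = clip_weights(w, clip_value=5)
print(w)  # [3, -5, 5]
```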
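
Normalized boxes, as in the NMS item above, keep coordinates in [0, 1] regardless of image size. The standard greedy NMS that such kernels implement can be sketched in pure Python (illustrative only, not neon's CUDA kernel):

```python
# Greedy non-maximum suppression over normalized (x1, y1, x2, y2) boxes:
# keep detections in descending score order, dropping any box that overlaps
# an already-kept box by more than an IoU threshold.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0.1, 0.1, 0.5, 0.5), (0.12, 0.1, 0.5, 0.5), (0.6, 0.6, 0.9, 0.9)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```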

We use neon internally at Nervana to solve our customers’ problems in many domains. Consider joining us. We are hiring across several roles. Apply here!