Date: Sep 21, 2016

neon is Nervana’s Python-based deep learning library. It is designed for ease of use while delivering the highest performance.

Features include:

  • Support for commonly used models including convnets, RNNs, LSTMs, and autoencoders. You can find many pre-trained implementations of these in our model zoo.
  • Tight integration with our state-of-the-art GPU kernel library
  • 3s/macrobatch (3072 images) on AlexNet on a Titan X (a full run on 1 GPU takes ~32 hrs)
  • Basic automatic differentiation support
  • Framework for visualization
  • Swappable hardware backends: write code once and deploy on CPUs, GPUs, or Nervana hardware (a minimal sketch follows this list)
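
A minimal sketch of what the swappable-backend workflow looks like with `gen_backend`; the batch size and tensor shape below are arbitrary placeholders, not values from this release:

```python
from neon.backends import gen_backend

# Choose the backend once; the model-building code that follows is unchanged.
# Swap 'cpu' for 'gpu' to run the same script on a GPU.
be = gen_backend(backend='cpu', batch_size=128)

# Tensors allocated through the backend live on the selected device.
x = be.zeros((10, be.bsz))  # a (10, batch_size) buffer of zeros
```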

New features in this release:

  • Faster R-CNN model
  • Sequence to Sequence container and char_rae recurrent autoencoder model
  • Reshape Layer that reshapes the input [#221]
  • Pip requirements in requirements.txt updated to latest versions [#289]
  • Remove deprecated data loaders and update docs
  • Use the NEON_DATA_CACHE_DIR environment variable as the archive directory for storing DataLoader-ingested data (see the sketch after this list)
  • Eliminate type conversion for FP16 for CUDA compute capability >= 5.2
  • Use GEMV kernels for batch size 1
  • Alter delta buffers for nesting of merge-broadcast layers
  • Support for ncloud real-time logging
  • Add fast_style Makefile target
  • Fix Python 3 builds on Ubuntu 16.04
  • Run setup.py for sysinstall to generate version.py [#282]
  • Fix broken link in mnist docs
  • Fix conv/deconv tests for CPU execution and fix i32 data type
  • Fix for average pooling with batch size 1
  • Change default scale_min to allow random cropping if omitted
  • Fix yaml loading
  • Fix bug with image resize during ingest
  • Update references to the ModelZoo and neon examples to their new locations
  • See the change log for the full list of changes.
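
As a rough illustration of the NEON_DATA_CACHE_DIR item above: setting the variable before loading data is enough to redirect the DataLoader’s ingest cache. The directory used here is purely illustrative:

```python
import os

# Must be set before the DataLoader ingests data; the path is an example.
os.environ["NEON_DATA_CACHE_DIR"] = "/data/neon_cache"
```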

We use neon internally at Nervana to solve our customers’ problems in many domains. Consider joining us. We are hiring across several roles. Apply here!