Date: Oct 27, 2017

neon is Intel Nervana's reference deep learning framework, committed to best performance on all hardware. It is designed for ease of use and extensibility.

Features include:

  • Support for commonly used models including convnets, RNNs, LSTMs, and autoencoders. You can find many pre-trained implementations of these in our model zoo.
  • Tight integration with our state-of-the-art GPU kernel library
  • 3 s per macrobatch (3072 images) for AlexNet on a Titan X (full training run on 1 GPU in ~32 hrs)
  • Basic automatic differentiation support
  • Framework for visualization
  • Swappable hardware backends: write code once and deploy on CPUs, GPUs, or Nervana hardware
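
The swappable-backend point above can be sketched roughly as follows. This is a hypothetical illustration, not neon's actual implementation: the `CPUBackend` class and the simplified `gen_backend` helper here are stand-ins for neon's real backend objects, which also cover GPU and Nervana hardware.

```python
# Hypothetical sketch of the swappable-backend idea: model code is written
# against an abstract backend interface, and the concrete backend is chosen
# once at startup. Names here are illustrative, not neon's real classes.

class CPUBackend:
    name = "cpu"

    def dot(self, a, b):
        # Naive matrix-vector product on plain Python lists.
        return [sum(x * y for x, y in zip(row, b)) for row in a]

def gen_backend(name="cpu"):
    # In neon this would also be able to return a GPU or MKL backend;
    # in this sketch only a CPU backend exists.
    return CPUBackend()

# Model code below never mentions the hardware; swapping backends means
# changing only the gen_backend() call.
be = gen_backend("cpu")
print(be.dot([[1, 2], [3, 4]], [1, 1]))  # -> [3, 7]
```

Because the model code touches hardware only through the backend object, the same script can be redeployed on different hardware by changing a single configuration point.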

New features in this release:

  • Optimized DeepSpeech2 MKL backend performance (~7X improvement over the CPU backend)
  • Fused the convolution and bias layers, which significantly boosts AlexNet and VGG performance on Intel architectures with the MKL backend
  • Made SSD and Faster-RCNN use VGG weight files in the new format
  • Fixed use of reset_cells hyperparameter
  • Fixed MKL backend bug for GAN and Faster-RCNN models
  • See more in the change log.
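
To illustrate why fusing convolution and bias-add helps, the sketch below compares an unfused version (which writes the convolution output to memory and then traverses it again to add the bias) with a fused version that adds the bias in the same pass. This is a simplified 1-D NumPy illustration of the idea only; neon's actual fusion happens inside the optimized MKL kernels, and both function names here are hypothetical.

```python
import numpy as np

def conv1d_then_bias(x, w, b):
    # Unfused: two passes over the output (conv, then a separate bias add).
    y = np.convolve(x, w, mode="valid")
    return y + b

def conv1d_bias_fused(x, w, b):
    # Fused: the bias is added while each output element is being produced,
    # avoiding a second traversal of the output buffer.
    m = len(w)
    n = len(x) - m + 1
    out = np.empty(n)
    for i in range(n):
        # np.convolve flips the kernel, so the dot product uses w reversed.
        out[i] = np.dot(x[i:i + m], w[::-1]) + b
    return out
```

Both functions compute the same result; the performance difference comes purely from memory traffic, which is why the fusion shows up as a speedup on memory-bound layers.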

We use neon internally at Intel Nervana to solve our customers’ problems in many domains. Consider joining us. We are hiring across several roles. Apply here!