Welcome to neon! The typical workflow for a deep learning model is as follows:
Generate a backend
The backend defines where computations are executed in neon. Both CPU and GPU (Pascal, Maxwell, or Kepler architectures) backends are supported; see the neon backend documentation for details.
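Conceptually, generating a backend just selects where array computations run (neon's real entry point is `gen_backend` in `neon.backends`). A minimal, self-contained sketch of that idea, standing in for the real API:

```python
def gen_backend(kind="cpu"):
    """Return an array module to compute with (hypothetical sketch,
    not neon's actual gen_backend)."""
    if kind == "cpu":
        import numpy as np  # CPU computations via NumPy
        return np
    # GPU backends would require a CUDA-capable device
    raise NotImplementedError("only the 'cpu' sketch is implemented here")

be = gen_backend("cpu")
```

All subsequent computations would then go through the returned backend object.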
Load data

Neon supports loading both common and custom datasets. Data should be provided as a Python iterator that yields one minibatch of data at a time during training.
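In practice, the loader is simply an iterable of (inputs, targets) pairs. A minimal NumPy sketch (function name is illustrative, not neon's actual iterator classes):

```python
import numpy as np

def minibatch_iter(X, y, batch_size):
    """Yield (inputs, targets) one minibatch at a time."""
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], y[start:start + batch_size]

# usage: 10 examples with 2 features each, batches of 4
X = np.arange(20).reshape(10, 2)
y = np.arange(10)
batches = list(minibatch_iter(X, y, 4))  # 3 batches: sizes 4, 4, 2
```

The final batch is smaller when the dataset size is not a multiple of the batch size; neon's own iterators handle such details for you.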
Specify model architecture (layers, activation functions, weight initializers)
Create your model by providing a list of layers. For layers with weights, provide a function to initialize the weights prior to training.
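The "list of layers plus weight initializer" idea can be sketched in plain NumPy as follows; the function and variable names here are hypothetical, not neon's `Gaussian`/`Affine` classes:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_init(shape, scale=0.01):
    """Draw initial weights from N(0, scale) -- one common initializer."""
    return rng.normal(0.0, scale, size=shape)

# A tiny two-layer architecture: (weight matrix, activation) pairs.
relu = lambda x: np.maximum(x, 0.0)
identity = lambda x: x
layers = [
    (gaussian_init((784, 100)), relu),      # hidden layer
    (gaussian_init((100, 10)), identity),   # output layer
]
```

Each layer with weights gets its tensor filled by the initializer before training begins; layers without weights (e.g. activations, dropout) need no initializer.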
Train the model

To train a model, provide the training data (as an iterator), a cost function, and an optimization algorithm for updating the model's weights. To adjust the learning rate over the course of training, provide a learning schedule.
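The two moving parts named above, an optimizer and a learning schedule, can be sketched as pure functions (these names and the step-drop schedule are illustrative assumptions, not neon's `GradientDescentMomentum` or `Schedule` API):

```python
def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9):
    """One Gradient Descent with Momentum update."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def step_schedule(base_lr, epoch, drop_epochs=(5, 8), gamma=0.1):
    """Multiply the learning rate by gamma at each epoch in drop_epochs."""
    drops = sum(epoch >= e for e in drop_epochs)
    return base_lr * gamma ** drops
```

During training, each minibatch's gradient is fed through the optimizer step, while the schedule supplies the learning rate for the current epoch.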
Evaluate the model

Evaluate a trained model against a validation dataset using a provided Metric.
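As a concrete example of a metric, misclassification (listed under Metrics below) is just the fraction of wrong argmax predictions; a hedged NumPy sketch, not neon's `Misclassification` class:

```python
import numpy as np

def misclassification(probs, labels):
    """Fraction of examples whose argmax prediction differs from the label."""
    return float(np.mean(probs.argmax(axis=1) != labels))
```

Evaluation simply runs the trained model over the validation iterator and accumulates this value across minibatches.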
Neon currently supports the following:
- Backends: CPU and GPU
- Datasets
  - Images: MNIST, CIFAR-10, ImageNet 1K, PASCAL VOC, Mini-Places2
  - Text: IMDB, Penn Treebank, Shakespeare Text, bAbI, Hutter-prize
  - Video: UCF101
  - Others: flickr8k, flickr30k, COCO
  - Custom datasets
- Initializers
- Optimizers: Gradient Descent with Momentum
- Activations
- Layers: Long Short-Term Memory, Gated Recurrent Unit, Local Response Normalization
- Costs: Binary Cross Entropy, Multiclass Cross Entropy, Sum of Squares Error
- Metrics: Misclassification
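The three cost functions listed above have standard definitions; a NumPy sketch of each (function names are illustrative, and scaling conventions can differ slightly from neon's cost classes):

```python
import numpy as np

def binary_cross_entropy(y, t, eps=1e-12):
    """Binary cross entropy, averaged over samples; y are probabilities."""
    y = np.clip(y, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(t * np.log(y) + (1 - t) * np.log(1 - y)))

def multiclass_cross_entropy(y, t, eps=1e-12):
    """Multiclass cross entropy; t is one-hot, each row of y sums to 1."""
    return float(-np.mean(np.sum(t * np.log(np.clip(y, eps, 1.0)), axis=1)))

def sum_of_squares_error(y, t):
    """Sum of squared differences between outputs and targets."""
    return float(np.sum((y - t) ** 2))
```

During training, the chosen cost is evaluated on each minibatch and its gradient drives the optimizer's weight updates.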