Costs

A cost is the loss function minimized during training. Each cost inherits from neon.transforms.Cost. Neon currently supports the following cost functions:

  • neon.transforms.CrossEntropyBinary: \(-t\log(y)-(1-t)\log(1-y)\)
  • neon.transforms.CrossEntropyMulti: \(-\sum_i t_i\log(y_i)\)
  • neon.transforms.SumSquared: \(\sum_i (y_i-t_i)^2\)
  • neon.transforms.MeanSquared: \(\frac{1}{N}\sum_i (y_i-t_i)^2\)
  • neon.transforms.SmoothL1Loss: smooth \(L_1\) loss (see Girshick 2015)
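
In practice, a cost function is wrapped in a neon.layers.GeneralizedCost layer and passed to the model's fit() method. A minimal sketch, assuming a model, train_set, optimizer, and callbacks have already been constructed (as in neon's examples):

```python
from neon.layers import GeneralizedCost
from neon.transforms import CrossEntropyBinary

# wrap the cost function in a layer that applies it to the network output
cost = GeneralizedCost(costfunc=CrossEntropyBinary())

# model, train_set, optimizer, and callbacks are assumed to exist already
model.fit(train_set, optimizer=optimizer, num_epochs=10,
          cost=cost, callbacks=callbacks)
```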

To create new cost functions, subclass from neon.transforms.Cost and implement two methods: __call__() and bprop(). Both methods take as input:

  • y (Tensor or OpTree): Output of model
  • t (Tensor or OpTree): True targets corresponding to y

and return an OpTree: __call__() returns the cost itself, while bprop() returns its derivative with respect to y.
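
For illustration, here is a minimal sketch of a hypothetical absolute-error cost. It assumes the backend (available on the class as self.be) exposes absolute and sgn element-wise ops; the class and its name are illustrative, not part of neon:

```python
from neon.transforms import Cost

class AbsoluteError(Cost):
    """Hypothetical cost: sum of absolute differences, sum_i |y_i - t_i|."""

    def __call__(self, y, t):
        # build an OpTree for the cost; the backend evaluates it lazily
        return self.be.sum(self.be.absolute(y - t), axis=0)

    def bprop(self, y, t):
        # derivative of |y - t| with respect to y is sign(y - t)
        return self.be.sgn(y - t)
```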

Metrics

Metrics evaluate the performance of a trained model. Like costs, each metric takes as input the output of the model y and the true targets t; metrics may also be initialized with additional parameters. Each metric returns its result as a numpy array. Neon supports the following metrics:

  • neon.transforms.LogLoss: \(-\log\left(\sum y \cdot t\right)\)
  • neon.transforms.Misclassification: incorrect rate
  • neon.transforms.TopKMisclassification: incorrect rate over the top \(K\) guesses
  • neon.transforms.Accuracy: correct rate
  • neon.transforms.PrecisionRecall: class-averaged precision (item 0) and recall (item 1)
  • neon.transforms.ObjectDetection: correct rate (item 0) and \(L_1\) loss on the bounding box (item 1)
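
A metric is typically passed to the model's eval() method, which applies it over a dataset. For example (a trained model and a valid_set iterator are assumed):

```python
from neon.transforms import Misclassification

# fraction of misclassified examples, averaged over the validation set
error_pct = 100. * model.eval(valid_set, metric=Misclassification())
print('Misclassification error = %.1f%%' % error_pct)
```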

To create your own metric, subclass from Metric and implement the __call__() method, which takes as input Tensors y and t and returns the resulting metric values as a numpy array. If you need to allocate buffer space for the backend to store calculations, or to accept additional parameters, remember to do so in the class constructor.
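
As a sketch, the hypothetical metric below computes classification accuracy by comparing argmax indices. It is modeled on neon's built-in metrics; the buffer allocation via self.be.iobuf() in the constructor follows the note above, and the class name and details are illustrative, not part of the library:

```python
import numpy as np

from neon.transforms import Metric

class SimpleAccuracy(Metric):
    """Hypothetical metric: fraction of examples where argmax(y) == argmax(t)."""

    def __init__(self):
        # allocate backend buffers once, in the constructor
        self.preds = self.be.iobuf(1)
        self.hyps = self.be.iobuf(1)
        self.outputs = self.preds
        self.metric_names = ['Accuracy']

    def __call__(self, y, t):
        # compare predicted and true class indices for each example (column)
        self.preds[:] = self.be.argmax(y, axis=0)
        self.hyps[:] = self.be.argmax(t, axis=0)
        self.outputs[:] = self.be.equal(self.preds, self.hyps)
        return np.array(self.outputs.get().mean())
```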