neon.optimizers.optimizer.RMSProp

class neon.optimizers.optimizer.RMSProp(stochastic_round=False, decay_rate=0.95, learning_rate=0.002, epsilon=1e-06, gradient_clip_norm=None, gradient_clip_value=None, param_clip_value=None, name=None, schedule=<neon.optimizers.optimizer.Schedule object>)[source]

Bases: neon.optimizers.optimizer.Optimizer

Root Mean Square propagation.

Root Mean Square (RMS) propagation protects against vanishing and exploding gradients. In RMSprop, each update is divided by the root of a running average of recent squared gradients. Given parameters \(\theta\) and gradient \(\nabla J\), we keep a running average \(\mu\) of the squared gradients; with decay rate \(\lambda\), this average effectively spans roughly the last \(1/(1-\lambda)\) gradients. The update equations are then given by

\[\mu' = \lambda\mu + (1-\lambda)(\nabla J)^2\]
\[\theta' = \theta - \frac{\alpha}{\sqrt{\mu + \epsilon} + \epsilon}\nabla J\]

where \(\epsilon\) is a small smoothing constant that prevents division by zero.
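
To make the update concrete, here is a minimal NumPy sketch of the two equations above; the function rmsprop_step and its argument names are illustrative, not part of the neon API:

    import numpy as np

    def rmsprop_step(theta, grad, mu, lr=0.002, decay=0.95, eps=1e-6):
        # mu' = lambda * mu + (1 - lambda) * (grad J)^2
        mu = decay * mu + (1.0 - decay) * grad ** 2
        # theta' = theta - alpha / (sqrt(mu' + eps) + eps) * grad J
        theta = theta - lr * grad / (np.sqrt(mu + eps) + eps)
        return theta, mu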

__init__(stochastic_round=False, decay_rate=0.95, learning_rate=0.002, epsilon=1e-06, gradient_clip_norm=None, gradient_clip_value=None, param_clip_value=None, name=None, schedule=<neon.optimizers.optimizer.Schedule object>)[source]

Class constructor.

Parameters:
  • stochastic_round (bool) – Set to True to perform stochastic rounding at the default width; if False, rounding is to nearest. Only affects the GPU backend.
  • decay_rate (float) – decay rate of the running average of squared gradients (\(\lambda\) above)
  • learning_rate (float) – the multiplicative coefficient applied to updates (\(\alpha\) above)
  • epsilon (float) – smoothing constant to avoid division by zero
  • gradient_clip_norm (float, optional) – Target gradient norm. Defaults to None.
  • gradient_clip_value (float, optional) – Value to element-wise clip gradients. Defaults to None.
  • param_clip_value (float, optional) – Value to element-wise clip parameters. Defaults to None.
  • schedule (neon.optimizers.optimizer.Schedule, optional) – Learning rate schedule. Defaults to a constant.

Notes

Only constant learning rate is supported currently.
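
A typical construction might look like the following; the import path matches neon's public API, while the model.fit call is shown only as context:

    from neon.optimizers import RMSProp

    # Defaults from the signature above; gradient/parameter clipping disabled.
    opt = RMSProp(decay_rate=0.95, learning_rate=0.002, epsilon=1e-6)

    # The optimizer is then typically passed to model training, e.g.:
    # model.fit(train_set, optimizer=opt, num_epochs=10, cost=cost, callbacks=callbacks)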

Methods

__init__([stochastic_round, decay_rate, …]) Class constructor.
clip_gradient_norm(param_list, clip_norm) Returns a scaling factor to apply to the gradients.
clip_value(v[, abs_bound]) Element-wise clip a gradient or parameter tensor to between -abs_bound and +abs_bound.
gen_class(pdict)
get_description([skip]) Returns a dict that contains all necessary information needed to serialize this object.
optimize(layer_list, epoch) Apply the learning rule to all the layers and update the states.
recursive_gen(pdict, key) Helper method to check whether the definition dictionary defines a NervanaObject child.
be = None
classnm

Returns the class name.

clip_gradient_norm(param_list, clip_norm)

Returns a scaling factor to apply to the gradients.

The scaling factor is computed such that the root mean squared average of the scaled gradients across all layers will be less than or equal to the provided clip_norm value. This factor is never greater than 1, so it never scales up the gradients.

Parameters:
  • param_list (list) – List of layer parameters
  • clip_norm (float, optional) – Target norm for the gradients. If not provided, the returned scale_factor will equal 1.
Returns:

Computed scale factor.

Return type:

scale_factor (float)
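
The computation can be sketched in NumPy as follows; this mirrors the behavior described above but is not neon's backend implementation (grad_norm_scale is a hypothetical name):

    import numpy as np

    def grad_norm_scale(grads, clip_norm=None):
        # No target norm given: leave gradients unscaled.
        if clip_norm is None:
            return 1.0
        # Global L2 norm across all layer gradients.
        total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
        # Never scale up: the factor is capped at 1.
        return min(1.0, clip_norm / (total_norm + 1e-12))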

clip_value(v, abs_bound=None)

Element-wise clip a gradient or parameter tensor to between -abs_bound and +abs_bound.

Parameters:
  • v (tensor) – Tensor of gradients or parameters for a single layer
  • abs_bound (float, optional) – Value to element-wise clip gradients or parameters. Defaults to None.
Returns:

Tensor of clipped gradients or parameters.

Return type:

v (tensor)
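
Behaviorally this is equivalent to the following NumPy sketch (clip_value_sketch is an illustrative name; neon performs the clip on backend tensors):

    import numpy as np

    def clip_value_sketch(v, abs_bound=None):
        # Pass-through when no bound is given.
        if abs_bound is None:
            return v
        # Element-wise clip to the symmetric interval [-abs_bound, +abs_bound].
        return np.clip(v, -abs(abs_bound), abs(abs_bound))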

gen_class(pdict)
get_description(skip=[], **kwargs)

Returns a dict that contains all necessary information needed to serialize this object.

Parameters:
  • skip (list) – Objects to omit from the dictionary.
Returns:

Dictionary format for object information.

Return type:

(dict)
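
A minimal usage sketch, assuming the import path shown earlier:

    from neon.optimizers import RMSProp

    opt = RMSProp(learning_rate=0.002)
    desc = opt.get_description()  # dict describing the optimizer, e.g. for serialization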
modulenm

Returns the full module path.

optimize(layer_list, epoch)[source]

Apply the learning rule to all the layers and update the states.

Parameters:
  • layer_list (list) – a list of Layer objects to optimize.
  • epoch (int) – the current epoch, needed for the Schedule object.
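
An illustrative outline of one such pass, assuming each layer exposes param, grad, and state NumPy arrays; this sketches the control flow only and is not neon's Layer API or backend code:

    import numpy as np

    def optimize_pass(layers, lr=0.002, decay=0.95, eps=1e-6,
                      clip_norm=None, clip_val=None):
        # Global gradient-norm clipping factor (see clip_gradient_norm above).
        total = np.sqrt(sum(np.sum(l.grad ** 2) for l in layers))
        scale = 1.0 if clip_norm is None else min(1.0, clip_norm / (total + 1e-12))
        for l in layers:
            # Optional element-wise gradient clipping (see clip_value above).
            g = l.grad if clip_val is None else np.clip(l.grad, -clip_val, clip_val)
            # Update the running average of squared gradients, then step.
            l.state[:] = decay * l.state + (1.0 - decay) * g ** 2
            l.param -= scale * lr * g / (np.sqrt(l.state + eps) + eps)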
recursive_gen(pdict, key)

Helper method that checks whether the definition dictionary defines a NervanaObject child; if so, it instantiates that object and replaces the dictionary element with the new instance.