neon.optimizers.optimizer.Adam

class neon.optimizers.optimizer.Adam(stochastic_round=False, learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, gradient_clip_norm=None, gradient_clip_value=None, param_clip_value=None, name='adam')[source]

Bases: neon.optimizers.optimizer.Optimizer

Adam optimizer.

The Adam optimizer combines features from RMSprop and Adagrad. We accumulate both the first and second moments of the gradient with decay rates \(\beta_1\) and \(\beta_2\), corresponding to effective window sizes of \(1/(1-\beta_1)\) and \(1/(1-\beta_2)\), respectively.

\[m' = \beta_1 m + (1-\beta_1) \nabla J\]
\[v' = \beta_2 v + (1-\beta_2) (\nabla J)^2\]

We update the parameters by the ratio of the two moments:

\[\theta = \theta - \alpha \frac{\hat{m}'}{\sqrt{\hat{v}'}+\epsilon}\]

where we compute the bias-corrected moments \(\hat{m}'\) and \(\hat{v}'\) via

\[\hat{m}' = m'/(1-\beta_1^t)\]
\[\hat{v}' = v'/(1-\beta_2^t)\]
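As an illustrative sketch of a single Adam step on one parameter array (plain NumPy, not the backend implementation; the helper name adam_step and its arguments are assumptions for this example):

import numpy as np

def adam_step(theta, grad, m, v, t, learning_rate=0.001,
              beta_1=0.9, beta_2=0.999, epsilon=1e-8):
    # accumulate the first and second moments of the gradient
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad ** 2
    # bias-correct the moments for step t (t starts at 1)
    m_hat = m / (1 - beta_1 ** t)
    v_hat = v / (1 - beta_2 ** t)
    # update the parameters by the ratio of the two moments
    theta = theta - learning_rate * m_hat / (np.sqrt(v_hat) + epsilon)
    return theta, m, v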

Example usage:

from neon.optimizers import Adam

# use Adam
optimizer = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
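
The optimizer is then typically passed to a model's fit call. A minimal sketch, assuming a Model mlp, a training set train_set, a cost function cost, and callbacks already exist (hypothetical names for this example):

# hedged sketch: mlp, train_set, cost, and callbacks are assumed to exist
mlp.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost, callbacks=callbacks)
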
__init__(stochastic_round=False, learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, gradient_clip_norm=None, gradient_clip_value=None, param_clip_value=None, name='adam')[source]

Class constructor.

Parameters:
  • stochastic_round (bool) – Set this to True for stochastic rounding. If False, rounding will be to nearest. If True, stochastic rounding is performed using the default width. Only affects the GPU backend.
  • learning_rate (float) – the multiplicative coefficient of updates
  • beta_1 (float) – Adam parameter beta1
  • beta_2 (float) – Adam parameter beta2
  • epsilon (float) – numerical stability parameter
  • gradient_clip_norm (float, optional) – Target gradient norm. Defaults to None.
  • gradient_clip_value (float, optional) – Value to element-wise clip gradients. Defaults to None.
  • param_clip_value (float, optional) – Value to element-wise clip parameters. Defaults to None.

Methods

  • __init__([stochastic_round, learning_rate, …]) – Class constructor.
  • clip_gradient_norm(param_list, clip_norm) – Returns a scaling factor to apply to the gradients.
  • clip_value(v[, abs_bound]) – Element-wise clip a gradient or parameter tensor to between -abs_bound and +abs_bound.
  • gen_class(pdict)
  • get_description([skip]) – Returns a dict that contains all necessary information needed to serialize this object.
  • optimize(layer_list, epoch) – Apply the learning rule to all the layers and update the states.
  • recursive_gen(pdict, key) – Helper method to check whether the definition dictionary defines a NervanaObject child.
be = None
classnm

Returns the class name.

clip_gradient_norm(param_list, clip_norm)

Returns a scaling factor to apply to the gradients.

The scaling factor is computed such that the root mean squared average of the scaled gradients across all layers will be less than or equal to the provided clip_norm value. This factor is never greater than 1, so it never scales up the gradients.

Parameters:
  • param_list (list) – List of layer parameters
  • clip_norm (float, optional) – Target norm for the gradients. If not provided, the returned scale_factor will equal 1.
Returns:

Computed scale factor.

Return type:

scale_factor (float)
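
A rough NumPy sketch of how such a scale factor could be computed (illustrative only; the actual computation runs on the configured backend, and the helper name gradient_scale_factor is an assumption):

import numpy as np

def gradient_scale_factor(grads, clip_norm=None):
    # without a clip_norm, gradients are left unscaled
    if clip_norm is None:
        return 1.0
    # aggregate norm over all layer gradients
    grad_norm = np.sqrt(sum(np.sum(np.square(g)) for g in grads))
    # scale down when the norm exceeds clip_norm; never scale up
    return float(clip_norm / max(grad_norm, clip_norm))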

clip_value(v, abs_bound=None)

Element-wise clip a gradient or parameter tensor to between -abs_bound and +abs_bound.

Parameters:
  • v (tensor) – Tensor of gradients or parameters for a single layer
  • abs_bound (float, optional) – Value to element-wise clip gradients or parameters. Defaults to None.
Returns:

Tensor of clipped gradients or parameters.

Return type:

v (tensor)
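
A minimal NumPy sketch of the element-wise behavior (illustrative; the real method operates on backend tensors):

import numpy as np

def clip_value(v, abs_bound=None):
    # with no bound given, the tensor is returned unchanged
    if abs_bound is None:
        return v
    # element-wise clip to the range [-abs_bound, +abs_bound]
    return np.clip(v, -abs_bound, abs_bound)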

gen_class(pdict)
get_description(skip=[], **kwargs)

Returns a dict that contains all necessary information needed to serialize this object.

Parameters:
  • skip (list) – Objects to omit from the dictionary.
Returns:

Dictionary format for object information.

Return type:

(dict)
modulenm

Returns the full module path.

optimize(layer_list, epoch)[source]

Apply the learning rule to all the layers and update the states.

Parameters:
  • layer_list (list) – the layers to be updated; their parameters, gradients, and optimizer states are processed as tuples of the form ((param, grad), state)
  • epoch (int) – the current epoch, needed for the Schedule object.
recursive_gen(pdict, key)

Helper method to check whether the definition dictionary defines a NervanaObject child; if so, it instantiates that object and replaces the dictionary element with an instance of that object.