Initializers

Each layer with weights should be constructed with a provided initializer class. These classes define how the weights are initialized before training begins. Each class implements a fill(param) method that assigns values to the input tensor param. Neon supports the following initializers (a short usage sketch follows the table):

Function Description
neon.initializers.Constant Initialize all tensors with a constant value val
neon.initializers.Array Initialize all tensors with array values val
neon.initializers.Uniform Uniform distribution from low to high
neon.initializers.Gaussian Gaussian distribution with mean loc and std. dev. scale
neon.initializers.GlorotUniform Uniform distribution from \(-k\) to \(k\), where \(k\) is scaled by the input and output dimensions (\(k = \sqrt{6/(d_{in} + d_{out})}\)); see Glorot, 2010
neon.initializers.Xavier Alternate form of Glorot where only the input dimension is used for scaling (\(k = \sqrt{3/d_{in}}\))
neon.initializers.Kaiming Gaussian distribution with \(\mu = 0\) and \(\sigma = \sqrt{2/d_{in}}\)
neon.initializers.IdentityInit Fills with identity matrix
neon.initializers.Orthonormal Uses the singular value decomposition of a Gaussian random matrix, scaled by a factor scale (see Saxe, 2014)
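
For example, an initializer instance is passed to a layer when the layer is constructed. The snippet below is a minimal sketch (it assumes a backend has already been generated, e.g. with neon.backends.gen_backend, and uses the Affine layer with a Rectlin activation):

from neon.initializers import Gaussian, Constant
from neon.layers import Affine
from neon.transforms import Rectlin

# Draw weights from a Gaussian with mean 0.0 and std. dev. 0.01,
# and initialize the bias terms to zero.
init = Gaussian(loc=0.0, scale=0.01)
layer = Affine(nout=100, init=init, bias=Constant(val=0.0), activation=Rectlin())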

In the above table, \(d_{in}\) and \(d_{out}\) refer to the input and output dimensions of the tensor being filled, respectively. Neon assumes that

d_in = param.shape[0]
d_out = param.shape[1]
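
As an illustration (this is not neon's internal code), the GlorotUniform range \(k\) for a hypothetical 784 x 100 weight tensor follows directly from these shapes:

import math

d_in, d_out = 784, 100                # param.shape[0], param.shape[1]
k = math.sqrt(6.0 / (d_in + d_out))   # ~0.0824; weights are drawn uniformly from [-k, k]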

Custom initialization schemes should subclass neon.initializers.Initializer and implement a constructor and a fill method, as in the minimal sketch below:

from neon.initializers import Initializer

class MyInit(Initializer):
    # Constructor to define any needed parameters
    # (e.g. fill value, moments, name, etc.)
    def __init__(self, myParam=0.0, name="myInitName"):
        super(MyInit, self).__init__(name=name)
        self.myParam = myParam

    # Method to assign values to the input tensor `param`
    def fill(self, param):
        param[:] = self.myParam
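
An instance of such a class (the hypothetical MyInit above) can then be passed to a layer in the same way as the built-in initializers, for example Affine(nout=100, init=MyInit(myParam=0.5)).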