neon.layers.container.Sequential

class neon.layers.container.Sequential(layers, name=None)[source]

Bases: neon.layers.container.LayerContainer

Layer container that encapsulates a simple linear pathway of layers.

Parameters:layers (list) – list of layers to apply in sequence (layer containers may be included).
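
The linear-pathway behaviour can be sketched in plain Python. This is a toy illustration only, not neon's implementation (the real container also manages buffers, nested containers, and backend details):

```python
class ToySequential:
    """Toy stand-in for a linear pathway of layers (illustration only)."""

    def __init__(self, layers, name=None):
        self.layers = layers
        self.name = name

    def fprop(self, inputs):
        # Each layer's output becomes the next layer's input.
        x = inputs
        for layer in self.layers:
            x = layer(x)
        return x

# Usage with simple callables standing in for layers:
net = ToySequential([lambda v: v * 2, lambda v: v + 1])
```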
__init__(layers, name=None)[source]

Methods

__init__(layers[, name])
accumulates(f) Higher order decorator function that enables accumulation functionality for that function.
allocate([shared_outputs, accumulate_updates]) Allocate output buffer to store activations from fprop.
allocate_deltas([global_deltas])
bprop(error[, alpha, beta]) Apply the backward pass transformation to the input data.
configure(in_obj) Must receive a list of shapes for configuration (one for each pathway)
fprop(inputs[, inference, beta]) Apply the forward pass transformation to the input data.
fusion_pass(layers) Groups patterns together in list.
gen_class(pdict)
get_description([get_weights, keep_states]) Get layer parameters.
get_is_mklop() Return whether this op runs on the MKL backend.
get_param_attrs()
get_terminal() Used for recursively getting final nodes from layer containers.
layers_bprop() Generator to iterate over the layers in the same order as bprop.
layers_fprop() Generator to iterate over the layers in the same order as fprop.
load_weights(pdict[, load_states]) Load weights.
nested_str([level]) Utility function for displaying layer info with a given indentation level.
propagate_parallelism(p)
recursive_gen(pdict, key) Helper method to check whether the definition dictionary defines a NervanaObject child and, if so, instantiate it.
revert_tensors()
serialize() Get state parameters for this layer.
set_acc_on(acc_on) Set the acc_on flag according to bool argument for each layer.
set_batch_size(N) Set minibatch size.
set_deltas(global_deltas) Set the layer deltas from the shared global deltas pool.
set_is_mklop()
set_next(layer) Set next_layer to provided layer.
set_not_mklop()
set_params(pdict)
set_seq_len(S) Set sequence length.
set_states(pdict)
accumulates(f)

Higher order decorator function that enables accumulation functionality for the wrapped function. Objects that use this decorator are required to have an acc_param attribute. This attribute is a tuple declaring the names of the existing temp parameter and real parameter buffers. The temp parameter buffer copies the value of the parameter buffer before f is called; after f is called, the temp and normal buffers are summed. This decorator can be used to wrap any function that should accumulate parameters instead of overwriting them.
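
The copy-then-sum behaviour described above can be sketched as follows. This is an illustrative reimplementation using plain Python lists as buffers; the names `ToyLayer`, `dW`, and `dW_temp` are assumptions, and neon's real decorator operates on backend tensors:

```python
import functools

def accumulates(f):
    """Sketch of the accumulation decorator (assumed semantics)."""
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        temp_name, param_name = self.acc_param          # names of the two buffers
        setattr(self, temp_name, list(getattr(self, param_name)))  # copy before f
        result = f(self, *args, **kwargs)               # f may overwrite the buffer
        summed = [t + p for t, p in zip(getattr(self, temp_name),
                                        getattr(self, param_name))]
        setattr(self, param_name, summed)               # temp + new = accumulated
        return result
    return wrapper

class ToyLayer:
    acc_param = ("dW_temp", "dW")   # (temp buffer name, real buffer name)

    def __init__(self):
        self.dW = [1.0, 2.0]

    @accumulates
    def update(self):
        self.dW = [10.0, 20.0]      # would normally overwrite the gradients
```

With accumulation enabled, calling `update()` adds the new values onto the old ones instead of replacing them.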

allocate(shared_outputs=None, accumulate_updates=False)[source]

Allocate output buffer to store activations from fprop.

Parameters:shared_outputs (Tensor, optional) – pre-allocated tensor for activations to be computed into
allocate_deltas(global_deltas=None)[source]
be = None
bprop(error, alpha=1.0, beta=0.0)[source]

Apply the backward pass transformation to the input data.

Parameters:
  • error (Tensor) – deltas back propagated from the adjacent higher layer
  • alpha (float, optional) – scale to apply to input for activation gradient bprop. Defaults to 1.0
  • beta (float, optional) – scale to apply to output activation gradient bprop. Defaults to 0.0
Returns:

deltas to propagate to the adjacent lower layer

Return type:

Tensor
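
The reverse-order chaining that a backward pass implies can be sketched in plain Python. This is a toy illustration under assumed semantics (alpha scaling the incoming error); neon's actual bprop works on backend tensors and also applies beta to the output buffer:

```python
def bprop_chain(layers, error, alpha=1.0):
    """Toy sketch: deltas flow through the layers in reverse order."""
    delta = alpha * error
    for layer in reversed(layers):
        delta = layer(delta)    # each stand-in "layer" maps deltas to deltas
    return delta
```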

classnm

Returns the class name.

configure(in_obj)[source]

Must receive a list of shapes for configuration (one for each pathway); the shapes correspond to the layer_container attribute.

Parameters:in_obj – any object that has an out_shape (Layer) or shape (Tensor, dataset)
fprop(inputs, inference=False, beta=0.0)[source]

Apply the forward pass transformation to the input data.

TODO: Handle final layers that don’t own their own outputs (bias, activation)

Parameters:
  • inputs
  • inference – (Default value = False)
  • beta – (Default value = 0.0)

Returns:

fusion_pass(layers)

Groups patterns together in list. If pattern is [a, b], will transform [a, b, c, d, a, b, e] -> [[a, b], c, d, [a, b], e]. Support for multiple patterns.
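
The grouping transformation described above can be sketched in plain Python. This is an illustrative reimplementation, not neon's code; the real pass matches layer types rather than values:

```python
def fusion_pass(layers, patterns):
    """Toy sketch: group consecutive runs matching any pattern into a sublist."""
    out, i = [], 0
    while i < len(layers):
        for pat in patterns:
            if layers[i:i + len(pat)] == pat:   # run matches this pattern
                out.append(layers[i:i + len(pat)])
                i += len(pat)
                break
        else:                                   # no pattern matched here
            out.append(layers[i])
            i += 1
    return out
```

For the example in the docstring, `fusion_pass(['a','b','c','d','a','b','e'], [['a','b']])` yields `[['a','b'], 'c', 'd', ['a','b'], 'e']`.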

gen_class(pdict)
get_description(get_weights=False, keep_states=False)

Get layer parameters. All parameters are needed for optimization, but only weights are serialized.

Parameters:
  • get_weights (bool, optional) – Control whether all parameters are returned or just weights for serialization.
  • keep_states (bool, optional) – Control whether the layer states are also included for serialization.
get_is_mklop()

A true is_mklop means this op runs on the MKL backend and may require a conversion when its input comes from a non-MKL op.

get_param_attrs()
get_terminal()[source]

Used for recursively getting final nodes from layer containers.

layers_bprop()

Generator to iterate over the layers in the same order as bprop.

layers_fprop()

Generator to iterate over the layers in the same order as fprop.
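
The two traversal orders can be sketched with plain generators. This is a toy illustration where nested lists stand in for nested layer containers; neon's real generators walk container objects:

```python
def layers_fprop(layers):
    """Toy sketch: yield layers in fprop order, recursing into nested containers."""
    for layer in layers:
        if isinstance(layer, list):      # a nested container of layers
            yield from layers_fprop(layer)
        else:
            yield layer

def layers_bprop(layers):
    """The same traversal reversed: bprop order."""
    yield from reversed(list(layers_fprop(layers)))
```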

layers_to_optimize
load_weights(pdict, load_states=True)

Load weights.

Parameters:
  • pdict
  • load_states – (Default value = True)

Returns:

modulenm

Returns the full module path.

nest_deltas
nested_str(level=0)

Utility function for displaying layer info with a given indentation level.

Parameters:level (int, optional) – indentation level
Returns:layer info at the given indentation level
Return type:str
propagate_parallelism(p)
recursive_gen(pdict, key)

Helper method to check whether the definition dictionary defines a NervanaObject child; if so, it instantiates that object and replaces the dictionary element with the new instance.
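
The instantiate-and-replace behaviour can be sketched in plain Python. The `{'type': ..., 'config': ...}` schema and the `registry` lookup here are assumptions for illustration; neon's real method resolves serialized NervanaObject definitions:

```python
class Toy:
    """Hypothetical stand-in for a NervanaObject child."""
    def __init__(self, n=0):
        self.n = n

def recursive_gen(registry, pdict, key):
    """Toy sketch: if pdict[key] looks like a definition dict, instantiate it
    and replace the dictionary element with the new instance."""
    val = pdict[key]
    if isinstance(val, dict) and "type" in val:
        cls = registry[val["type"]]                 # look up the class by name
        pdict[key] = cls(**val.get("config", {}))   # replace dict with instance
    return pdict[key]
```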

revert_tensors()
serialize()

Get state parameters for this layer.

Returns:whatever data this model wants to receive in order to restore state
Return type:varies
set_acc_on(acc_on)

Set the acc_on flag for each layer according to the boolean argument. If a layer in the container does not support accumulate_updates, it is skipped.

Parameters:acc_on (bool) – Value to set the acc_on flag of supported layers to.
set_batch_size(N)

Set minibatch size.

Parameters:N (int) – minibatch size
set_deltas(global_deltas)

Set the layer deltas from the shared global deltas pool.

set_is_mklop()
set_next(layer)

Set next_layer to provided layer.

Parameters:layer (layer) – Next layer
set_not_mklop()
set_params(pdict)
set_seq_len(S)

Set sequence length.

Parameters:S (int) – sequence length
set_states(pdict)