API - Initializers

To keep TensorLayer simple, it wraps only a few basic initializers. For more advanced initializers, e.g. tf.initializers.he_normal, please refer to the initializers provided by TensorFlow.

  • Initializer – Initializer base class: all initializers inherit from this class.
  • Zeros – Initializer that generates tensors initialized to 0.
  • Ones – Initializer that generates tensors initialized to 1.
  • Constant([value]) – Initializer that generates tensors initialized to a constant value.
  • RandomUniform([minval, maxval, seed]) – Initializer that generates tensors with a uniform distribution.
  • RandomNormal([mean, stddev, seed]) – Initializer that generates tensors with a normal distribution.
  • TruncatedNormal([mean, stddev, seed]) – Initializer that generates a truncated normal distribution.
  • deconv2d_bilinear_upsampling_initializer(shape) – Returns an initializer that can be passed to DeConv2dLayer to initialize the weights for channel-wise bilinear up-sampling.

Initializer

class tensorlayer.initializers.Initializer[source]

Initializer base class: all initializers inherit from this class.

Zeros

class tensorlayer.initializers.Zeros[source]

Initializer that generates tensors initialized to 0.

Ones

class tensorlayer.initializers.Ones[source]

Initializer that generates tensors initialized to 1.

Constant

class tensorlayer.initializers.Constant(value=0)[source]

Initializer that generates tensors initialized to a constant value.

Parameters:
  • value (A python scalar or a numpy array.) – The value to assign.
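Conceptually, Constant fills a tensor of the requested shape with a single value. A minimal NumPy sketch of these semantics (an illustration, not TensorLayer's actual implementation):

```python
import numpy as np

def constant_init(shape, value=0):
    # Fill every element with the given value, as Constant does.
    return np.full(shape, value, dtype=np.float32)

w = constant_init((2, 3), value=0.5)
print(w)  # every entry is 0.5
```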

RandomUniform

class tensorlayer.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None)[source]

Initializer that generates tensors with a uniform distribution.

Parameters:
  • minval (A python scalar or a scalar tensor.) – Lower bound of the range of random values to generate.
  • maxval (A python scalar or a scalar tensor.) – Upper bound of the range of random values to generate.
  • seed (A Python integer.) – Used to seed the random generator.
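The behavior can be sketched in NumPy: values are drawn uniformly from [minval, maxval), and a fixed seed makes the draws reproducible. This is an illustrative sketch, not TensorLayer's implementation:

```python
import numpy as np

def random_uniform(shape, minval=-0.05, maxval=0.05, seed=None):
    # Draw each element uniformly from [minval, maxval).
    rng = np.random.RandomState(seed)
    return rng.uniform(minval, maxval, size=shape).astype(np.float32)

w = random_uniform((4, 4), seed=0)
# all values lie within the [minval, maxval) range
```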

RandomNormal

class tensorlayer.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=None)[source]

Initializer that generates tensors with a normal distribution.

Parameters:
  • mean (A python scalar or a scalar tensor.) – Mean of the random values to generate.
  • stddev (A python scalar or a scalar tensor.) – Standard deviation of the random values to generate.
  • seed (A Python integer.) – Used to seed the random generator.
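A NumPy sketch of the semantics (an illustration, not the library's implementation), showing in particular that passing the same seed yields identical weights, which is useful for reproducible experiments:

```python
import numpy as np

def random_normal(shape, mean=0.0, stddev=0.05, seed=None):
    # Draw each element from a normal distribution N(mean, stddev**2).
    rng = np.random.RandomState(seed)
    return rng.normal(mean, stddev, size=shape).astype(np.float32)

a = random_normal((2, 2), seed=7)
b = random_normal((2, 2), seed=7)
# same seed -> identical draws
```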

TruncatedNormal

class tensorlayer.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=None)[source]

Initializer that generates a truncated normal distribution.

These values are similar to values from a RandomNormal except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters.

Parameters:
  • mean (A python scalar or a scalar tensor.) – Mean of the random values to generate.
  • stddev (A python scalar or a scalar tensor.) – Standard deviation of the random values to generate.
  • seed (A Python integer.) – Used to seed the random generator.
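The discard-and-redraw rule described above can be sketched in NumPy (an illustration of the semantics, not TensorLayer's implementation): samples falling more than two standard deviations from the mean are re-drawn until none remain.

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=0.05, seed=None):
    rng = np.random.RandomState(seed)
    out = rng.normal(mean, stddev, size=shape)
    # Re-draw any values more than two standard deviations from the mean.
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():
        out[bad] = rng.normal(mean, stddev, size=int(bad.sum()))
        bad = np.abs(out - mean) > 2 * stddev
    return out.astype(np.float32)

w = truncated_normal((3, 3), seed=42)
# every value lies within mean +/- 2 * stddev
```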

deconv2d_bilinear_upsampling_initializer

tensorlayer.initializers.deconv2d_bilinear_upsampling_initializer(shape)[source]

Returns an initializer that can be passed to DeConv2dLayer to initialize the weights for channel-wise bilinear up-sampling. Used in segmentation approaches such as FCN (https://arxiv.org/abs/1605.06211).

Parameters:
  • shape (tuple of int) – The shape of the filters, [height, width, output_channels, in_channels]. It must match the shape passed to DeConv2dLayer.
Returns: A constant initializer with weights set to correspond to per-channel bilinear up-sampling when passed as W_init to DeConv2dLayer.
Return type: tf.constant_initializer

Examples

Upsampling by a factor of 2, e.g. 100 -> 200

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> rescale_factor = 2
>>> imsize = 128
>>> num_channels = 3
>>> num_in_channels = 3
>>> num_out_channels = 3
>>> filter_shape = (5, 5, num_out_channels, num_in_channels)
>>> ni = tl.layers.Input(shape=(1, imsize, imsize, num_channels))
>>> bilinear_init = tl.initializers.deconv2d_bilinear_upsampling_initializer(shape=filter_shape)
>>> net = tl.layers.DeConv2dLayer(
...     shape=filter_shape,
...     outputs_shape=(1, imsize * rescale_factor, imsize * rescale_factor, num_out_channels),
...     strides=(1, rescale_factor, rescale_factor, 1),
...     W_init=bilinear_init,
...     padding='SAME',
...     act=None,
...     name='g/h1/decon2d')(ni)
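The weights this initializer produces can be sketched in NumPy: a 2-D "tent" (bilinear) kernel with peak 1 at its center, copied onto each matching in/out channel pair so that each channel is up-sampled independently. This is an illustrative reconstruction of the standard FCN bilinear filter, not TensorLayer's exact implementation:

```python
import numpy as np

def bilinear_upsampling_weights(shape):
    # shape = (height, width, output_channels, in_channels),
    # matching the filter shape passed to DeConv2dLayer.
    kh, kw, out_c, in_c = shape
    assert kh == kw, "this sketch assumes square kernels"
    size = kh
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    # 2-D tent (bilinear interpolation) kernel, peak value 1 at the center.
    kernel = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
    weights = np.zeros(shape, dtype=np.float32)
    # Place the same kernel on each diagonal channel pair (channel-wise).
    for i in range(min(out_c, in_c)):
        weights[:, :, i, i] = kernel
    return weights

w = bilinear_upsampling_weights((5, 5, 3, 3))
# w[:, :, i, i] holds the bilinear kernel; cross-channel entries stay zero
```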