API - Layers

To keep TensorLayer simple, we minimize the number of layer classes as much as we can, so we encourage you to use TensorFlow's own functions. For example, we provide a layer for local response normalization, but you can still apply tf.nn.lrn on network.outputs directly. More functions can be found in the TensorFlow API.

Understand Basic layer

All TensorLayer layers have a number of properties in common:

  • layer.outputs : a Tensor, the outputs of current layer.
  • layer.all_params : a list of Tensor, all network variables in order.
  • layer.all_layers : a list of Tensor, all network outputs in order.
  • layer.all_drop : a dictionary of {placeholder : float}, all keeping probabilities of the noise layers.

All TensorLayer layers have a number of methods in common:

  • layer.print_params() : print the network variable information in order (after tl.layers.initialize_global_variables(sess)). Alternatively, print all variables by tl.layers.print_all_variables().
  • layer.print_layers() : print the network layers information in order.
  • layer.count_params() : print the number of parameters in the network.

The initialization of a network starts with the input layer; we can then stack layers as follows (a network is a Layer class). The most important properties of a network are network.all_params, network.all_layers and network.all_drop. all_params is a list which stores pointers to all network parameters in order. The following script defines a 3-layer network, for which:

all_params = [W1, b1, W2, b2, W_out, b_out]

To get specific variables, you can use network.all_params[2:3] or get_variables_with_name(). all_layers is a list which stores pointers to the outputs of all layers in order; in the following network:

all_layers = [drop(?,784), relu(?,800), drop(?,800), relu(?,800), drop(?,800), identity(?,10)]

where ? reflects any batch size. You can print the layer information and parameter information by using network.print_layers() and network.print_params(). To count the number of parameters in a network, run network.count_params().

import numpy as np
import tensorflow as tf
import tensorlayer as tl

sess = tf.InteractiveSession()

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')

network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
network = tl.layers.DenseLayer(network, n_units=800,
                                act = tf.nn.relu, name='relu1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800,
                                act = tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3')
network = tl.layers.DenseLayer(network, n_units=10,
                                act = tl.activation.identity,
                                name='output_layer')

y = network.outputs
y_op = tf.argmax(tf.nn.softmax(y), 1)

cost = tl.cost.cross_entropy(y, y_)

train_params = network.all_params

learning_rate = 0.0001  # example value
train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999,
                            epsilon=1e-08, use_locking=False).minimize(cost, var_list=train_params)

tl.layers.initialize_global_variables(sess)

network.print_params()
network.print_layers()
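
For instance, a short sketch of picking out specific variables from the network defined above (the name scope 'relu2' comes from the DenseLayer name used in the script):

params_relu2 = network.all_params[2:4]    # [W2, b2] of the second DenseLayer
# or select variables by name scope
relu2_vars = tl.layers.get_variables_with_name('relu2', train_only=True, printable=False)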

In addition, network.all_drop is a dictionary which stores the keeping probabilities of all noise layers. In the above network, these are the keeping probabilities of the dropout layers.

So for training, enable all dropout layers as follows.

feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update( network.all_drop )
loss, _ = sess.run([cost, train_op], feed_dict=feed_dict)

For evaluating and testing, disable all dropout layers as follows.

dp_dict = tl.utils.dict_to_one( network.all_drop )  # disable noise layers
feed_dict = {x: X_val, y_: y_val}
feed_dict.update(dp_dict)
print("   val loss: %f" % sess.run(cost, feed_dict=feed_dict))
print("   val acc: %f" % np.mean(y_val ==
                        sess.run(y_op, feed_dict=feed_dict)))

For more details, please read the MNIST examples on Github.

Customized layer

A Simple layer

To implement a custom layer in TensorLayer, you will have to write a Python class that subclasses Layer and implements the output expression.

The following is an example implementation of a layer that multiplies its input by 2:

class DoubleLayer(Layer):
    def __init__(
        self,
        layer = None,
        name ='double_layer',
    ):
        # check layer name (fixed)
        Layer.__init__(self, name=name)

        # the input of this layer is the output of previous layer (fixed)
        self.inputs = layer.outputs

        # operation (customized)
        self.outputs = self.inputs * 2

        # get stuff from previous layer (fixed)
        self.all_layers = list(layer.all_layers)
        self.all_params = list(layer.all_params)
        self.all_drop = dict(layer.all_drop)

        # update layer (customized)
        self.all_layers.extend( [self.outputs] )
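
A minimal usage sketch of this custom layer (x is assumed to be a placeholder such as the MNIST input above):

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
network = tl.layers.InputLayer(x, name='input_layer')
network = DoubleLayer(network, name='double')
print(network.outputs)    # a Tensor whose values are the inputs multiplied by 2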

Your Dense layer

Before creating your own TensorLayer layer, let's have a look at the Dense layer. It creates a weight matrix and a bias vector if they do not exist, then implements the output expression. At the end, since this is a layer with parameters, we also need to append the new parameters to all_params.

class MyDenseLayer(Layer):
  def __init__(
      self,
      layer = None,
      n_units = 100,
      act = tf.nn.relu,
      name ='simple_dense',
  ):
      # check layer name (fixed)
      Layer.__init__(self, name=name)

      # the input of this layer is the output of previous layer (fixed)
      self.inputs = layer.outputs

      # print out info (customized)
      print("  MyDenseLayer %s: %d, %s" % (self.name, n_units, act))

      # operation (customized)
      n_in = int(self.inputs.get_shape()[-1])
      with tf.variable_scope(name) as vs:
          # create new parameters
          W = tf.get_variable(name='W', shape=(n_in, n_units))
          b = tf.get_variable(name='b', shape=(n_units))
          # tensor operation
          self.outputs = act(tf.matmul(self.inputs, W) + b)

      # get stuff from previous layer (fixed)
      self.all_layers = list(layer.all_layers)
      self.all_params = list(layer.all_params)
      self.all_drop = dict(layer.all_drop)

      # update layer (customized)
      self.all_layers.extend( [self.outputs] )
      self.all_params.extend( [W, b] )
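
A minimal usage sketch of this custom dense layer (again assuming the MNIST placeholder x from above):

network = tl.layers.InputLayer(x, name='input_layer')
network = MyDenseLayer(network, n_units=100, act=tf.nn.relu, name='my_dense')
print(network.all_params)    # now also contains the W and b created by my_dense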

Modifying Pre-train Behaviour

Greedy layer-wise pretraining is an important task for deep neural network initialization, and there are many kinds of pre-training methods for different network architectures and applications.

For example, the pre-training of a vanilla sparse autoencoder can be implemented with a KL divergence penalty (for sigmoid activations), as in the following code, while for a Deep Rectifier Network the sparsity can be implemented by an L1 regularization of the activation output.

# Vanilla Sparse Autoencoder
beta = 4
rho = 0.15
p_hat = tf.reduce_mean(activation_out, reduction_indices = 0)
KLD = beta * tf.reduce_sum( rho * tf.log(tf.div(rho, p_hat))
        + (1- rho) * tf.log((1- rho)/ (tf.sub(float(1), p_hat))) )
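
By contrast, a minimal sketch of the L1 activation penalty mentioned above for a Deep Rectifier Network (the weighting factor 0.001 is an arbitrary example value):

# Deep Rectifier Network: sparsity via L1 penalty on the activation output
lambda_l1 = 0.001
L1_a = lambda_l1 * tf.reduce_mean(tf.reduce_sum(tf.abs(activation_out), reduction_indices=1))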

There are many pre-training methods, so TensorLayer provides a simple way to modify or design your own. For autoencoders, TensorLayer uses ReconLayer.__init__() to define the reconstruction layer and cost function; to define your own cost function, simply modify self.cost in ReconLayer.__init__(). To create your own cost expression, please read TensorFlow Math. By default, ReconLayer only updates the weights and biases of the previous one layer by using self.train_params = self.all_params[-4:], where the 4 parameters are [W_encoder, b_encoder, W_decoder, b_decoder]: W_encoder and b_encoder belong to the previous DenseLayer, while W_decoder and b_decoder belong to this ReconLayer. In addition, if you want to update the parameters of the previous 2 layers at the same time, simply modify [-4:] to [-6:].

ReconLayer.__init__(...):
    ...
    self.train_params = self.all_params[-4:]
    ...
    self.cost = mse + L1_a + L2_w

Layer list

get_variables_with_name(name[, train_only, …]) Get variable list by a given name scope.
get_layers_with_name([network, name, printable]) Get layer list in a network by a given name scope.
set_name_reuse([enable]) Enable or disable reuse layer name.
print_all_variables([train_only]) Print all trainable and non-trainable variables without tl.layers.initialize_global_variables(sess)
initialize_global_variables([sess]) Execute sess.run(tf.global_variables_initializer()) for TF 0.12+ or sess.run(tf.initialize_all_variables()) for TF 0.11.
Layer([inputs, name]) The Layer class represents a single layer of a neural network.
InputLayer([inputs, name]) The InputLayer class is the starting layer of a neural network.
OneHotInputLayer([inputs, depth, on_value, …]) The OneHotInputLayer class is the starting layer of a neural network, see tf.one_hot.
Word2vecEmbeddingInputlayer([inputs, …]) The Word2vecEmbeddingInputlayer class is a fully connected layer, for Word Embedding.
EmbeddingInputlayer([inputs, …]) The EmbeddingInputlayer class is a fully connected layer, for Word Embedding.
AverageEmbeddingInputlayer(inputs, …[, …]) The AverageEmbeddingInputlayer averages over embeddings of inputs, can be used as the input layer for models like DAN[1] and FastText[2].
DenseLayer([layer, n_units, act, W_init, …]) The DenseLayer class is a fully connected layer.
ReconLayer([layer, x_recon, name, n_units, act]) The ReconLayer class is a reconstruction DenseLayer which is used to pre-train a DenseLayer.
DropoutLayer([layer, keep, is_fix, …]) The DropoutLayer class is a noise layer which randomly set some values to zero by a given keeping probability.
GaussianNoiseLayer([layer, mean, stddev, …]) The GaussianNoiseLayer class is noise layer that adding noise with normal distribution to the activation.
DropconnectDenseLayer([layer, keep, …]) The DropconnectDenseLayer class is DenseLayer with DropConnect behaviour which randomly remove connection between this layer to previous layer by a given keeping probability.
Conv1dLayer([layer, act, shape, stride, …]) The Conv1dLayer class is a 1D CNN layer, see tf.nn.convolution.
Conv2dLayer([layer, act, shape, strides, …]) The Conv2dLayer class is a 2D CNN layer, see tf.nn.conv2d.
DeConv2dLayer([layer, act, shape, …]) The DeConv2dLayer class is deconvolutional 2D layer, see tf.nn.conv2d_transpose.
Conv3dLayer([layer, act, shape, strides, …]) The Conv3dLayer class is a 3D CNN layer, see tf.nn.conv3d.
DeConv3dLayer([layer, act, shape, …]) The DeConv3dLayer class is deconvolutional 3D layer, see tf.nn.conv3d_transpose.
PoolLayer([layer, ksize, strides, padding, …]) The PoolLayer class is a Pooling layer, you can choose tf.nn.max_pool and tf.nn.avg_pool for 2D or tf.nn.max_pool3d and tf.nn.avg_pool3d for 3D.
PadLayer([layer, paddings, mode, name]) The PadLayer class is a Padding layer for any modes and dimensions.
UpSampling2dLayer([layer, size, is_scale, …]) The UpSampling2dLayer class is upSampling 2d layer, see tf.image.resize_images.
DownSampling2dLayer([layer, size, is_scale, …]) The DownSampling2dLayer class is downSampling 2d layer, see tf.image.resize_images.
DeformableConv2dLayer([layer, act, …]) The DeformableConv2dLayer class is a Deformable Convolutional Networks .
AtrousConv1dLayer(net[, n_filter, …]) Wrapper for AtrousConv1dLayer, if you don’t understand how to use Conv1dLayer, this function may be easier.
AtrousConv2dLayer([layer, n_filter, …]) The AtrousConv2dLayer class is Atrous convolution (a.k.a.
Conv1d(net[, n_filter, filter_size, stride, …]) Wrapper for Conv1dLayer, if you don’t understand how to use Conv1dLayer, this function may be easier.
Conv2d(net[, n_filter, filter_size, …]) Wrapper for Conv2dLayer, if you don’t understand how to use Conv2dLayer, this function may be easier.
DeConv2d(net[, n_out_channel, filter_size, …]) Wrapper for DeConv2dLayer, if you don’t understand how to use DeConv2dLayer, this function may be easier.
MaxPool1d(net, filter_size, strides[, …]) Wrapper for tf.layers.max_pooling1d .
MeanPool1d(net, filter_size, strides[, …]) Wrapper for tf.layers.average_pooling1d .
MaxPool2d(net[, filter_size, strides, …]) Wrapper for PoolLayer.
MeanPool2d(net[, filter_size, strides, …]) Wrapper for PoolLayer.
MaxPool3d(net, filter_size, strides[, …]) Wrapper for tf.layers.max_pooling3d .
MeanPool3d(net, filter_size, strides[, …]) Wrapper for tf.layers.average_pooling3d
DepthwiseConv2d([layer, channel_multiplier, …]) Separable/Depthwise Convolutional 2D, see tf.nn.depthwise_conv2d.
SubpixelConv1d(net[, scale, act, name]) One-dimensional subpixel upsampling layer.
SubpixelConv2d(net[, scale, n_out_channel, …]) It is a sub-pixel 2d upsampling layer, usually be used for Super-Resolution applications, see example code.
SpatialTransformer2dAffineLayer([layer, …]) The SpatialTransformer2dAffineLayer class is a Spatial Transformer Layer for 2D Affine Transformation.
transformer(U, theta, out_size[, name]) Spatial Transformer Layer for 2D Affine Transformation , see SpatialTransformer2dAffineLayer class.
batch_transformer(U, thetas, out_size[, name]) Batch Spatial Transformer function for 2D Affine Transformation.
BatchNormLayer([layer, decay, epsilon, act, …]) The BatchNormLayer class is a normalization layer, see tf.nn.batch_normalization and tf.nn.moments.
LocalResponseNormLayer([layer, …]) The LocalResponseNormLayer class is for Local Response Normalization, see tf.nn.local_response_normalization or tf.nn.lrn for new TF version.
InstanceNormLayer([layer, act, epsilon, …]) The InstanceNormLayer class is for instance normalization.
LayerNormLayer([layer, center, scale, act, …]) The LayerNormLayer class is for layer normalization, see tf.contrib.layers.layer_norm.
ROIPoolingLayer([layer, rois, pool_height, …]) The ROIPoolingLayer class is Region of interest pooling layer.
TimeDistributedLayer([layer, layer_class, …]) The TimeDistributedLayer class that applies a function to every timestep of the input tensor.
RNNLayer([layer, cell_fn, cell_init_args, …]) The RNNLayer class is a RNN layer, you can implement vanilla RNN, LSTM and GRU with it.
BiRNNLayer([layer, cell_fn, cell_init_args, …]) The BiRNNLayer class is a Bidirectional RNN layer.
ConvRNNCell Abstract object representing an Convolutional RNN Cell.
BasicConvLSTMCell(shape, filter_size, …[, …]) Basic Conv LSTM recurrent network cell.
ConvLSTMLayer([layer, cell_shape, …]) The ConvLSTMLayer class is a Convolutional LSTM layer, see Convolutional LSTM Layer .
advanced_indexing_op(input, index) Advanced Indexing for Sequences, returns the outputs by given sequence lengths.
retrieve_seq_length_op(data) An op to compute the length of a sequence from input shape of [batch_size, n_step(max), n_features], it can be used when the features of padding (on right hand side) are all zeros.
retrieve_seq_length_op2(data) An op to compute the length of a sequence, from input shape of [batch_size, n_step(max)], it can be used when the features of padding (on right hand side) are all zeros.
DynamicRNNLayer([layer, cell_fn, …]) The DynamicRNNLayer class is a Dynamic RNN layer, see tf.nn.dynamic_rnn.
BiDynamicRNNLayer([layer, cell_fn, …]) The BiDynamicRNNLayer class is a RNN layer, you can implement vanilla RNN, LSTM and GRU with it.
Seq2Seq([net_encode_in, net_decode_in, …]) The Seq2Seq class is a Simple DynamicRNNLayer based Seq2seq layer without using tl.contrib.seq2seq.
PeekySeq2Seq([net_encode_in, net_decode_in, …]) Waiting for contribution.
AttentionSeq2Seq([net_encode_in, …]) Waiting for contribution.
FlattenLayer([layer, name]) The FlattenLayer class is layer which reshape high-dimension input to a vector.
ReshapeLayer([layer, shape, name]) The ReshapeLayer class is layer which reshape the tensor.
TransposeLayer([layer, perm, name]) The TransposeLayer class transposes the dimensions of a tensor, see tf.transpose() .
LambdaLayer([layer, fn, fn_args, name]) The LambdaLayer class is a layer which is able to use the provided function.
ConcatLayer([layer, concat_dim, name]) The ConcatLayer class is a layer which concatenates (merges) two or more tensors along a given axis.
ElementwiseLayer([layer, combine_fn, name]) The ElementwiseLayer class combines multiple Layer which have the same output shapes by a given elemwise-wise operation.
ExpandDimsLayer([layer, axis, name]) The ExpandDimsLayer class inserts a dimension of 1 into a tensor’s shape, see tf.expand_dims() .
TileLayer([layer, multiples, name]) The TileLayer class constructs a tensor by tiling a given tensor, see tf.tile() .
StackLayer([layer, axis, name]) The StackLayer class is layer for stacking a list of rank-R tensors into one rank-(R+1) tensor, see tf.stack().
UnStackLayer([layer, num, axis, name]) The UnStackLayer is a layer for unstacking the given dimension of a rank-R tensor into rank-(R-1) tensors, see tf.unstack().
EstimatorLayer([layer, model_fn, args, name]) The EstimatorLayer class accepts model_fn that described the model.
SlimNetsLayer([layer, slim_layer, …]) The SlimNetsLayer class can be used to merge all TF-Slim nets into TensorLayer.
KerasLayer([layer, keras_layer, keras_args, …]) The KerasLayer class can be used to merge all Keras layers into TensorLayer.
PReluLayer([layer, channel_shared, a_init, …]) The PReluLayer class is Parametric Rectified Linear layer.
MultiplexerLayer([layer, name]) The MultiplexerLayer selects one of several input and forwards the selected input into the output, see tutorial_mnist_multiplexer.py.
EmbeddingAttentionSeq2seqWrapper(…[, …]) Sequence-to-sequence model with attention and for multiple buckets (Deprecated after TF0.12).
flatten_reshape(variable[, name]) Reshapes high-dimension input to a vector.
clear_layers_name() Clear all layer names in set_keep[‘_layers_name_list’], enable layer name reuse.
initialize_rnn_state(state[, feed_dict]) Returns the initialized RNN state.
list_remove_repeat([l]) Remove the repeated items in a list, and return the processed list.
merge_networks([layers]) Merge all parameters, layers and dropout probabilities to a Layer.

Name Scope and Sharing Parameters

These functions help you to reuse parameters across different inference graphs and to get a list of parameters by a given name. For more about TensorFlow parameter sharing, click here.

Get variables with name

tensorlayer.layers.get_variables_with_name(name, train_only=True, printable=False)[source]

Get variable list by a given name scope.

Examples

>>> dense_vars = tl.layers.get_variables_with_name('dense', True, True)

Get layers with name

tensorlayer.layers.get_layers_with_name(network=None, name='', printable=False)[source]

Get layer list in a network by a given name scope.

Examples

>>> layers = tl.layers.get_layers_with_name(network, "CNN", True)

Enable layer name reuse

tensorlayer.layers.set_name_reuse(enable=True)[source]

Enable or disable reuse of layer names. By default, each layer must have a unique name. When you want two or more input placeholders (inferences) to share the same model parameters, you need to enable layer name reuse, which allows the parameters to use the same name scope.

Parameters:
enable : boolean, enable name reuse. (None means False).

Examples

>>> def embed_seq(input_seqs, is_train, reuse):
>>>    with tf.variable_scope("model", reuse=reuse):
>>>         tl.layers.set_name_reuse(reuse)
>>>         network = tl.layers.EmbeddingInputlayer(
...                     inputs = input_seqs,
...                     vocabulary_size = vocab_size,
...                     embedding_size = embedding_size,
...                     name = 'e_embedding')
>>>        network = tl.layers.DynamicRNNLayer(network,
...                     cell_fn = tf.contrib.rnn.BasicLSTMCell,
...                     n_hidden = embedding_size,
...                     dropout = (0.7 if is_train else None),
...                     initializer = w_init,
...                     sequence_length = tl.layers.retrieve_seq_length_op2(input_seqs),
...                     return_last = True,
...                     name = 'e_dynamicrnn')
>>>    return network
>>>
>>> net_train = embed_seq(t_caption, is_train=True, reuse=False)
>>> net_test = embed_seq(t_caption, is_train=False, reuse=True)
  • see tutorial_ptb_lstm.py for example.

Initialize variables

tensorlayer.layers.initialize_global_variables(sess=None)[source]

Execute sess.run(tf.global_variables_initializer()) for TF 0.12+ or sess.run(tf.initialize_all_variables()) for TF 0.11.

Parameters:
sess : a Session
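
Examples

A typical usage, as in the MNIST example above:

>>> sess = tf.InteractiveSession()
>>> # ... define the network ...
>>> tl.layers.initialize_global_variables(sess)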

Basic layer

class tensorlayer.layers.Layer(inputs=None, name='layer')[source]

The Layer class represents a single layer of a neural network. It should be subclassed when implementing new types of layers. Because each layer can keep track of the layer(s) feeding into it, a network’s output Layer instance can double as a handle to the full network.

Parameters:
inputs : a Layer instance

The Layer class feeding into this layer.

name : a string or None

An optional name to attach to this layer.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Input layer

class tensorlayer.layers.InputLayer(inputs=None, name='input_layer')[source]

The InputLayer class is the starting layer of a neural network.

Parameters:
inputs : a placeholder or tensor

The input tensor data.

name : a string or None

An optional name to attach to this layer.
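
Examples

A minimal usage example, mirroring the MNIST example above:

>>> x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
>>> network = tl.layers.InputLayer(x, name='input_layer')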

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

One-hot layer

class tensorlayer.layers.OneHotInputLayer(inputs=None, depth=None, on_value=None, off_value=None, axis=None, dtype=None, name='input_layer')[source]

The OneHotInputLayer class is the starting layer of a neural network, see tf.one_hot.

Parameters:
inputs : a placeholder or tensor

The input tensor data.

name : a string or None

An optional name to attach to this layer.

depth : int

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

on_value : default is None

If on_value is not provided, it will default to the value 1 with type dtype.

off_value : default is None

If off_value is not provided, it will default to the value 0 with type dtype.

axis : default is None
dtype : default is None
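
Examples

A minimal usage sketch; the placeholder of integer class indices and depth=10 are assumed example values:

>>> x = tf.placeholder(tf.int32, shape=[None])
>>> network = tl.layers.OneHotInputLayer(x, depth=10, name='onehot_input')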

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Word Embedding Input layer

Word2vec layer for training

class tensorlayer.layers.Word2vecEmbeddingInputlayer(inputs=None, train_labels=None, vocabulary_size=80000, embedding_size=200, num_sampled=64, nce_loss_args={}, E_init=<tensorflow.python.ops.init_ops.RandomUniform object>, E_init_args={}, nce_W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, nce_W_init_args={}, nce_b_init=<tensorflow.python.ops.init_ops.Constant object>, nce_b_init_args={}, name='word2vec_layer')[source]

The Word2vecEmbeddingInputlayer class is a fully connected layer, for Word Embedding. Words are input as integer index. The output is the embedded word vector.

Parameters:
inputs : placeholder

For word inputs. integer index format.

train_labels : placeholder

For word labels. integer index format.

vocabulary_size : int

The size of vocabulary, number of words.

embedding_size : int

The number of embedding dimensions.

num_sampled : int

The Number of negative examples for NCE loss.

nce_loss_args : a dictionary

The arguments for tf.nn.nce_loss()

E_init : embedding initializer

The initializer for initializing the embedding matrix.

E_init_args : a dictionary

The arguments for embedding initializer

nce_W_init : NCE decoder weights initializer

The initializer for initializing the nce decoder weight matrix.

nce_W_init_args : a dictionary

The arguments for initializing the nce decoder weight matrix.

nce_b_init : NCE decoder biases initializer

The initializer for tf.get_variable() of the nce decoder bias vector.

nce_b_init_args : a dictionary

The arguments for tf.get_variable() of the nce decoder bias vector.

name : a string or None

An optional name to attach to this layer.

Examples

  • Without TensorLayer : see tensorflow/examples/tutorials/word2vec/word2vec_basic.py
>>> train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
>>> train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
>>> embeddings = tf.Variable(
...     tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
>>> embed = tf.nn.embedding_lookup(embeddings, train_inputs)
>>> nce_weights = tf.Variable(
...     tf.truncated_normal([vocabulary_size, embedding_size],
...                    stddev=1.0 / math.sqrt(embedding_size)))
>>> nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
>>> cost = tf.reduce_mean(
...    tf.nn.nce_loss(weights=nce_weights, biases=nce_biases,
...               inputs=embed, labels=train_labels,
...               num_sampled=num_sampled, num_classes=vocabulary_size,
...               num_true=1))
  • With TensorLayer : see tutorial_word2vec_basic.py
>>> train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
>>> train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
>>> emb_net = tl.layers.Word2vecEmbeddingInputlayer(
...         inputs = train_inputs,
...         train_labels = train_labels,
...         vocabulary_size = vocabulary_size,
...         embedding_size = embedding_size,
...         num_sampled = num_sampled,
...        name ='word2vec_layer',
...    )
>>> cost = emb_net.nce_cost
>>> train_params = emb_net.all_params
>>> train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
...                                             cost, var_list=train_params)
>>> normalized_embeddings = emb_net.normalized_embeddings
Attributes:
nce_cost : a tensor

The NCE loss.

outputs : a tensor

The outputs of embedding layer.

normalized_embeddings : tensor

Normalized embedding matrix

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Embedding Input layer

class tensorlayer.layers.EmbeddingInputlayer(inputs=None, vocabulary_size=80000, embedding_size=200, E_init=<tensorflow.python.ops.init_ops.RandomUniform object>, E_init_args={}, name='embedding_layer')[source]

The EmbeddingInputlayer class is a fully connected layer, for Word Embedding. Words are input as integer index. The output is the embedded word vector.

If you have a pre-trained matrix, you can assign it to this layer. To train a word embedding matrix, you can use the Word2vecEmbeddingInputlayer class.

Note that this embedding matrix should not be updated during training.

Parameters:
inputs : placeholder

For word inputs in integer index format; a 2D tensor : [batch_size, num_steps(num_words)].

vocabulary_size : int

The size of vocabulary, number of words.

embedding_size : int

The number of embedding dimensions.

E_init : embedding initializer

The initializer for initializing the embedding matrix.

E_init_args : a dictionary

The arguments for embedding initializer

name : a string or None

An optional name to attach to this layer.

Examples

>>> vocabulary_size = 50000
>>> embedding_size = 200
>>> model_file_name = "model_word2vec_50k_200"
>>> batch_size = None
...
>>> all_var = tl.files.load_npy_to_any(name=model_file_name+'.npy')
>>> data = all_var['data']; count = all_var['count']
>>> dictionary = all_var['dictionary']
>>> reverse_dictionary = all_var['reverse_dictionary']
>>> tl.files.save_vocab(count, name='vocab_'+model_file_name+'.txt')
>>> del all_var, data, count
...
>>> load_params = tl.files.load_npz(name=model_file_name+'.npz')
>>> x = tf.placeholder(tf.int32, shape=[batch_size])
>>> y_ = tf.placeholder(tf.int32, shape=[batch_size, 1])
>>> emb_net = tl.layers.EmbeddingInputlayer(
...                inputs = x,
...                vocabulary_size = vocabulary_size,
...                embedding_size = embedding_size,
...                name ='embedding_layer')
>>> tl.layers.initialize_global_variables(sess)
>>> tl.files.assign_params(sess, [load_params[0]], emb_net)
>>> word = b'hello'
>>> word_id = dictionary[word]
>>> print('word_id:', word_id)
... 6428
...
>>> words = [b'i', b'am', b'hao', b'dong']
>>> word_ids = tl.files.words_to_word_ids(words, dictionary)
>>> context = tl.files.word_ids_to_words(word_ids, reverse_dictionary)
>>> print('word_ids:', word_ids)
... [72, 1226, 46744, 20048]
>>> print('context:', context)
... [b'i', b'am', b'hao', b'dong']
...
>>> vector = sess.run(emb_net.outputs, feed_dict={x : [word_id]})
>>> print('vector:', vector.shape)
... (1, 200)
>>> vectors = sess.run(emb_net.outputs, feed_dict={x : word_ids})
>>> print('vectors:', vectors.shape)
... (4, 200)
Attributes:
outputs : a tensor

The outputs of the embedding layer; a 3D tensor : [batch_size, num_steps(num_words), embedding_size].

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Average Embedding Input layer

class tensorlayer.layers.AverageEmbeddingInputlayer(inputs, vocabulary_size, embedding_size, pad_value=0, name='average_embedding_layer', embeddings_initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, embeddings_kwargs=None)[source]

The AverageEmbeddingInputlayer averages over embeddings of inputs, can be used as the input layer for models like DAN[1] and FastText[2].

Parameters:
inputs : input placeholder or tensor
vocabulary_size : an integer, the size of vocabulary
embedding_size : an integer, the dimension of embedding vectors
pad_value : an integer, the scalar pad value used in inputs
name : a string, the name of the layer
embeddings_initializer : the initializer of the embedding matrix
embeddings_kwargs : kwargs to get embedding matrix variable

References

  • [1] Iyyer, M., Manjunatha, V., Boyd-Graber, J., & Daumé III, H. (2015). Deep Unordered Composition Rivals Syntactic Methods for Text Classification. In Association for Computational Linguistics.
  • [2] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of Tricks for Efficient Text Classification. http://arxiv.org/abs/1607.01759
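
Examples

A minimal usage sketch; the placeholder shape, vocabulary_size and embedding_size are assumed example values:

>>> batch_size = 8
>>> length = 5
>>> x = tf.placeholder(tf.int32, shape=(batch_size, length), name='word_ids')
>>> network = tl.layers.AverageEmbeddingInputlayer(x, vocabulary_size=1000,
...                 embedding_size=50, name='avg_embedding_layer')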

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Dense layer

Dense layer

class tensorlayer.layers.DenseLayer(layer=None, n_units=100, act=<function identity>, W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='dense_layer')[source]

The DenseLayer class is a fully connected layer.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

n_units : int

The number of units of the layer.

act : activation function

The function that is applied to the layer activations.

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer or None

The initializer for initializing the bias vector. If None, skip biases.

W_init_args : dictionary

The arguments for the weights tf.get_variable.

b_init_args : dictionary

The arguments for the biases tf.get_variable.

name : a string or None

An optional name to attach to this layer.

Notes

If the input to this layer has more than two axes, you need to flatten it first by using FlattenLayer.

Examples

>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(
...                 network,
...                 n_units=800,
...                 act = tf.nn.relu,
...                 W_init=tf.truncated_normal_initializer(stddev=0.1),
...                 name ='relu_layer'
...                 )
  • Without TensorLayer, you can do as follows:
>>> W = tf.Variable(
...     tf.random_uniform([n_in, n_units], -1.0, 1.0), name='W')
>>> b = tf.Variable(tf.zeros(shape=[n_units]), name='b')
>>> y = tf.nn.relu(tf.matmul(inputs, W) + b)

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Reconstruction layer for Autoencoder

class tensorlayer.layers.ReconLayer(layer=None, x_recon=None, name='recon_layer', n_units=784, act=<function softplus>)[source]

The ReconLayer class is a reconstruction DenseLayer which is used to pre-train a DenseLayer.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

x_recon : tensorflow variable

The variables used for reconstruction.

name : a string or None

An optional name to attach to this layer.

n_units : int

The number of units of the layer; it should be equal to the dimension of x_recon.

act : activation function

The activation function that is applied to the reconstruction layer. Normally, for sigmoid layer, the reconstruction activation is sigmoid; for rectifying layer, the reconstruction activation is softplus.

Notes

The input layer should be a DenseLayer or a layer whose output has only one feature axis. You may need to modify this part to define your own cost function. By default, the cost is implemented as follows:
  • For a sigmoid layer, the implementation can follow UFLDL.
  • For a rectifying layer, the implementation can follow Glorot (2011), Deep Sparse Rectifier Neural Networks.

Examples

>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(network, n_units=196,
...                                 act=tf.nn.sigmoid, name='sigmoid1')
>>> recon_layer1 = tl.layers.ReconLayer(network, x_recon=x, n_units=784,
...                                 act=tf.nn.sigmoid, name='recon_layer1')
>>> recon_layer1.pretrain(sess, x=x, X_train=X_train, X_val=X_val,
...                         denoise_name=None, n_epoch=1200, batch_size=128,
...                         print_freq=10, save=True, save_name='w1pre_')

Methods

pretrain(self, sess, x, X_train, X_val, denoise_name=None, n_epoch=100, batch_size=128, print_freq=10, save=True, save_name=’w1pre_’) Start to pre-train the parameters of previous DenseLayer.

Noise layer

Dropout layer

class tensorlayer.layers.DropoutLayer(layer=None, keep=0.5, is_fix=False, is_train=True, seed=None, name='dropout_layer')[source]

The DropoutLayer class is a noise layer which randomly set some values to zero by a given keeping probability.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

keep : float

The keeping probability: the lower it is, the more values will be set to zero.

is_fix : boolean

Default False, if True, the keeping probability is fixed and cannot be changed via feed_dict.

is_train : boolean

If False, skip this layer, default is True.

seed : int or None

An integer or None to create random seed.

name : a string or None

An optional name to attach to this layer.

Notes

In many simple cases, users may find it better to use one inference graph instead of two separate inferences for training and testing; DropoutLayer allows you to control the dropout rate via feed_dict. However, you can fix the keeping probability by setting is_fix to True.

Examples

  • Define network
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
>>> network = tl.layers.DenseLayer(network, n_units=800, act = tf.nn.relu, name='relu1')
>>> ...
  • For training, enable dropout as follow.
>>> feed_dict = {x: X_train_a, y_: y_train_a}
>>> feed_dict.update( network.all_drop )     # enable noise layers
>>> sess.run(train_op, feed_dict=feed_dict)
>>> ...
  • For testing, disable dropout as follow.
>>> dp_dict = tl.utils.dict_to_one( network.all_drop ) # disable noise layers
>>> feed_dict = {x: X_val_a, y_: y_val_a}
>>> feed_dict.update(dp_dict)
>>> err, ac = sess.run([cost, acc], feed_dict=feed_dict)
>>> ...

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Gaussian noise layer

class tensorlayer.layers.GaussianNoiseLayer(layer=None, mean=0.0, stddev=1.0, is_train=True, seed=None, name='gaussian_noise_layer')[source]

The GaussianNoiseLayer class is a noise layer that adds normally distributed noise to the activation.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

mean : float
stddev : float
is_train : boolean

If False, skip this layer, default is True.

seed : int or None

An integer or None to create random seed.

name : a string or None

An optional name to attach to this layer.
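
Examples

A minimal usage sketch; the mean and stddev values are assumed examples:

>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(network, n_units=100, act=tf.nn.relu, name='dense1')
>>> network = tl.layers.GaussianNoiseLayer(network, mean=0.0, stddev=0.1, name='gaussian_noise')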

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Dropconnect + Dense layer

class tensorlayer.layers.DropconnectDenseLayer(layer=None, keep=0.5, n_units=100, act=<function identity>, W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='dropconnect_layer')[source]

The DropconnectDenseLayer class is a DenseLayer with DropConnect behaviour, which randomly removes connections between this layer and the previous layer with a given keeping probability.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

keep : float

The keeping probability: the lower it is, the more values will be set to zero.

n_units : int

The number of units of the layer.

act : activation function

The function that is applied to the layer activations.

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer

The initializer for initializing the bias vector.

W_init_args : dictionary

The arguments for the weights tf.get_variable().

b_init_args : dictionary

The arguments for the biases tf.get_variable().

name : a string or None

An optional name to attach to this layer.

Examples

>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DropconnectDenseLayer(network, keep = 0.8,
...         n_units=800, act = tf.nn.relu, name='dropconnect_relu1')
>>> network = tl.layers.DropconnectDenseLayer(network, keep = 0.5,
...         n_units=800, act = tf.nn.relu, name='dropconnect_relu2')
>>> network = tl.layers.DropconnectDenseLayer(network, keep = 0.5,
...         n_units=10, act = tl.activation.identity, name='output_layer')

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Convolutional layer (Pro)

1D Convolution

class tensorlayer.layers.Conv1dLayer(layer=None, act=<function identity>, shape=[5, 1, 5], stride=1, dilation_rate=1, padding='SAME', use_cudnn_on_gpu=None, data_format='NWC', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='cnn_layer')[source]

The Conv1dLayer class is a 1D CNN layer, see tf.nn.convolution.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer, [batch, in_width, in_channels].

act : activation function, None for identity.
shape : list of shape

shape of the filters, [filter_length, in_channels, out_channels].

stride : an int.

The number of entries by which the filter is moved right at each step.

dilation_rate : an int.

Specifies the filter upsampling/input downsampling rate.

padding : a string from: “SAME”, “VALID”.

The type of padding algorithm to use.

use_cudnn_on_gpu : An optional bool. Defaults to True.
data_format : As it is 1D conv, default is ‘NWC’.
W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer or None

The initializer for initializing the bias vector. If None, skip biases.

W_init_args : dictionary

The arguments for the weights tf.get_variable().

b_init_args : dictionary

The arguments for the biases tf.get_variable().

name : a string or None

An optional name to attach to this layer.
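
Examples

A minimal usage sketch; the input width of 500, one input channel and 32 filters of length 5 are assumed example values:

>>> x = tf.placeholder(tf.float32, [None, 500, 1])   # [batch, in_width, in_channels]
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.Conv1dLayer(network, act=tf.nn.relu, shape=[5, 1, 32],
...                 stride=2, padding='SAME', name='cnn1d_layer1')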

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

2D Convolution

class tensorlayer.layers.Conv2dLayer(layer=None, act=<function identity>, shape=[5, 5, 1, 100], strides=[1, 1, 1, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, use_cudnn_on_gpu=None, data_format=None, name='cnn_layer')[source]

The Conv2dLayer class is a 2D CNN layer, see tf.nn.conv2d.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

act : activation function

The function that is applied to the layer activations.

shape : list of shape

shape of the filters, [filter_height, filter_width, in_channels, out_channels].

strides : a list of ints.

The stride of the sliding window for each dimension of input.

It Must be in the same order as the dimension specified with format.

padding : a string from: “SAME”, “VALID”.

The type of padding algorithm to use.

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer or None

The initializer for initializing the bias vector. If None, skip biases.

W_init_args : dictionary

The arguments for the weights tf.get_variable().

b_init_args : dictionary

The arguments for the biases tf.get_variable().

use_cudnn_on_gpu : bool, default is None.
data_format : string “NHWC” or “NCHW”, default is “NHWC”
name : a string or None

An optional name to attach to this layer.

Notes

  • shape = [h, w, the number of output channel of previous layer, the number of output channels]
  • the number of output channel of a layer is its last dimension.

Examples

>>> x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.Conv2dLayer(network,
...                   act = tf.nn.relu,
...                   shape = [5, 5, 1, 32],  # 32 features for each 5x5 patch
...                   strides=[1, 1, 1, 1],
...                   padding='SAME',
...                   W_init=tf.truncated_normal_initializer(stddev=5e-2),
...                   W_init_args={},
...                   b_init = tf.constant_initializer(value=0.0),
...                   b_init_args = {},
...                   name ='cnn_layer1')     # output: (?, 28, 28, 32)
>>> network = tl.layers.PoolLayer(network,
...                   ksize=[1, 2, 2, 1],
...                   strides=[1, 2, 2, 1],
...                   padding='SAME',
...                   pool = tf.nn.max_pool,
...                   name ='pool_layer1',)   # output: (?, 14, 14, 32)
  • Without TensorLayer, you can implement 2D convolution as follows:
>>> W = tf.Variable(W_init(shape=[5, 5, 1, 32], ), name='W_conv')
>>> b = tf.Variable(b_init(shape=[32], ), name='b_conv')
>>> outputs = tf.nn.relu( tf.nn.conv2d(inputs, W,
...                       strides=[1, 1, 1, 1],
...                       padding='SAME') + b )

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

2D Deconvolution

class tensorlayer.layers.DeConv2dLayer(layer=None, act=<function identity>, shape=[3, 3, 128, 256], output_shape=[1, 256, 256, 128], strides=[1, 2, 2, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='decnn2d_layer')[source]

The DeConv2dLayer class is deconvolutional 2D layer, see tf.nn.conv2d_transpose.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

act : activation function

The function that is applied to the layer activations.

shape : list of shape

shape of the filters, [height, width, output_channels, in_channels], filter’s in_channels dimension must match that of value.

output_shape : list of output shape

representing the output shape of the deconvolution op.

strides : a list of ints.

The stride of the sliding window for each dimension of the input tensor.

padding : a string from: “SAME”, “VALID”.

The type of padding algorithm to use.

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer

The initializer for initializing the bias vector. If None, skip biases.

W_init_args : dictionary

The arguments for the weights initializer.

b_init_args : dictionary

The arguments for the biases initializer.

name : a string or None

An optional name to attach to this layer.

Notes

  • shape = [h, w, the number of output channels of this layer, the number of output channel of previous layer]
  • output_shape = [batch_size, any, any, the number of output channels of this layer]
  • the number of output channel of a layer is its last dimension.

Examples

  • A part of the generator in DCGAN example
>>> batch_size = 64
>>> inputs = tf.placeholder(tf.float32, [batch_size, 100], name='z_noise')
>>> net_in = tl.layers.InputLayer(inputs, name='g/in')
>>> net_h0 = tl.layers.DenseLayer(net_in, n_units = 8192,
...                            W_init = tf.random_normal_initializer(stddev=0.02),
...                            act = tf.identity, name='g/h0/lin')
>>> print(net_h0.outputs._shape)
... (64, 8192)
>>> net_h0 = tl.layers.ReshapeLayer(net_h0, shape = [-1, 4, 4, 512], name='g/h0/reshape')
>>> net_h0 = tl.layers.BatchNormLayer(net_h0, act=tf.nn.relu, is_train=is_train, name='g/h0/batch_norm')
>>> print(net_h0.outputs._shape)
... (64, 4, 4, 512)
>>> net_h1 = tl.layers.DeConv2dLayer(net_h0,
...                            shape = [5, 5, 256, 512],
...                            output_shape = [batch_size, 8, 8, 256],
...                            strides=[1, 2, 2, 1],
...                            act=tf.identity, name='g/h1/decon2d')
>>> net_h1 = tl.layers.BatchNormLayer(net_h1, act=tf.nn.relu, is_train=is_train, name='g/h1/batch_norm')
>>> print(net_h1.outputs._shape)
... (64, 8, 8, 256)
  • U-Net
>>> ....
>>> conv10 = tl.layers.Conv2dLayer(conv9, act=tf.nn.relu,
...        shape=[3,3,1024,1024], strides=[1,1,1,1], padding='SAME',
...        W_init=w_init, b_init=b_init, name='conv10')
>>> print(conv10.outputs)
... (batch_size, 32, 32, 1024)
>>> deconv1 = tl.layers.DeConv2dLayer(conv10, act=tf.nn.relu,
...         shape=[3,3,512,1024], strides=[1,2,2,1], output_shape=[batch_size,64,64,512],
...         padding='SAME', W_init=w_init, b_init=b_init, name='devcon1_1')

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

3D Convolution

class tensorlayer.layers.Conv3dLayer(layer=None, act=<function identity>, shape=[2, 2, 2, 64, 128], strides=[1, 2, 2, 2, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='cnn3d_layer')[source]

The Conv3dLayer class is a 3D CNN layer, see tf.nn.conv3d.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

act : activation function

The function that is applied to the layer activations.

shape : list of shape

shape of the filters, [filter_depth, filter_height, filter_width, in_channels, out_channels].

strides : a list of ints. 1-D of length 4.

The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format.

padding : a string from: “SAME”, “VALID”.

The type of padding algorithm to use.

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer

The initializer for initializing the bias vector.

W_init_args : dictionary

The arguments for the weights initializer.

b_init_args : dictionary

The arguments for the biases initializer.

name : a string or None

An optional name to attach to this layer.
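
Examples

A minimal usage sketch; the 5-D input shape and filter sizes are assumed example values:

>>> x = tf.placeholder(tf.float32, [None, 16, 32, 32, 3])   # [batch, depth, height, width, channels]
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.Conv3dLayer(network, act=tf.nn.relu, shape=[2, 2, 2, 3, 32],
...                 strides=[1, 2, 2, 2, 1], padding='SAME', name='cnn3d_layer1')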

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

3D Deconvolution

class tensorlayer.layers.DeConv3dLayer(layer=None, act=<function identity>, shape=[2, 2, 2, 128, 256], output_shape=[1, 12, 32, 32, 128], strides=[1, 2, 2, 2, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='decnn3d_layer')[source]

The DeConv3dLayer class is deconvolutional 3D layer, see tf.nn.conv3d_transpose.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

act : activation function

The function that is applied to the layer activations.

shape : list of shape

shape of the filters, [depth, height, width, output_channels, in_channels], filter’s in_channels dimension must match that of value.

output_shape : list of output shape

representing the output shape of the deconvolution op.

strides : a list of ints.

The stride of the sliding window for each dimension of the input tensor.

padding : a string from: “SAME”, “VALID”.

The type of padding algorithm to use.

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer

The initializer for initializing the bias vector.

W_init_args : dictionary

The arguments for the weights initializer.

b_init_args : dictionary

The arguments for the biases initializer.

name : a string or None

An optional name to attach to this layer.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

2D UpSampling

class tensorlayer.layers.UpSampling2dLayer(layer=None, size=[], is_scale=True, method=0, align_corners=False, name='upsample2d_layer')[source]

The UpSampling2dLayer class is upSampling 2d layer, see tf.image.resize_images.

Parameters:
layer : a layer class with 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels].
size : a tuple of int or float.

(height, width) scale factor or new size of height and width.

is_scale : boolean, if True (default), size is scale factor, otherwise, size is number of pixels of height and width.
method : 0, 1, 2, 3. ResizeMethod. Defaults to ResizeMethod.BILINEAR.
  • ResizeMethod.BILINEAR, Bilinear interpolation.
  • ResizeMethod.NEAREST_NEIGHBOR, Nearest neighbor interpolation.
  • ResizeMethod.BICUBIC, Bicubic interpolation.
  • ResizeMethod.AREA, Area interpolation.
align_corners : bool. If true, exactly align all 4 corners of the input and output. Defaults to false.
name : a string or None

An optional name to attach to this layer.
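
Examples

A minimal usage sketch that doubles the height and width of a 4-D feature map (assumed example values):

>>> network = tl.layers.InputLayer(x, name='input_layer')   # x : [batch, height, width, channels]
>>> network = tl.layers.UpSampling2dLayer(network, size=[2, 2], is_scale=True,
...                 method=0, name='upsample2d_layer1')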

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

2D DownSampling

class tensorlayer.layers.DownSampling2dLayer(layer=None, size=[], is_scale=True, method=0, align_corners=False, name='downsample2d_layer')[source]

The DownSampling2dLayer class is downSampling 2d layer, see tf.image.resize_images.

Parameters:
layer : a layer class with 4-D Tensor of shape [batch, height, width, channels] or 3-D Tensor of shape [height, width, channels].
size : a tuple of int or float.

(height, width) scale factor or new size of height and width.

is_scale : boolean, if True (default), size is scale factor, otherwise, size is number of pixels of height and width.
method : 0, 1, 2, 3. ResizeMethod. Defaults to ResizeMethod.BILINEAR.
  • ResizeMethod.BILINEAR, Bilinear interpolation.
  • ResizeMethod.NEAREST_NEIGHBOR, Nearest neighbor interpolation.
  • ResizeMethod.BICUBIC, Bicubic interpolation.
  • ResizeMethod.AREA, Area interpolation.
align_corners : bool. If true, exactly align all 4 corners of the input and output. Defaults to false.
name : a string or None

An optional name to attach to this layer.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

2D Deformable Conv

class tensorlayer.layers.DeformableConv2dLayer(layer=None, act=<function identity>, offset_layer=None, shape=[3, 3, 1, 100], name='deformable_conv_2d_layer', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={})[source]

The DeformableConv2dLayer class is a Deformable Convolutional Network layer.

Parameters:
layer : TensorLayer layer.
offset_layer : TensorLayer layer, to predict the offset of convolutional operations. The shape of its output should be (batchsize, input height, input width, 2*(number of element in the convolutional kernel))

e.g. if apply a 3*3 kernel, the number of the last dimension should be 18 (2*3*3)

channel_multiplier : int, The number of channels to expand to.
filter_size : tuple (height, width) for filter size.
strides : tuple (height, width) for strides. Current implementation fix to (1, 1, 1, 1)
act : None or activation function.
shape : list of shape

shape of the filters, [filter_height, filter_width, in_channels, out_channels].

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer or None

The initializer for initializing the bias vector. If None, skip biases.

W_init_args : dictionary

The arguments for the weights tf.get_variable().

b_init_args : dictionary

The arguments for the biases tf.get_variable().

name : a string or None

An optional name to attach to this layer.

Notes

  • The stride is fixed as (1, 1, 1, 1).
  • The padding is fixed as ‘SAME’.
  • The current implementation is memory-inefficient, please use carefully.

Examples

>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> offset_1 = tl.layers.Conv2dLayer(layer=network, act=act, shape=[3, 3, 3, 18], strides=[1, 1, 1, 1],padding='SAME', name='offset_layer1')
>>> network = tl.layers.DeformableConv2dLayer(layer=network, act=act, offset_layer=offset_1,  shape=[3, 3, 3, 32],  name='deformable_conv_2d_layer1')
>>> offset_2 = tl.layers.Conv2dLayer(layer=network, act=act, shape=[3, 3, 32, 18], strides=[1, 1, 1, 1], padding='SAME', name='offset_layer2')
>>> network = tl.layers.DeformableConv2dLayer(layer=network, act = act, offset_layer=offset_2, shape=[3, 3, 32, 64], name='deformable_conv_2d_layer2')

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

1D Atrous convolution

tensorlayer.layers.AtrousConv1dLayer(net, n_filter=32, filter_size=2, stride=1, dilation=1, act=None, padding='SAME', use_cudnn_on_gpu=None, data_format='NWC', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='conv1d')[source]

Wrapper for 1D atrous (dilated) convolution; if you don’t understand how to use Conv1dLayer, this function may be easier.

Parameters:
net : TensorLayer layer.
n_filter : number of filter.
filter_size : an int.
stride : an int.
dilation : an int, filter dilation size.
act : None or activation function.
others : see Conv1dLayer.

2D Atrous convolution

class tensorlayer.layers.AtrousConv2dLayer(layer=None, n_filter=32, filter_size=(3, 3), rate=2, act=None, padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='atrou2d')[source]

The AtrousConv2dLayer class is Atrous convolution (a.k.a. convolution with holes or dilated convolution) 2D layer, see tf.nn.atrous_conv2d.

Parameters:
layer : a layer class with 4-D Tensor of shape [batch, height, width, channels].
filters : A 4-D Tensor with the same type as value and shape [filter_height, filter_width, in_channels, out_channels]. filters’ in_channels dimension must match that of value. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height filter_height + (filter_height - 1) * (rate - 1) and effective width filter_width + (filter_width - 1) * (rate - 1), produced by inserting rate - 1 zeros along consecutive elements across the filters’ spatial dimensions.
n_filter : number of filter.
filter_size : tuple (height, width) for filter size.
rate : A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input stride or dilation.
act : activation function, None for linear.
padding : A string, either ‘VALID’ or ‘SAME’. The padding algorithm.
W_init : weights initializer. The initializer for initializing the weight matrix.
b_init : biases initializer or None. The initializer for initializing the bias vector. If None, skip biases.
W_init_args : dictionary. The arguments for the weights tf.get_variable().
b_init_args : dictionary. The arguments for the biases tf.get_variable().
name : a string or None, an optional name to attach to this layer.
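
Examples

A minimal usage sketch, assuming a 3-channel image input; the shapes and names are illustrative:

>>> x = tf.placeholder(tf.float32, [None, 64, 64, 3], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.AtrousConv2dLayer(net, n_filter=32, filter_size=(3, 3),
...                 rate=2, act=tf.nn.relu, padding='SAME', name='atrous2d')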

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Convolutional layer (Simplified)

For users not familiar with TensorFlow, the following simplified functions may be easier for you. We will provide more simplified functions later, but if you are good at TensorFlow, the professional APIs may suit you better.

1D Convolution

tensorlayer.layers.Conv1d(net, n_filter=32, filter_size=5, stride=1, dilation_rate=1, act=None, padding='SAME', use_cudnn_on_gpu=None, data_format='NWC', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='conv1d')[source]

Wrapper for Conv1dLayer; if you don’t understand how to use Conv1dLayer, this function may be easier.

Parameters:
net : TensorLayer layer.
n_filter : number of filter.
filter_size : an int.
stride : an int.
dilation_rate : an int, filter dilation size (as it is 1D conv, the default data_format is “NWC”).
act : None or activation function.
others : see Conv1dLayer.

Examples

>>> x = tf.placeholder(tf.float32, [batch_size, width])
>>> y_ = tf.placeholder(tf.int64, shape=[batch_size,])
>>> n = InputLayer(x, name='in')
>>> n = ReshapeLayer(n, [-1, width, 1], name='rs')
>>> n = Conv1d(n, 64, 3, 1, act=tf.nn.relu, name='c1')
>>> n = MaxPool1d(n, 2, 2, padding='valid', name='m1')
>>> n = Conv1d(n, 128, 3, 1, act=tf.nn.relu, name='c2')
>>> n = MaxPool1d(n, 2, 2, padding='valid', name='m2')
>>> n = Conv1d(n, 128, 3, 1, act=tf.nn.relu, name='c3')
>>> n = MaxPool1d(n, 2, 2, padding='valid', name='m3')
>>> n = FlattenLayer(n, name='f')
>>> n = DenseLayer(n, 500, tf.nn.relu, name='d1')
>>> n = DenseLayer(n, 100, tf.nn.relu, name='d2')
>>> n = DenseLayer(n, 2, tf.identity, name='o')

2D Convolution

tensorlayer.layers.Conv2d(net, n_filter=32, filter_size=(3, 3), strides=(1, 1), act=None, padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, use_cudnn_on_gpu=None, data_format=None, name='conv2d')[source]

Wrapper for Conv2dLayer; if you don’t understand how to use Conv2dLayer, this function may be easier.

Parameters:
net : TensorLayer layer.
n_filter : number of filter.
filter_size : tuple (height, width) for filter size.
strides : tuple (height, width) for strides.
act : None or activation function.
others : see Conv2dLayer.

Examples

>>> w_init = tf.truncated_normal_initializer(stddev=0.01)
>>> b_init = tf.constant_initializer(value=0.0)
>>> inputs = InputLayer(x, name='inputs')
>>> conv1 = Conv2d(inputs, 64, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv1_1')
>>> conv1 = Conv2d(conv1, 64, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv1_2')
>>> pool1 = MaxPool2d(conv1, (2, 2), padding='SAME', name='pool1')
>>> conv2 = Conv2d(pool1, 128, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv2_1')
>>> conv2 = Conv2d(conv2, 128, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv2_2')
>>> pool2 = MaxPool2d(conv2, (2, 2), padding='SAME', name='pool2')

2D Deconvolution

tensorlayer.layers.DeConv2d(net, n_out_channel=32, filter_size=(3, 3), out_size=(30, 30), strides=(2, 2), padding='SAME', batch_size=None, act=None, W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='decnn2d')[source]

Wrapper for DeConv2dLayer; if you don’t understand how to use DeConv2dLayer, this function may be easier.

Parameters:
net : TensorLayer layer.
n_out_channel : int, number of output channel.
filter_size : tuple of (height, width) for filter size.
out_size : tuple of (height, width) of output.
batch_size : int or None, batch_size. If None, try to find the batch_size from the first dim of net.outputs (you should tell the batch_size when defining the input placeholder).
strides : tuple of (height, width) for strides.
act : None or activation function.
others : see DeConv2dLayer.
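
Examples

A minimal sketch for 2x upsampling of a feature map, assuming batch_size is an int defined beforehand; the shapes and names are illustrative:

>>> batch_size = 32
>>> x = tf.placeholder(tf.float32, [batch_size, 32, 32, 64], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.DeConv2d(net, n_out_channel=32, filter_size=(3, 3),
...                 out_size=(64, 64), strides=(2, 2), padding='SAME',
...                 batch_size=batch_size, act=tf.nn.relu, name='deconv2d')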

1D Max pooling

tensorlayer.layers.MaxPool1d(net, filter_size, strides, padding='valid', data_format='channels_last', name=None)[source]

Wrapper for tf.layers.max_pooling1d .

Parameters:
net : TensorLayer layer, the tensor over which to pool. Must have rank 3.
filter_size (pool_size) : An integer or tuple/list of a single integer, representing the size of the pooling window.
strides : An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
padding : A string. The padding method, either ‘valid’ or ‘same’. Case-insensitive.
data_format : A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length).
name : A string, the name of the layer.
Returns:
- A Layer whose output tensor has rank 3.

1D Mean pooling

tensorlayer.layers.MeanPool1d(net, filter_size, strides, padding='valid', data_format='channels_last', name=None)[source]

Wrapper for tf.layers.average_pooling1d .

Parameters:
net : TensorLayer layer, the tensor over which to pool. Must have rank 3.
filter_size (pool_size) : An integer or tuple/list of a single integer, representing the size of the pooling window.
strides : An integer or tuple/list of a single integer, specifying the strides of the pooling operation.
padding : A string. The padding method, either ‘valid’ or ‘same’. Case-insensitive.
data_format : A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length).
name : A string, the name of the layer.
Returns:
- A Layer whose output tensor has rank 3.
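
Examples

A minimal sketch mirroring the Conv1d example above; the shapes and names are illustrative:

>>> x = tf.placeholder(tf.float32, [None, 100, 16], name='x')  # (batch, length, channels)
>>> n = tl.layers.InputLayer(x, name='in')
>>> n = tl.layers.MeanPool1d(n, filter_size=2, strides=2, padding='valid', name='meanpool1d')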

2D Max pooling

tensorlayer.layers.MaxPool2d(net, filter_size=(2, 2), strides=None, padding='SAME', name='maxpool')[source]

Wrapper for PoolLayer.

Parameters:
net : TensorLayer layer.
filter_size : tuple of (height, width) for filter size.
strides : tuple of (height, width). Default is the same with filter_size.
others : see PoolLayer.

2D Mean pooling

tensorlayer.layers.MeanPool2d(net, filter_size=(2, 2), strides=None, padding='SAME', name='meanpool')[source]

Wrapper for PoolLayer.

Parameters:
net : TensorLayer layer.
filter_size : tuple of (height, width) for filter size.
strides : tuple of (height, width). Default is the same with filter_size.
others : see PoolLayer.
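
Examples

A minimal sketch; the shapes and names are illustrative:

>>> x = tf.placeholder(tf.float32, [None, 64, 64, 32], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.MeanPool2d(net, filter_size=(2, 2), strides=(2, 2), padding='SAME', name='meanpool')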

3D Max pooling

tensorlayer.layers.MaxPool3d(net, filter_size, strides, padding='valid', data_format='channels_last', name=None)[source]

Wrapper for tf.layers.max_pooling3d .

Parameters:
net : TensorLayer layer, the tensor over which to pool. Must have rank 5.
filter_size (pool_size) : An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
strides : An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
padding : A string. The padding method, either ‘valid’ or ‘same’. Case-insensitive.
data_format : A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width).
name : A string, the name of the layer.

3D Mean pooling

tensorlayer.layers.MeanPool3d(net, filter_size, strides, padding='valid', data_format='channels_last', name=None)[source]

Wrapper for tf.layers.average_pooling3d

Parameters:
net : TensorLayer layer, the tensor over which to pool. Must have rank 5.
filter_size (pool_size) : An integer or tuple/list of 3 integers: (pool_depth, pool_height, pool_width) specifying the size of the pooling window. Can be a single integer to specify the same value for all spatial dimensions.
strides : An integer or tuple/list of 3 integers, specifying the strides of the pooling operation. Can be a single integer to specify the same value for all spatial dimensions.
padding : A string. The padding method, either ‘valid’ or ‘same’. Case-insensitive.
data_format : A string. The ordering of the dimensions in the inputs. channels_last (default) and channels_first are supported. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width).
name : A string, the name of the layer.
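
Examples

A minimal sketch covering both MaxPool3d and MeanPool3d above; the volumetric input shape and names are illustrative:

>>> t = tf.placeholder(tf.float32, [None, 16, 64, 64, 3], name='t')  # (batch, depth, height, width, channels)
>>> n = tl.layers.InputLayer(t, name='in')
>>> n = tl.layers.MaxPool3d(n, filter_size=(2, 2, 2), strides=(2, 2, 2), padding='valid', name='maxpool3d')
>>> n = tl.layers.MeanPool3d(n, filter_size=(2, 2, 2), strides=(2, 2, 2), padding='valid', name='meanpool3d')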

2D Depthwise/Separable Conv

class tensorlayer.layers.DepthwiseConv2d(layer=None, channel_multiplier=3, shape=(3, 3), strides=(1, 1), act=None, padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='depthwise_conv2d')[source]

Separable/Depthwise Convolutional 2D, see tf.nn.depthwise_conv2d.

Input:
4-D Tensor [batch, height, width, in_channels].
Output:
4-D Tensor [batch, new height, new width, in_channels * channel_multiplier].
Parameters:
net : TensorLayer layer.
channel_multiplier : int, The number of channels to expand to.
filter_size : tuple (height, width) for filter size.
strides : tuple (height, width) for strides.
act : None or activation function.
padding : a string from: “SAME”, “VALID”.

The type of padding algorithm to use.

W_init : weights initializer

The initializer for initializing the weight matrix.

b_init : biases initializer or None

The initializer for initializing the bias vector. If None, skip biases.

W_init_args : dictionary

The arguments for the weights tf.get_variable().

b_init_args : dictionary

The arguments for the biases tf.get_variable().

name : a string or None

An optional name to attach to this layer.

References

Examples

>>> t_im = tf.placeholder("float32", [None, 256, 256, 3])
>>> net = InputLayer(t_im, name='in')
>>> net = DepthwiseConv2d(net, 32, (3, 3), (1, 1, 1, 1), tf.nn.relu, padding="SAME", name='dep')
>>> print(net.outputs.get_shape())
... (?, 256, 256, 96)

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Super-Resolution layer

1D Subpixel Convolution

tensorlayer.layers.SubpixelConv1d(net, scale=2, act=<function identity>, name='subpixel_conv1d')[source]

One-dimensional subpixel upsampling layer. Calls a tensorflow function that directly implements this functionality. We assume input has dim (batch, width, r)

Parameters:
net : TensorLayer layer.
scale : int, upscaling ratio, a wrong setting will lead to Dimension size error.
act : activation function.
name : string.

An optional name to attach to this layer.

References

Examples

>>> t_signal = tf.placeholder('float32', [10, 100, 4], name='x')
>>> n = InputLayer(t_signal, name='in')
>>> n = SubpixelConv1d(n, scale=2, name='s')
>>> print(n.outputs.shape)
... (10, 200, 2)

2D Subpixel Convolution

tensorlayer.layers.SubpixelConv2d(net, scale=2, n_out_channel=None, act=<function identity>, name='subpixel_conv2d')[source]

It is a sub-pixel 2D upsampling layer, usually used for Super-Resolution applications; see the example code.

Parameters:
net : TensorLayer layer.
scale : int, upscaling ratio, a wrong setting will lead to Dimension size error.
n_out_channel : int or None, the number of output channels.

Note that the number of input channels == (scale x scale) x the number of output channels. If None, n_out_channel is automatically set to the number of input channels / (scale x scale).

act : activation function.
name : string.

An optional name to attach to this layer.

References

Examples

>>> # examples here just want to tell you how to set the n_out_channel.
>>> x = np.random.rand(2, 16, 16, 4)
>>> X = tf.placeholder("float32", shape=(2, 16, 16, 4), name="X")
>>> net = InputLayer(X, name='input')
>>> net = SubpixelConv2d(net, scale=2, n_out_channel=1, name='subpixel_conv2d')
>>> y = sess.run(net.outputs, feed_dict={X: x})
>>> print(x.shape, y.shape)
... (2, 16, 16, 4) (2, 32, 32, 1)
>>>
>>> x = np.random.rand(2, 16, 16, 4*10)
>>> X = tf.placeholder("float32", shape=(2, 16, 16, 4*10), name="X")
>>> net = InputLayer(X, name='input2')
>>> net = SubpixelConv2d(net, scale=2, n_out_channel=10, name='subpixel_conv2d2')
>>> y = sess.run(net.outputs, feed_dict={X: x})
>>> print(x.shape, y.shape)
... (2, 16, 16, 40) (2, 32, 32, 10)
>>>
>>> x = np.random.rand(2, 16, 16, 25*10)
>>> X = tf.placeholder("float32", shape=(2, 16, 16, 25*10), name="X")
>>> net = InputLayer(X, name='input3')
>>> net = SubpixelConv2d(net, scale=5, n_out_channel=None, name='subpixel_conv2d3')
>>> y = sess.run(net.outputs, feed_dict={X: x})
>>> print(x.shape, y.shape)
... (2, 16, 16, 250) (2, 80, 80, 10)

Spatial Transformer

2D Affine Transformation

class tensorlayer.layers.SpatialTransformer2dAffineLayer(layer=None, theta_layer=None, out_size=[40, 40], name='sapatial_trans_2d_affine')[source]

The SpatialTransformer2dAffineLayer class is a Spatial Transformer Layer for 2D Affine Transformation.

Parameters:
layer : a layer class with 4-D Tensor of shape [batch, height, width, channels]
theta_layer : a layer class for the localisation network.

In this layer, we will use a DenseLayer to make the theta size [batch, 6], with value range [-1, 1] (via tanh).

out_size : tuple of two ints.

The size of the output of the network (height, width), the feature maps will be resized by this.

References
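
Examples

A minimal sketch, assuming a single-channel 40x40 input; the localisation network (loc) and its sizes are illustrative, and the layer itself adds the final DenseLayer that produces theta:

>>> x = tf.placeholder(tf.float32, [None, 40, 40, 1], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> loc = tl.layers.FlattenLayer(net, name='loc_flat')      # localisation network
>>> loc = tl.layers.DenseLayer(loc, n_units=20, act=tf.nn.tanh, name='loc_dense')
>>> stn = tl.layers.SpatialTransformer2dAffineLayer(net, theta_layer=loc,
...                 out_size=[40, 40], name='stn')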

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

2D Affine Transformation function

tensorlayer.layers.transformer(U, theta, out_size, name='SpatialTransformer2dAffine', **kwargs)[source]

Spatial Transformer Layer for 2D Affine Transformation , see SpatialTransformer2dAffineLayer class.

Parameters:
U : float

The output of a convolutional net should have the shape [num_batch, height, width, num_channels].

theta: float

The output of the localisation network should be [num_batch, 6], value range should be [-1, 1] (via tanh).

out_size: tuple of two ints

The size of the output of the network (height, width)

Notes

  • To initialize the network to the identity transform, initialize ``theta`` to :
>>> identity = np.array([[1., 0., 0.],
...                      [0., 1., 0.]])
>>> identity = identity.flatten()
>>> theta = tf.Variable(initial_value=identity)

References

Batch 2D Affine Transformation function

tensorlayer.layers.batch_transformer(U, thetas, out_size, name='BatchSpatialTransformer2dAffine')[source]

Batch Spatial Transformer function for 2D Affine Transformation.

Parameters:
U : float

tensor of inputs [batch, height, width, num_channels]

thetas : float

a set of transformations for each input [batch, num_transforms, 6]

out_size : int

the size of the output [out_height, out_width]

Returns: float

Tensor of size [batch * num_transforms, out_height, out_width, num_channels]

Pooling layer

Pooling layer for any dimensions and any pooling functions.

class tensorlayer.layers.PoolLayer(layer=None, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', pool=<function max_pool>, name='pool_layer')[source]

The PoolLayer class is a pooling layer; you can choose tf.nn.max_pool and tf.nn.avg_pool for 2D, or tf.nn.max_pool3d and tf.nn.avg_pool3d for 3D.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

ksize : a list of ints that has length >= 4.

The size of the window for each dimension of the input tensor.

strides : a list of ints that has length >= 4.

The stride of the sliding window for each dimension of the input tensor.

padding : a string from: “SAME”, “VALID”.

The type of padding algorithm to use.

pool : a pooling function
  • see TensorFlow pooling APIs
  • class tf.nn.max_pool
  • class tf.nn.avg_pool
  • class tf.nn.max_pool3d
  • class tf.nn.avg_pool3d
name : a string or None

An optional name to attach to this layer.

Examples
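
A minimal sketch of 2D max pooling with PoolLayer; the shapes and names are illustrative:

>>> x = tf.placeholder(tf.float32, [None, 28, 28, 16], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.PoolLayer(net,
...                 ksize=[1, 2, 2, 1],
...                 strides=[1, 2, 2, 1],
...                 padding='SAME',
...                 pool=tf.nn.max_pool,
...                 name='pool_layer')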

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Padding

Padding layer for any modes.

class tensorlayer.layers.PadLayer(layer=None, paddings=None, mode='CONSTANT', name='pad_layer')[source]

The PadLayer class is a Padding layer for any modes and dimensions. Please see tf.pad for usage.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

paddings : a Tensor of type int32, the paddings to apply (see tf.pad).
mode : one of “CONSTANT”, “REFLECT”, or “SYMMETRIC” (case-insensitive)
name : a string or None

An optional name to attach to this layer.
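
Examples

A minimal sketch padding the spatial dimensions of an image batch; the paddings follow the tf.pad format of one [before, after] pair per dimension, and the shapes and names are illustrative:

>>> x = tf.placeholder(tf.float32, [None, 28, 28, 3], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.PadLayer(net, paddings=[[0, 0], [3, 3], [3, 3], [0, 0]],
...                 mode='REFLECT', name='pad')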

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Normalization layer

As local response normalization does not have any weights or arguments, you can also apply tf.nn.lrn on network.outputs.

Batch Normalization

class tensorlayer.layers.BatchNormLayer(layer=None, decay=0.9, epsilon=1e-05, act=<function identity>, is_train=False, beta_init=<class 'tensorflow.python.ops.init_ops.Zeros'>, gamma_init=<tensorflow.python.ops.init_ops.RandomNormal object>, name='batchnorm_layer')[source]

The BatchNormLayer class is a normalization layer, see tf.nn.batch_normalization and tf.nn.moments.

Batch normalization on fully-connected or convolutional maps.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

decay : float, default is 0.9.

A decay factor for ExponentialMovingAverage, use larger value for large dataset.

epsilon : float

A small float number to avoid dividing by 0.

act : activation function.
is_train : boolean

Whether train or inference.

beta_init : beta initializer

The initializer for initializing beta

gamma_init : gamma initializer

The initializer for initializing gamma

dtype : tf.float32 (default) or tf.float16
name : a string or None

An optional name to attach to this layer.

References
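
Examples

A minimal conv + batch-norm sketch; the input shape, filter shape and names are illustrative, and is_train should be set to False when building the inference graph:

>>> x = tf.placeholder(tf.float32, [None, 32, 32, 3], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.Conv2dLayer(net, act=tf.identity, shape=[5, 5, 3, 32],
...                 strides=[1, 1, 1, 1], padding='SAME', name='cnn')
>>> net = tl.layers.BatchNormLayer(net, decay=0.9, epsilon=1e-05,
...                 act=tf.nn.relu, is_train=True, name='bn')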

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Local Response Normalization

class tensorlayer.layers.LocalResponseNormLayer(layer=None, depth_radius=None, bias=None, alpha=None, beta=None, name='lrn_layer')[source]

The LocalResponseNormLayer class is for Local Response Normalization, see tf.nn.local_response_normalization or tf.nn.lrn for new TF version. The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius.

Parameters:
layer : a layer class. Must be one of the following types: float32, half. 4-D.
depth_radius : An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window.
bias : An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0).
alpha : An optional float. Defaults to 1. A scale factor, usually positive.
beta : An optional float. Defaults to 0.5. An exponent.
name : A string or None, an optional name to attach to this layer.
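
Examples

A minimal sketch; the AlexNet-style hyper-parameters below are illustrative, not defaults of this layer:

>>> x = tf.placeholder(tf.float32, [None, 32, 32, 3], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.Conv2dLayer(net, act=tf.nn.relu, shape=[5, 5, 3, 64],
...                 strides=[1, 1, 1, 1], padding='SAME', name='cnn1')
>>> net = tl.layers.LocalResponseNormLayer(net, depth_radius=4, bias=1.0,
...                 alpha=0.001 / 9.0, beta=0.75, name='lrn1')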

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Instance Normalization

class tensorlayer.layers.InstanceNormLayer(layer=None, act=<function identity>, epsilon=1e-05, scale_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, offset_init=<tensorflow.python.ops.init_ops.Constant object>, name='instan_norm')[source]

The InstanceNormLayer class is a layer for instance normalization.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

act : activation function.
epsilon : float

A small float number.

scale_init : scale initializer

The initializer for initializing the scale (gamma).

offset_init : offset initializer

The initializer for initializing the offset (beta).

name : a string or None

An optional name to attach to this layer.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Layer Normalization

class tensorlayer.layers.LayerNormLayer(layer=None, center=True, scale=True, act=<function identity>, reuse=None, variables_collections=None, outputs_collections=None, trainable=True, begin_norm_axis=1, begin_params_axis=-1, name='layernorm')[source]

The LayerNormLayer class is for layer normalization, see tf.contrib.layers.layer_norm.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

act : activation function

The function that is applied to the layer activations.

others : see tf.contrib.layers.layer_norm

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Object Detection

ROI layer

class tensorlayer.layers.ROIPoolingLayer(layer=None, rois=None, pool_height=2, pool_width=2, name='roipooling_layer')[source]

The ROIPoolingLayer class is a region-of-interest (ROI) pooling layer.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer, the feature maps on which to perform the pooling operation

rois : list of regions of interest in the format (feature map index, upper left, bottom right)
pool_height : int, size of the pooling sections.
pool_width : int, size of the pooling sections.

Notes

  • This implementation is from Deepsense-AI.
  • Please install it by following the installation instructions in that repository.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Time distributed layer

class tensorlayer.layers.TimeDistributedLayer(layer=None, layer_class=None, args={}, name='time_distributed')[source]

The TimeDistributedLayer class applies a layer to every timestep of the input tensor. For example, if using DenseLayer as the layer_class, an input of [batch_size, length, dim] gives an output of [batch_size, length, new_dim].

Parameters:
layer : a Layer instance

The Layer class feeding into this layer, [batch_size , length, dim]

layer_class : a Layer class
args : dictionary

The arguments for the layer_class.

name : a string or None

An optional name to attach to this layer.

Examples

>>> batch_size = 32
>>> timestep = 20
>>> input_dim = 100
>>> x = tf.placeholder(dtype=tf.float32, shape=[batch_size, timestep,  input_dim], name="encode_seqs")
>>> net = InputLayer(x, name='input')
>>> net = TimeDistributedLayer(net, layer_class=DenseLayer, args={'n_units':50, 'name':'dense'}, name='time_dense')
... [TL] InputLayer  input: (32, 20, 100)
... [TL] TimeDistributedLayer time_dense: layer_class:DenseLayer
>>> print(net.outputs._shape)
... (32, 20, 50)
>>> net.print_params(False)
... param   0: (100, 50)          time_dense/dense/W:0
... param   1: (50,)              time_dense/dense/b:0
... num of params: 5050

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Fixed Length Recurrent layer

All recurrent layers can implement any type of RNN cell by feeding in different cell functions (LSTM, GRU, etc.).

RNN layer

class tensorlayer.layers.RNNLayer(layer=None, cell_fn=None, cell_init_args={}, n_hidden=100, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, n_steps=5, initial_state=None, return_last=False, return_seq_2d=False, name='rnn_layer')[source]

The RNNLayer class is an RNN layer; you can implement vanilla RNN, LSTM and GRU with it.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

cell_fn : a TensorFlow’s core RNN cell as follow (Note TF1.0+ and TF1.0- are different).
cell_init_args : a dictionary

The arguments for the cell initializer.

n_hidden : an int

The number of hidden units in the layer.

initializer : initializer

The initializer for initializing the parameters.

n_steps : an int

The sequence length.

initial_state : None or RNN State

If None, initial_state is zero_state.

return_last : boolean
  • If True, return the last output, “Sequence input and single output”
  • If False, return all outputs, “Synced sequence input and output”
  • In other words, if you want to apply one or more RNN(s) on this layer, set to False.
return_seq_2d : boolean
  • When return_last = False
  • If True, return 2D Tensor [n_example, n_hidden], for stacking DenseLayer after it.
  • If False, return 3D Tensor [n_example/n_steps, n_steps, n_hidden], for stacking multiple RNN after it.
name : a string or None

An optional name to attach to this layer.

Notes

Input dimension should be rank 3 : [batch_size, n_steps, n_features]; if not, please see ReshapeLayer.

References

Examples

  • For words
>>> input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
>>> net = tl.layers.EmbeddingInputlayer(
...                 inputs = input_data,
...                 vocabulary_size = vocab_size,
...                 embedding_size = hidden_size,
...                 E_init = tf.random_uniform_initializer(-init_scale, init_scale),
...                 name ='embedding_layer')
>>> net = tl.layers.DropoutLayer(net, keep=keep_prob, is_fix=True, is_train=is_train, name='drop1')
>>> net = tl.layers.RNNLayer(net,
...             cell_fn=tf.contrib.rnn.BasicLSTMCell,
...             cell_init_args={'forget_bias': 0.0},# 'state_is_tuple': True},
...             n_hidden=hidden_size,
...             initializer=tf.random_uniform_initializer(-init_scale, init_scale),
...             n_steps=num_steps,
...             return_last=False,
...             name='basic_lstm_layer1')
>>> lstm1 = net
>>> net = tl.layers.DropoutLayer(net, keep=keep_prob, is_fix=True, is_train=is_train, name='drop2')
>>> net = tl.layers.RNNLayer(net,
...             cell_fn=tf.contrib.rnn.BasicLSTMCell,
...             cell_init_args={'forget_bias': 0.0}, # 'state_is_tuple': True},
...             n_hidden=hidden_size,
...             initializer=tf.random_uniform_initializer(-init_scale, init_scale),
...             n_steps=num_steps,
...             return_last=False,
...             return_seq_2d=True,
...             name='basic_lstm_layer2')
>>> lstm2 = net
>>> net = tl.layers.DropoutLayer(net, keep=keep_prob, is_fix=True, is_train=is_train, name='drop3')
>>> net = tl.layers.DenseLayer(net,
...             n_units=vocab_size,
...             W_init=tf.random_uniform_initializer(-init_scale, init_scale),
...             b_init=tf.random_uniform_initializer(-init_scale, init_scale),
...             act = tl.activation.identity, name='output_layer')
  • For CNN+LSTM
>>> x = tf.placeholder(tf.float32, shape=[batch_size, image_size, image_size, 1])
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.Conv2dLayer(net,
...                         act = tf.nn.relu,
...                         shape = [5, 5, 1, 32],  # 32 features for each 5x5 patch
...                         strides=[1, 2, 2, 1],
...                         padding='SAME',
...                         name ='cnn_layer1')
>>> net = tl.layers.PoolLayer(net,
...                         ksize=[1, 2, 2, 1],
...                         strides=[1, 2, 2, 1],
...                         padding='SAME',
...                         pool = tf.nn.max_pool,
...                         name ='pool_layer1')
>>> net = tl.layers.Conv2dLayer(net,
...                         act = tf.nn.relu,
...                         shape = [5, 5, 32, 10], # 10 features for each 5x5 patch
...                         strides=[1, 2, 2, 1],
...                         padding='SAME',
...                         name ='cnn_layer2')
>>> net = tl.layers.PoolLayer(net,
...                         ksize=[1, 2, 2, 1],
...                         strides=[1, 2, 2, 1],
...                         padding='SAME',
...                         pool = tf.nn.max_pool,
...                         name ='pool_layer2')
>>> net = tl.layers.FlattenLayer(net, name='flatten_layer')
>>> net = tl.layers.ReshapeLayer(net, shape=[-1, num_steps, int(net.outputs._shape[-1])])
>>> rnn1 = tl.layers.RNNLayer(net,
...                         cell_fn=tf.nn.rnn_cell.LSTMCell,
...                         cell_init_args={},
...                         n_hidden=200,
...                         initializer=tf.random_uniform_initializer(-0.1, 0.1),
...                         n_steps=num_steps,
...                         return_last=False,
...                         return_seq_2d=True,
...                         name='rnn_layer')
>>> net = tl.layers.DenseLayer(rnn1, n_units=3,
...                         act = tl.activation.identity, name='output_layer')
Attributes:
outputs : a tensor

The output of this RNN. If return_last = False, outputs = all cell_output, which are the hidden states.

cell_output.get_shape() = (?, n_hidden)

final_state : a tensor or StateTuple

When state_is_tuple = False, it is the final hidden and cell states, states.get_shape() = [?, 2 * n_hidden].

When state_is_tuple = True, it stores two elements: (c, h), in that order. You can get the final state after each iteration during training, then feed it to the initial state of next iteration.

initial_state : a tensor or StateTuple

It is the initial state of this RNN layer, you can use it to initialize your state at the beginning of each epoch or iteration according to your training procedure.

batch_size : int or tensor

An int if the batch_size can be computed; otherwise, a tensor for the unknown batch size.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Bidirectional layer

class tensorlayer.layers.BiRNNLayer(layer=None, cell_fn=None, cell_init_args={'state_is_tuple': True, 'use_peepholes': True}, n_hidden=100, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, n_steps=5, fw_initial_state=None, bw_initial_state=None, dropout=None, n_layer=1, return_last=False, return_seq_2d=False, name='birnn_layer')[source]

The BiRNNLayer class is a Bidirectional RNN layer.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

cell_fn : a TensorFlow’s core RNN cell as follow (Note TF1.0+ and TF1.0- are different).
cell_init_args : a dictionary

The arguments for the cell initializer.

n_hidden : an int

The number of hidden units in the layer.

initializer : initializer

The initializer for initializing the parameters.

n_steps : an int

The sequence length.

fw_initial_state : None or forward RNN State

If None, initial_state is zero_state.

bw_initial_state : None or backward RNN State

If None, initial_state is zero_state.

dropout : tuple of float: (input_keep_prob, output_keep_prob).

The input and output keep probability.

n_layer : an int, default is 1.

The number of RNN layers.

return_last : boolean
  • If True, return the last output, “Sequence input and single output”
  • If False, return all outputs, “Synced sequence input and output”
  • In other words, if you want to apply one or more RNN(s) on this layer, set to False.
return_seq_2d : boolean
  • When return_last = False
  • If True, return 2D Tensor [n_example, n_hidden], for stacking DenseLayer after it.
  • If False, return 3D Tensor [n_example/n_steps, n_steps, n_hidden], for stacking multiple RNN after it.
name : a string or None

An optional name to attach to this layer.

Notes

  • Input dimension should be rank 3 : [batch_size, n_steps, n_features]; if not, please see ReshapeLayer.
  • For predicting, the sequence length has to be the same as the sequence length used for training, while for a normal RNN we can use a sequence length of 1 for predicting.

References
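
Examples

A minimal word-level sketch, assuming batch_size, num_steps, vocab_size and hidden_size are defined beforehand; the names are illustrative:

>>> input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
>>> net = tl.layers.EmbeddingInputlayer(
...                 inputs = input_data,
...                 vocabulary_size = vocab_size,
...                 embedding_size = hidden_size,
...                 name = 'embedding')
>>> net = tl.layers.BiRNNLayer(net,
...                 cell_fn = tf.contrib.rnn.LSTMCell,
...                 n_hidden = hidden_size,
...                 n_steps = num_steps,
...                 return_last = False,
...                 return_seq_2d = True,
...                 name = 'birnn')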

Attributes:
outputs : a tensor

The output of this RNN. If return_last = False, outputs = all cell_output, which are the hidden states.

cell_output.get_shape() = (?, n_hidden)

fw(bw)_final_state : a tensor or StateTuple

When state_is_tuple = False, it is the final hidden and cell states, states.get_shape() = [?, 2 * n_hidden].

When state_is_tuple = True, it stores two elements: (c, h), in that order. You can get the final state after each iteration during training, then feed it to the initial state of next iteration.

fw(bw)_initial_state : a tensor or StateTuple

It is the initial state of this RNN layer, you can use it to initialize your state at the beginning of each epoch or iteration according to your training procedure.

batch_size : int or tensor

An int if the batch_size can be computed; otherwise, a tensor for the unknown batch size.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Recurrent Convolutional layer

Conv RNN Cell

class tensorlayer.layers.ConvRNNCell[source]

Abstract object representing a Convolutional RNN Cell.

Attributes:
output_size

Integer or TensorShape: size of outputs produced by this cell.

state_size

size(s) of state(s) used by this cell.

Methods

__call__(inputs, state[, scope]) Run this RNN cell on inputs, starting from the given state.
zero_state(batch_size, dtype) Return zero-filled state tensor(s).

Basic Conv LSTM Cell

class tensorlayer.layers.BasicConvLSTMCell(shape, filter_size, num_features, forget_bias=1.0, input_size=None, state_is_tuple=False, activation=<function tanh>)[source]

Basic Conv LSTM recurrent network cell.

Parameters:
shape : int tuple that is the height and width of the cell
filter_size : int tuple that is the height and width of the filter
num_features : int that is the depth of the cell
forget_bias : float, The bias added to forget gates (see above).
input_size : Deprecated and unused.
state_is_tuple : If True, accepted and returned states are 2-tuples of

the c_state and m_state. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated.

activation : Activation function of the inner states.
Attributes:
output_size

Number of units in outputs.

state_size

State size of the LSTMStateTuple.

Methods

__call__(inputs, state[, scope]) Long short-term memory cell (LSTM).
zero_state(batch_size, dtype) Return zero-filled state tensor(s).

Conv LSTM layer

class tensorlayer.layers.ConvLSTMLayer(layer=None, cell_shape=None, feature_map=1, filter_size=(3, 3), cell_fn=<class 'tensorlayer.layers.BasicConvLSTMCell'>, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, n_steps=5, initial_state=None, return_last=False, return_seq_2d=False, name='convlstm_layer')[source]

The ConvLSTMLayer class is a Convolutional LSTM layer, see Convolutional LSTM Layer .

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

cell_shape : tuple, the shape of each cell width*height
filter_size : tuple, the size of filter width*height
cell_fn : a Convolutional RNN cell as follow.
feature_map : an int

The number of feature map in the layer.

initializer : initializer

The initializer for initializing the parameters.

n_steps : an int

The sequence length.

initial_state : None or ConvLSTM State

If None, initial_state is zero_state.

return_last : boolean
  • If True, return the last output, “Sequence input and single output”
  • If False, return all outputs, “Synced sequence input and output”
  • In other words, if you want to apply one or more ConvLSTM(s) on this layer, set to False.
return_seq_2d : boolean
  • When return_last = False
  • If True, return 4D Tensor [n_example, h, w, c], for stacking DenseLayer after it.
  • If False, return 5D Tensor [n_example/n_steps, n_steps, h, w, c], for stacking multiple ConvLSTM after it.
name : a string or None

An optional name to attach to this layer.
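
Examples

A minimal sketch, assuming batch_size and n_steps are defined beforehand and the input is a sequence of 32x32 RGB frames; the shapes and names are illustrative:

>>> x = tf.placeholder(tf.float32, [batch_size, n_steps, 32, 32, 3], name='x')
>>> net = tl.layers.InputLayer(x, name='in')
>>> net = tl.layers.ConvLSTMLayer(net,
...                 cell_shape = (32, 32),
...                 feature_map = 16,
...                 filter_size = (3, 3),
...                 n_steps = n_steps,
...                 return_last = False,
...                 return_seq_2d = False,
...                 name = 'convlstm')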

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Advanced Ops for Dynamic RNN

These operations are usually used inside the Dynamic RNN layer; they can compute the sequence lengths for different situations and get the last RNN outputs by indexing.

Output indexing

tensorlayer.layers.advanced_indexing_op(input, index)[source]

Advanced indexing for sequences: returns the outputs selected by the given sequence lengths. When returning the last output, DynamicRNNLayer uses it to get the last outputs with the sequence lengths.

Parameters:
input : tensor for data

[batch_size, n_step(max), n_features]

index : tensor for indexing, i.e. sequence_length in Dynamic RNN.

[batch_size]

References

  • Modified from TFlearn (the original code is used for fixed length rnn), references.

Examples

>>> batch_size, max_length, n_features = 3, 5, 2
>>> z = np.random.uniform(low=-1, high=1, size=[batch_size, max_length, n_features]).astype(np.float32)
>>> b_z = tf.constant(z)
>>> sl = tf.placeholder(dtype=tf.int32, shape=[batch_size])
>>> o = advanced_indexing_op(b_z, sl)
>>>
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>>
>>> order = np.asarray([1,1,2])
>>> print("real",z[0][order[0]-1], z[1][order[1]-1], z[2][order[2]-1])
>>> y = sess.run([o], feed_dict={sl:order})
>>> print("given",order)
>>> print("out", y)
... real [-0.93021595  0.53820813] [-0.92548317 -0.77135968] [ 0.89952248  0.19149846]
... given [1 1 2]
... out [array([[-0.93021595,  0.53820813],
...             [-0.92548317, -0.77135968],
...             [ 0.89952248,  0.19149846]], dtype=float32)]

Compute Sequence length 1

tensorlayer.layers.retrieve_seq_length_op(data)[source]

An op to compute the length of a sequence from an input of shape [batch_size, n_step(max), n_features]; it can be used when the features of the padding (on the right hand side) are all zeros.

Parameters:
data : tensor

[batch_size, n_step(max), n_features] with zero padding on right hand side.

References

Examples

>>> data = [[[1],[2],[0],[0],[0]],
...         [[1],[2],[3],[0],[0]],
...         [[1],[2],[6],[1],[0]]]
>>> data = np.asarray(data)
>>> print(data.shape)
... (3, 5, 1)
>>> data = tf.constant(data)
>>> sl = retrieve_seq_length_op(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> y = sl.eval()
... [2 3 4]
  • Multiple features
>>> data = [[[1,2],[2,2],[1,2],[1,2],[0,0]],
...         [[2,3],[2,4],[3,2],[0,0],[0,0]],
...         [[3,3],[2,2],[5,3],[1,2],[0,0]]]
>>> sl = retrieve_seq_length_op(tf.constant(np.asarray(data)))
>>> print(sl.eval())
... [4 3 4]

Compute Sequence length 2

tensorlayer.layers.retrieve_seq_length_op2(data)[source]

An op to compute the length of a sequence from an input of shape [batch_size, n_step(max)]; it can be used when the features of the padding (on the right hand side) are all zeros.

Parameters:
data : tensor

[batch_size, n_step(max)] with zero padding on right hand side.

Examples

>>> data = [[1,2,0,0,0],
...         [1,2,3,0,0],
...         [1,2,6,1,0]]
>>> o = retrieve_seq_length_op2(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> print(o.eval())
... [2 3 4]

Dynamic RNN layer

RNN layer

class tensorlayer.layers.DynamicRNNLayer(layer=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, sequence_length=None, initial_state=None, dropout=None, n_layer=1, return_last=False, return_seq_2d=False, dynamic_rnn_init_args={}, name='dyrnn_layer')[source]

The DynamicRNNLayer class is a Dynamic RNN layer, see tf.nn.dynamic_rnn.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

cell_fn : a TensorFlow’s core RNN cell as follow (Note TF1.0+ and TF1.0- are different).
cell_init_args : a dictionary

The arguments for the cell initializer.

n_hidden : an int

The number of hidden units in the layer.

initializer : initializer

The initializer for initializing the parameters.

sequence_length : a tensor, array or None. The sequence length of each row of input data, see Advanced Ops for Dynamic RNN.
  • If None, it uses retrieve_seq_length_op to compute the sequence_length, i.e. when the features of padding (on right hand side) are all zeros.
  • If using word embedding, you may need to compute the sequence_length from the ID array (the integer features before word embedding) by using retrieve_seq_length_op2 or retrieve_seq_length_op.
  • You can also input a numpy array.
  • More details about TensorFlow dynamic_rnn in Wild-ML Blog.
initial_state : None or RNN State

If None, initial_state is zero_state.

dropout : tuple of float: (input_keep_prob, output_keep_prob).

The input and output keep probability.

n_layer : an int, default is 1.

The number of RNN layers.

return_last : boolean
  • If True, return the last output, “Sequence input and single output”
  • If False, return all outputs, “Synced sequence input and output”
  • In other words, if you want to apply one or more RNN(s) on this layer, set to False.
return_seq_2d : boolean
  • When return_last = False
  • If True, return 2D Tensor [n_example, n_hidden], for stacking DenseLayer or computing cost after it.
  • If False, return 3D Tensor [n_example/n_steps(max), n_steps(max), n_hidden], for stacking multiple RNN after it.
name : a string or None

An optional name to attach to this layer.

Notes

Input dimension should be rank 3 : [batch_size, n_steps(max), n_features]; if not, please see ReshapeLayer.

References

Examples

>>> input_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="input_seqs")
>>> net = tl.layers.EmbeddingInputlayer(
...             inputs = input_seqs,
...             vocabulary_size = vocab_size,
...             embedding_size = embedding_size,
...             name = 'seq_embedding')
>>> net = tl.layers.DynamicRNNLayer(net,
...             cell_fn = tf.contrib.rnn.BasicLSTMCell, # for TF0.2 tf.nn.rnn_cell.BasicLSTMCell,
...             n_hidden = embedding_size,
...             dropout = 0.7,
...             sequence_length = tl.layers.retrieve_seq_length_op2(input_seqs),
...             return_seq_2d = True,     # stack denselayer or compute cost after it
...             name = 'dynamic_rnn')
>>> net = tl.layers.DenseLayer(net, n_units=vocab_size,
...             act=tf.identity, name="output")

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Bidirectional layer

class tensorlayer.layers.BiDynamicRNNLayer(layer=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, sequence_length=None, fw_initial_state=None, bw_initial_state=None, dropout=None, n_layer=1, return_last=False, return_seq_2d=False, dynamic_rnn_init_args={}, name='bi_dyrnn_layer')[source]

The BiDynamicRNNLayer class is a bidirectional dynamic RNN layer; you can implement vanilla RNN, LSTM and GRU with it.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

cell_fn : a TensorFlow’s core RNN cell as follow (Note TF1.0+ and TF1.0- are different).
cell_init_args : a dictionary

The arguments for the cell initializer.

n_hidden : an int

The number of hidden units in the layer.

initializer : initializer

The initializer for initializing the parameters.

sequence_length : a tensor, array or None.
The sequence length of each row of input data, see Advanced Ops for Dynamic RNN.
  • If None, it uses retrieve_seq_length_op to compute the sequence_length, i.e. when the features of padding (on right hand side) are all zeros.
  • If using word embedding, you may need to compute the sequence_length from the ID array (the integer features before word embedding) by using retrieve_seq_length_op2 or retrieve_seq_length_op.
  • You can also input a numpy array.
  • More details about TensorFlow dynamic_rnn in Wild-ML Blog.
fw_initial_state : None or forward RNN State

If None, initial_state is zero_state.

bw_initial_state : None or backward RNN State

If None, initial_state is zero_state.

dropout : tuple of float: (input_keep_prob, output_keep_prob).

The input and output keep probability.

n_layer : an int, default is 1.

The number of RNN layers.

return_last : boolean
  • If True, return the last output, “Sequence input and single output”
  • If False, return all outputs, “Synced sequence input and output”
  • In other words, if you want to apply one or more RNN(s) on this layer, set to False.

return_seq_2d : boolean
  • When return_last = False
  • If True, return 2D Tensor [n_example, 2 * n_hidden], for stacking DenseLayer or computing cost after it.
  • If False, return 3D Tensor [n_example/n_steps(max), n_steps(max), 2 * n_hidden], for stacking multiple RNN after it.
name : a string or None

An optional name to attach to this layer.

Notes

Input dimension should be rank 3 : [batch_size, n_steps(max), n_features]; if not, please see ReshapeLayer.

References
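
Examples

A minimal sketch analogous to the DynamicRNNLayer example above, assuming batch_size, vocab_size and embedding_size are defined beforehand; the names are illustrative:

>>> input_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="input_seqs")
>>> net = tl.layers.EmbeddingInputlayer(
...                 inputs = input_seqs,
...                 vocabulary_size = vocab_size,
...                 embedding_size = embedding_size,
...                 name = 'seq_embedding')
>>> net = tl.layers.BiDynamicRNNLayer(net,
...                 cell_fn = tf.contrib.rnn.BasicLSTMCell,
...                 n_hidden = embedding_size,
...                 sequence_length = tl.layers.retrieve_seq_length_op2(input_seqs),
...                 return_last = False,
...                 return_seq_2d = True,
...                 name = 'bi_dyrnn')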

Attributes:
outputs : a tensor

The output of this RNN. If return_last = False, outputs = all cell_output, which are the hidden states.

cell_output.get_shape() = (?, 2 * n_hidden)

fw(bw)_final_state : a tensor or StateTuple

When state_is_tuple = False, it is the final hidden and cell states, states.get_shape() = [?, 2 * n_hidden].

When state_is_tuple = True, it stores two elements: (c, h), in that order. You can get the final state after each iteration during training, then feed it to the initial state of next iteration.

fw(bw)_initial_state : a tensor or StateTuple

It is the initial state of this RNN layer, you can use it to initialize your state at the beginning of each epoch or iteration according to your training procedure.

sequence_length : a tensor or array, shape = [batch_size]

The sequence lengths computed by the Advanced Ops, or the given sequence lengths.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Sequence to Sequence

Simple Seq2Seq

class tensorlayer.layers.Seq2Seq(net_encode_in=None, net_decode_in=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, encode_sequence_length=None, decode_sequence_length=None, initial_state_encode=None, initial_state_decode=None, dropout=None, n_layer=1, return_seq_2d=False, name='seq2seq')[source]

The Seq2Seq class is a simple DynamicRNNLayer-based seq2seq layer, without using tl.contrib.seq2seq. See Model and Sequence to Sequence Learning with Neural Networks.

Parameters:
net_encode_in : a Layer instance

Encode sequences, [batch_size, None, n_features].

net_decode_in : a Layer instance

Decode sequences, [batch_size, None, n_features].

cell_fn : a TensorFlow’s core RNN cell as follow (Note TF1.0+ and TF1.0- are different).
cell_init_args : a dictionary

The arguments for the cell initializer.

n_hidden : an int

The number of hidden units in the layer.

initializer : initializer

The initializer for initializing the parameters.

encode_sequence_length : tensor for encoder sequence length, see DynamicRNNLayer .
decode_sequence_length : tensor for decoder sequence length, see DynamicRNNLayer .
initial_state_encode : None or RNN state (from placeholder or other RNN).

If None, initial_state_encode is of zero state.

initial_state_decode : None or RNN state (from placeholder or other RNN).

If None, initial_state_decode is of the final state of the RNN encoder.

dropout : tuple of float: (input_keep_prob, output_keep_prob).

The input and output keep probability.

n_layer : an int, default is 1.

The number of RNN layers.

return_seq_2d : boolean
  • When return_last = False
  • If True, return 2D Tensor [n_example, n_hidden], for stacking DenseLayer or computing cost after it.
  • If False, return 3D Tensor [n_example/n_steps(max), n_steps(max), n_hidden], for stacking multiple RNN after it.
name : a string or None

An optional name to attach to this layer.

Notes

  • How to feed data: Sequence to Sequence Learning with Neural Networks
  • input_seqs : ['how', 'are', 'you', '<PAD_ID>']
  • decode_seqs : ['<START_ID>', 'I', 'am', 'fine', '<PAD_ID>']
  • target_seqs : ['I', 'am', 'fine', '<END_ID>', '<PAD_ID>']
  • target_mask : [1, 1, 1, 1, 0]
  • related functions : tl.prepro <pad_sequences, process_sequences, sequences_add_start_id, sequences_get_mask>

Examples

>>> from tensorlayer.layers import *
>>> batch_size = 32
>>> encode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="encode_seqs")
>>> decode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="decode_seqs")
>>> target_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="target_seqs")
>>> target_mask = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="target_mask") # tl.prepro.sequences_get_mask()
>>> with tf.variable_scope("model"):
...     # for chatbot, you can use the same embedding layer,
...     # for translation, you may want to use 2 separate embedding layers
>>>     with tf.variable_scope("embedding") as vs:
>>>         net_encode = EmbeddingInputlayer(
...                 inputs = encode_seqs,
...                 vocabulary_size = 10000,
...                 embedding_size = 200,
...                 name = 'seq_embedding')
>>>         vs.reuse_variables()
>>>         tl.layers.set_name_reuse(True)
>>>         net_decode = EmbeddingInputlayer(
...                 inputs = decode_seqs,
...                 vocabulary_size = 10000,
...                 embedding_size = 200,
...                 name = 'seq_embedding')
>>>     net = Seq2Seq(net_encode, net_decode,
...             cell_fn = tf.contrib.rnn.BasicLSTMCell,
...             n_hidden = 200,
...             initializer = tf.random_uniform_initializer(-0.1, 0.1),
...             encode_sequence_length = retrieve_seq_length_op2(encode_seqs),
...             decode_sequence_length = retrieve_seq_length_op2(decode_seqs),
...             initial_state_encode = None,
...             dropout = None,
...             n_layer = 1,
...             return_seq_2d = True,
...             name = 'seq2seq')
>>> net_out = DenseLayer(net, n_units=10000, act=tf.identity, name='output')
>>> e_loss = tl.cost.cross_entropy_seq_with_mask(logits=net_out.outputs, target_seqs=target_seqs, input_mask=target_mask, return_details=False, name='cost')
>>> y = tf.nn.softmax(net_out.outputs)
>>> net_out.print_params(False)
Attributes:
outputs : a tensor

The output of RNN decoder.

initial_state_encode : a tensor or StateTuple

Initial state of RNN encoder.

initial_state_decode : a tensor or StateTuple

Initial state of RNN decoder.

final_state_encode : a tensor or StateTuple

Final state of RNN encoder.

final_state_decode : a tensor or StateTuple

Final state of RNN decoder.

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

PeekySeq2Seq

class tensorlayer.layers.PeekySeq2Seq(net_encode_in=None, net_decode_in=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, in_sequence_length=None, out_sequence_length=None, initial_state=None, dropout=None, n_layer=1, return_seq_2d=False, name='peeky_seq2seq')[source]

Waiting for contribution. The PeekySeq2Seq class, see Model and Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation .

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

AttentionSeq2Seq

class tensorlayer.layers.AttentionSeq2Seq(net_encode_in=None, net_decode_in=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, in_sequence_length=None, out_sequence_length=None, initial_state=None, dropout=None, n_layer=1, return_seq_2d=False, name='attention_seq2seq')[source]

Waiting for contribution. The AttentionSeq2Seq class, see Model and Neural Machine Translation by Jointly Learning to Align and Translate .

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Shape layer

Flatten layer

class tensorlayer.layers.FlattenLayer(layer=None, name='flatten_layer')[source]

The FlattenLayer class is a layer which reshapes high-dimension input into a vector. Then we can apply DenseLayer, RNNLayer, ConcatLayer, etc. on top of it.

[batch_size, mask_row, mask_col, n_mask] —> [batch_size, mask_row * mask_col * n_mask]

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

name : a string or None

An optional name to attach to this layer.

Examples

>>> x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.Conv2dLayer(net,
...                    act = tf.nn.relu,
...                    shape = [5, 5, 1, 64],
...                    strides=[1, 1, 1, 1],
...                    padding='SAME',
...                    name ='cnn_layer')
>>> net = tl.layers.PoolLayer(net,
...                    ksize=[1, 2, 2, 1],
...                    strides=[1, 2, 2, 1],
...                    padding='SAME',
...                    pool = tf.nn.max_pool,
...                    name ='pool_layer',)
>>> net = tl.layers.FlattenLayer(net, name='flatten_layer')

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Reshape layer

class tensorlayer.layers.ReshapeLayer(layer=None, shape=[], name='reshape_layer')[source]

The ReshapeLayer class is a layer which reshapes the tensor.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

shape : a list

The output shape.

name : a string or None

An optional name to attach to this layer.

Examples

  • The core of this layer is tf.reshape.
  • Use TensorFlow only :
>>> x = tf.placeholder(tf.float32, shape=[None, 3])
>>> y = tf.reshape(x, shape=[-1, 3, 3])
>>> sess = tf.InteractiveSession()
>>> print(sess.run(y, feed_dict={x:[[1,1,1],[2,2,2],[3,3,3],[4,4,4],[5,5,5],[6,6,6]]}))
... [[[ 1.  1.  1.]
... [ 2.  2.  2.]
... [ 3.  3.  3.]]
... [[ 4.  4.  4.]
... [ 5.  5.  5.]
... [ 6.  6.  6.]]]
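  • Use TensorLayer : the same reshape through the layer API (a minimal sketch reusing the placeholder above)
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.ReshapeLayer(net, shape=[-1, 3, 3], name='reshape_layer')
>>> print(net.outputs.get_shape())
... (?, 3, 3)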

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Transpose layer

class tensorlayer.layers.TransposeLayer(layer=None, perm=None, name='transpose')[source]

The TransposeLayer class transposes the dimensions of a tensor, see tf.transpose().

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

perm: list, a permutation of the dimensions

Similar to numpy.transpose.

name : a string or None

An optional name to attach to this layer.
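
Examples

A minimal sketch (shapes chosen only for illustration): swap the height and width axes of an image batch.

>>> x = tf.placeholder(tf.float32, shape=[None, 28, 30, 3])
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.TransposeLayer(net, perm=[0, 2, 1, 3], name='transpose')
>>> print(net.outputs.get_shape())
... (?, 30, 28, 3)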

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Lambda layer

class tensorlayer.layers.LambdaLayer(layer=None, fn=None, fn_args={}, name='lambda_layer')[source]

The LambdaLayer class is a layer which applies a user-provided function to the outputs of the previous layer.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

fn : a function

The function applied to the outputs of the previous layer.

fn_args : a dictionary

The arguments for the function (optional).

name : a string or None

An optional name to attach to this layer.

Examples

>>> x = tf.placeholder(tf.float32, shape=[None, 1], name='x')
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.LambdaLayer(net, lambda x: 2*x, name='lambda_layer')
>>> y = net.outputs
>>> sess = tf.InteractiveSession()
>>> out = sess.run(y, feed_dict={x : [[1],[2]]})
... [[2],[4]]

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Merge layer

Concat layer

class tensorlayer.layers.ConcatLayer(layer=[], concat_dim=1, name='concat_layer')[source]

The ConcatLayer class is a layer which concatenates (merges) two or more tensors along a given axis.

Parameters:
layer : a list of Layer instances

The Layer class feeding into this layer.

concat_dim : int

Dimension along which to concatenate.

name : a string or None

An optional name to attach to this layer.

Examples

>>> sess = tf.InteractiveSession()
>>> x = tf.placeholder(tf.float32, shape=[None, 784])
>>> inputs = tl.layers.InputLayer(x, name='input_layer')
>>> net1 = tl.layers.DenseLayer(inputs, 800, act=tf.nn.relu, name='relu1_1')
>>> net2 = tl.layers.DenseLayer(inputs, 300, act=tf.nn.relu, name='relu2_1')
>>> net = tl.layers.ConcatLayer([net1, net2], 1, name ='concat_layer')
...     [TL] InputLayer input_layer (?, 784)
...     [TL] DenseLayer relu1_1: 800, relu
...     [TL] DenseLayer relu2_1: 300, relu
...     [TL] ConcatLayer concat_layer, 1100
>>> tl.layers.initialize_global_variables(sess)
>>> net.print_params()
...     param 0: (784, 800) (mean: 0.000021, median: -0.000020 std: 0.035525)
...     param 1: (800,)     (mean: 0.000000, median: 0.000000  std: 0.000000)
...     param 2: (784, 300) (mean: 0.000000, median: -0.000048 std: 0.042947)
...     param 3: (300,)     (mean: 0.000000, median: 0.000000  std: 0.000000)
...     num of params: 863500
>>> net.print_layers()
...     layer 0: Tensor("Relu:0", shape=(?, 800), dtype=float32)
...     layer 1: Tensor("Relu_1:0", shape=(?, 300), dtype=float32)

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Element-wise layer

class tensorlayer.layers.ElementwiseLayer(layer=[], combine_fn=<function minimum>, name='elementwise_layer')[source]

The ElementwiseLayer class combines multiple Layer instances that have the same output shape using a given element-wise operation.

Parameters:
layer : a list of Layer instances

The Layer class feeding into this layer.

combine_fn : a TensorFlow element-wise merge function

e.g. AND is tf.minimum ; OR is tf.maximum ; ADD is tf.add ; MUL is tf.multiply and so on. See TensorFlow Math API .

name : a string or None

An optional name to attach to this layer.

Examples

  • AND Logic
>>> net_0 = tl.layers.DenseLayer(net_0, n_units=500,
...                        act = tf.nn.relu, name='net_0')
>>> net_1 = tl.layers.DenseLayer(net_1, n_units=500,
...                        act = tf.nn.relu, name='net_1')
>>> net_com = tl.layers.ElementwiseLayer(layer = [net_0, net_1],
...                         combine_fn = tf.minimum,
...                         name = 'combine_layer')

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Extend layer

Expand dims layer

class tensorlayer.layers.ExpandDimsLayer(layer=None, axis=None, name='expand_dims')[source]

The ExpandDimsLayer class inserts a dimension of 1 into a tensor’s shape, see tf.expand_dims() .

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

axis : int, 0-D (scalar).

Specifies the dimension index at which to expand the shape of input.

name : a string or None

An optional name to attach to this layer.
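
Examples

A minimal sketch (shapes chosen only for illustration): add a trailing channel dimension to a batch of flat vectors.

>>> x = tf.placeholder(tf.float32, shape=[None, 100])
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.ExpandDimsLayer(net, axis=2, name='expand_dims')
>>> print(net.outputs.get_shape())
... (?, 100, 1)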

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Tile layer

class tensorlayer.layers.TileLayer(layer=None, multiples=None, name='tile')[source]

The TileLayer class constructs a tensor by tiling a given tensor, see tf.tile() .

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

multiples: a list of int

Must be one of the following types: int32, int64. 1-D. Length must be the same as the number of dimensions in input

name : a string or None

An optional name to attach to this layer.
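
Examples

A minimal sketch (shapes chosen only for illustration): expand a new axis, then repeat the tensor three times along it.

>>> x = tf.placeholder(tf.float32, shape=[None, 100])
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.ExpandDimsLayer(net, axis=1, name='expand')
>>> net = tl.layers.TileLayer(net, multiples=[1, 3, 1], name='tile')
>>> print(net.outputs.get_shape())
... (?, 3, 100)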

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Stack layer

Stack layer

class tensorlayer.layers.StackLayer(layer=[], axis=0, name='stack')[source]

The StackLayer class is a layer that stacks a list of rank-R tensors into one rank-(R+1) tensor, see tf.stack().

Parameters:
layer : a list of Layer instances

The Layer class feeding into this layer.

axis : an int

Dimension along which to stack.

name : a string or None

An optional name to attach to this layer.
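
Examples

A minimal sketch (shapes chosen only for illustration): stack two dense branches into one rank-3 tensor.

>>> x = tf.placeholder(tf.float32, shape=[None, 30])
>>> net_in = tl.layers.InputLayer(x, name='input_layer')
>>> net1 = tl.layers.DenseLayer(net_in, n_units=10, name='dense1')
>>> net2 = tl.layers.DenseLayer(net_in, n_units=10, name='dense2')
>>> net = tl.layers.StackLayer([net1, net2], axis=1, name='stack')
>>> print(net.outputs.get_shape())
... (?, 2, 10)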

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Unstack layer

tensorlayer.layers.UnStackLayer(layer=None, num=None, axis=0, name='unstack')[source]

The UnStackLayer is a layer that unstacks the given dimension of a rank-R tensor into rank-(R-1) tensors, see tf.unstack().

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

num : an int

The length of the dimension axis. Automatically inferred if None (the default).

axis : an int

Dimension along which to unstack.

name : a string or None

An optional name to attach to this layer.

Returns:
The list of layer objects unstacked from the input.
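
Examples

A minimal sketch (shapes chosen only for illustration): split a rank-3 tensor into a list of per-step layers.

>>> x = tf.placeholder(tf.float32, shape=[None, 5, 100])
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> nets = tl.layers.UnStackLayer(net, axis=1, name='unstack')  # a list of 5 layer objects
>>> print(nets[0].outputs.get_shape())
... (?, 100)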

Estimator layer

class tensorlayer.layers.EstimatorLayer(layer=None, model_fn=None, args={}, name='estimator_layer')[source]

The EstimatorLayer class accepts a model_fn that describes the model. It is similar to KerasLayer, see tutorial_keras.py. This layer will be deprecated soon, as LambdaLayer can do the same thing.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

model_fn : a function that describes the model.
args : dictionary

The arguments for the model_fn.

name : a string or None

An optional name to attach to this layer.
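
Examples

A minimal sketch; my_model_fn below is a hypothetical function, not part of TensorLayer.

>>> x = tf.placeholder(tf.float32, shape=[None, 784])
>>> def my_model_fn(inputs, n_units=64):
...     # a plain TensorFlow block applied to the previous layer's outputs
...     return tf.layers.dense(inputs, n_units, activation=tf.nn.relu)
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.EstimatorLayer(net, model_fn=my_model_fn,
...                                args={'n_units': 64}, name='estimator_layer')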

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Connect TF-Slim

Yes! TF-Slim models can be connected to TensorLayer, so all of Google's pre-trained models can be used easily; see Slim-model.

class tensorlayer.layers.SlimNetsLayer(layer=None, slim_layer=None, slim_args={}, name='tfslim_layer')[source]

The SlimNetsLayer class can be used to merge all TF-Slim nets into TensorLayer. Models can be found in slim-model; see the Inception V3 example on Github.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

slim_layer : a slim network function

The network you want to stack onto; it should end with return net, end_points.

slim_args : dictionary

The arguments for the slim model.

name : a string or None

An optional name to attach to this layer.

Notes

Because TF-Slim stores the layers in a dictionary, the all_layers of this network is not in order! Fortunately, the all_params are in order.
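
Examples

A minimal sketch, assuming the TF-Slim Inception V3 implementation is importable as below (the exact import path varies across TF 1.x versions):

>>> from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3
>>> x = tf.placeholder(tf.float32, shape=[None, 299, 299, 3])
>>> net_in = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.SlimNetsLayer(layer=net_in, slim_layer=inception_v3,
...                               slim_args={'num_classes': 1001, 'is_training': False},
...                               name='InceptionV3')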

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Connect Keras

Yes! Keras models can be connected to TensorLayer; see tutorial_keras.py.

class tensorlayer.layers.KerasLayer(layer=None, keras_layer=None, keras_args={}, name='keras_layer')[source]

The KerasLayer class can be used to merge all Keras layers into TensorLayer. An example can be found in tutorial_keras.py. This layer will be deprecated soon, as LambdaLayer can do the same thing.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

keras_layer : a keras network function
keras_args : dictionary

The arguments for the keras model.

name : a string or None

An optional name to attach to this layer.
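
Examples

A minimal sketch; my_keras_block below is a hypothetical function built from standard Keras layers.

>>> from keras.layers import Dense, Dropout
>>> x = tf.placeholder(tf.float32, shape=[None, 784])
>>> def my_keras_block(inputs):
...     h = Dense(800, activation='relu')(inputs)
...     h = Dropout(0.5)(h)
...     return Dense(10)(h)
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.KerasLayer(net, keras_layer=my_keras_block, name='keras_layer')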

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Parametric activation layer

class tensorlayer.layers.PReluLayer(layer=None, channel_shared=False, a_init=<tensorflow.python.ops.init_ops.Constant object>, a_init_args={}, name='prelu_layer')[source]

The PReluLayer class is a Parametric Rectified Linear Unit (PReLU) layer.

Parameters:
layer : a Layer instance

The Layer class feeding into this layer.

channel_shared : bool. If True, a single weight is shared by all channels.
a_init : alpha initializer, default zero constant.

The initializer for initializing the alphas.

a_init_args : dictionary

The arguments for the weights initializer.

name : A name for this activation op (optional).

References

  • Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
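
Examples

A minimal sketch: a linear DenseLayer followed by a PReLU activation with one learnable alpha per channel.

>>> x = tf.placeholder(tf.float32, shape=[None, 784])
>>> net = tl.layers.InputLayer(x, name='input_layer')
>>> net = tl.layers.DenseLayer(net, n_units=100, act=tf.identity, name='dense')
>>> net = tl.layers.PReluLayer(net, channel_shared=False, name='prelu_layer')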

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Flow control layer

class tensorlayer.layers.MultiplexerLayer(layer=[], name='mux_layer')[source]

The MultiplexerLayer selects one of several inputs and forwards the selected input to the output, see tutorial_mnist_multiplexer.py.

Parameters:
layer : a list of Layer instances

The Layer class feeding into this layer.

name : a string or None

An optional name to attach to this layer.

Examples

>>> x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
>>> y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
>>> # define the network
>>> net_in = tl.layers.InputLayer(x, name='input_layer')
>>> net_in = tl.layers.DropoutLayer(net_in, keep=0.8, name='drop1')
>>> # net 0
>>> net_0 = tl.layers.DenseLayer(net_in, n_units=800,
...                                act = tf.nn.relu, name='net0/relu1')
>>> net_0 = tl.layers.DropoutLayer(net_0, keep=0.5, name='net0/drop2')
>>> net_0 = tl.layers.DenseLayer(net_0, n_units=800,
...                                act = tf.nn.relu, name='net0/relu2')
>>> # net 1
>>> net_1 = tl.layers.DenseLayer(net_in, n_units=800,
...                                act = tf.nn.relu, name='net1/relu1')
>>> net_1 = tl.layers.DropoutLayer(net_1, keep=0.8, name='net1/drop2')
>>> net_1 = tl.layers.DenseLayer(net_1, n_units=800,
...                                act = tf.nn.relu, name='net1/relu2')
>>> net_1 = tl.layers.DropoutLayer(net_1, keep=0.8, name='net1/drop3')
>>> net_1 = tl.layers.DenseLayer(net_1, n_units=800,
...                                act = tf.nn.relu, name='net1/relu3')
>>> # multiplexer
>>> net_mux = tl.layers.MultiplexerLayer(layer = [net_0, net_1], name='mux_layer')
>>> network = tl.layers.ReshapeLayer(net_mux, shape=[-1, 800], name='reshape_layer') #
>>> network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3')
>>> # output layer
>>> network = tl.layers.DenseLayer(network, n_units=10,
...                                act = tf.identity, name='output_layer')
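>>> # A hedged usage sketch: the multiplexer exposes a selector placeholder
>>> # (assumed attribute net_mux.sel); feed an int index to choose which branch to forward.
>>> feed_dict = {x: X_train_a, y_: y_train_a, net_mux.sel: 0}   # forward net_0
>>> feed_dict.update(network.all_drop)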

Methods

count_params() Return the number of parameters in the network
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network

Wrapper

Embedding + Attention + Seq2seq

class tensorlayer.layers.EmbeddingAttentionSeq2seqWrapper(source_vocab_size, target_vocab_size, buckets, size, num_layers, max_gradient_norm, batch_size, learning_rate, learning_rate_decay_factor, use_lstm=False, num_samples=512, forward_only=False, name='wrapper')[source]

Sequence-to-sequence model with attention and for multiple buckets (Deprecated after TF0.12).

This example implements a multi-layer recurrent neural network as the encoder and an attention-based decoder. It is the same as the model described in the paper Grammar as a Foreign Language; please look there for details, or into the seq2seq library for the complete model implementation. The example also allows using GRU cells in addition to LSTM cells, and sampled softmax to handle a large output vocabulary size. A single-layer version of this model, but with a bi-directional encoder, was presented in Neural Machine Translation by Jointly Learning to Align and Translate. The sampled softmax is described in Section 3 of On Using Very Large Target Vocabulary for Neural Machine Translation.

Parameters:
source_vocab_size : size of the source vocabulary.
target_vocab_size : size of the target vocabulary.
buckets : a list of pairs (I, O), where I specifies maximum input length

that will be processed in that bucket, and O specifies maximum output length. Training instances that have inputs longer than I or outputs longer than O will be pushed to the next bucket and padded accordingly. We assume that the list is sorted, e.g., [(2, 4), (8, 16)].

size : number of units in each layer of the model.
num_layers : number of layers in the model.
max_gradient_norm : gradients will be clipped to maximally this norm.
batch_size : the size of the batches used during training;

the model construction is independent of batch_size, so it can be changed after initialization if this is convenient, e.g., for decoding.

learning_rate : learning rate to start with.
learning_rate_decay_factor : decay learning rate by this much when needed.
use_lstm : if true, we use LSTM cells instead of GRU cells.
num_samples : number of samples for sampled softmax.
forward_only : if set, we do not construct the backward pass in the model.
name : a string or None

An optional name to attach to this layer.
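
Examples

A hedged sketch of constructing the wrapper; the numeric values below are illustrative only, not prescriptive.

>>> buckets = [(5, 10), (10, 15), (20, 25), (40, 50)]
>>> model = tl.layers.EmbeddingAttentionSeq2seqWrapper(
...             source_vocab_size=40000, target_vocab_size=40000,
...             buckets=buckets, size=1024, num_layers=3,
...             max_gradient_norm=5.0, batch_size=64, learning_rate=0.5,
...             learning_rate_decay_factor=0.99, forward_only=False)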

Methods

count_params() Return the number of parameters in the network
get_batch(data, bucket_id[, PAD_ID, GO_ID, …]) Get a random batch of data from the specified bucket, prepare for step.
print_layers() Print all info of layers in the network
print_params([details, session]) Print all info of parameters in the network
step(session, encoder_inputs, …) Run a step of the model feeding the given inputs.
get_batch(data, bucket_id, PAD_ID=0, GO_ID=1, EOS_ID=2, UNK_ID=3)[source]

Get a random batch of data from the specified bucket, prepare for step.

To feed data in step(..) it must be a list of batch-major vectors, while data here contains single length-major cases. So the main logic of this function is to re-index data cases to be in the proper format for feeding.

Parameters:
data : a tuple of size len(self.buckets) in which each element contains

lists of pairs of input and output data that we use to create a batch.

bucket_id : integer, which bucket to get the batch for.
PAD_ID : int

Index of Padding in vocabulary

GO_ID : int

Index of GO in vocabulary

EOS_ID : int

Index of End of sentence in vocabulary

UNK_ID : int

Index of Unknown word in vocabulary

Returns:
The triple (encoder_inputs, decoder_inputs, target_weights) for
the constructed batch that has the proper format to call step(…) later.
step(session, encoder_inputs, decoder_inputs, target_weights, bucket_id, forward_only)[source]

Run a step of the model feeding the given inputs.

Parameters:
session : tensorflow session to use.
encoder_inputs : list of numpy int vectors to feed as encoder inputs.
decoder_inputs : list of numpy int vectors to feed as decoder inputs.
target_weights : list of numpy float vectors to feed as target weights.
bucket_id : which bucket of the model to use.
forward_only : whether to do the backward step or only forward.
Returns:
A triple consisting of gradient norm (or None if we did not do backward),
average perplexity, and the outputs.
Raises:
ValueError : if length of encoder_inputs, decoder_inputs, or

target_weights disagrees with bucket size for the specified bucket_id.
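
A hedged sketch of one training step, assuming model, sess, train_set and bucket_id have already been prepared:

>>> encoder_inputs, decoder_inputs, target_weights = model.get_batch(train_set, bucket_id)
>>> _, loss, _ = model.step(sess, encoder_inputs, decoder_inputs,
...                         target_weights, bucket_id, forward_only=False)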

Helper functions

Flatten tensor

tensorlayer.layers.flatten_reshape(variable, name='')[source]

Reshapes a high-dimension input to a vector. [batch_size, mask_row, mask_col, n_mask] —> [batch_size, mask_row * mask_col * n_mask]

Parameters:
variable : a tensorflow variable
name : a string or None

An optional name to attach to this layer.

Examples

>>> W_conv2 = weight_variable([5, 5, 100, 32])   # 32 features for each 5x5 patch (100 input channels)
>>> b_conv2 = bias_variable([32])
>>> W_fc1 = weight_variable([7 * 7 * 32, 256])
>>> h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
>>> h_pool2 = max_pool_2x2(h_conv2)
>>> h_pool2.get_shape()[:].as_list() = [batch_size, 7, 7, 32]
...         [batch_size, mask_row, mask_col, n_mask]
>>> h_pool2_flat = tl.layers.flatten_reshape(h_pool2)
...         [batch_size, mask_row * mask_col * n_mask]
>>> h_pool2_flat_drop = tf.nn.dropout(h_pool2_flat, keep_prob)
...

Permanently clear existing layer names

tensorlayer.layers.clear_layers_name()[source]

Clear all layer names in set_keep['_layers_name_list'], enabling layer name reuse.

Examples

>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(network, n_units=800, name='relu1')
...
>>> tl.layers.clear_layers_name()
>>> network2 = tl.layers.InputLayer(x, name='input_layer')
>>> network2 = tl.layers.DenseLayer(network2, n_units=800, name='relu1')
...

Initialize RNN state

tensorlayer.layers.initialize_rnn_state(state, feed_dict=None)[source]

Returns the initialized RNN state. The inputs are an LSTMStateTuple or the State of RNNCells, and an optional feed_dict.

Parameters:
state : a RNN state.
feed_dict : None or a dictionary for initializing the state values (optional).

If None, returns the zero state.
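
Examples

A hedged sketch, assuming an active tf.InteractiveSession and an RNN layer (e.g. RNNLayer) that exposes an initial_state attribute:

>>> state = tl.layers.initialize_rnn_state(rnn.initial_state)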

Remove repeated items in a list

tensorlayer.layers.list_remove_repeat(l=None)[source]

Remove the repeated items in a list and return the processed list. You may need it when creating merged layers such as Concat, Elementwise, etc.

Parameters:
l : a list

Examples

>>> l = [2, 3, 4, 2, 3]
>>> l = list_remove_repeat(l)
... [2, 3, 4]

Merge networks attributes

tensorlayer.layers.merge_networks(layers=[])[source]

Merge all parameters, layers and dropout probabilities into a single Layer.

Parameters:
layers : a list of Layer instances

All parameters, layers and dropout probabilities are merged into the first layer in the list, which is then returned.

Examples

>>> n1 = ...
>>> n2 = ...
>>> n1 = merge_networks([n1, n2])
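>>> # A more concrete hedged sketch: merge two branches built from the same input layer
>>> net_a = tl.layers.DenseLayer(net_in, n_units=80, name='branch_a')
>>> net_b = tl.layers.DenseLayer(net_in, n_units=80, name='branch_b')
>>> net = tl.layers.merge_networks([net_a, net_b])  # net now also carries net_b's params and layers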