API - Activations

To keep TensorLayer simple, we minimize the number of activation functions, and we encourage you to use TensorFlow's functions directly. TensorFlow provides tf.nn.relu, tf.nn.relu6, tf.nn.elu, tf.nn.softplus, tf.nn.softsign and so on; more official TensorFlow activation functions can be found in the TensorFlow documentation. For parametric activations, please read the layer APIs.

The shortcut of tensorlayer.activation is tensorlayer.act.

Your activation

Customizing an activation function in TensorLayer is very easy. The following example implements an activation that multiplies its input by 2. For more complex activations, the TensorFlow API is required.

def double_activation(x):
    return x * 2
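
A function like this can then be passed to a layer's act argument, just like a TensorFlow built-in. The snippet below is a minimal, illustrative sketch assuming the TensorFlow 1.x graph API; the layer names and sizes are made up:

import tensorflow as tf
import tensorlayer as tl

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
net = tl.layers.InputLayer(x, name='input')
# A TensorFlow built-in activation can be passed directly ...
net = tl.layers.DenseLayer(net, n_units=100, act=tf.nn.relu, name='dense1')
# ... and so can the custom activation defined above.
net = tl.layers.DenseLayer(net, n_units=100, act=double_activation, name='dense2')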

TensorLayer provides the following activation functions, documented in detail below:

  • identity(x) – The identity activation function.
  • ramp(x[, v_min, v_max, name]) – The ramp activation function.
  • leaky_relu(x[, alpha, name]) – The LeakyReLU; shortcut is lrelu.
  • swish(x[, name]) – The Swish function.
  • sign(x) – Sign function.
  • hard_tanh(x[, name]) – Hard tanh activation function.
  • pixel_wise_softmax(x[, name]) – Return the softmax outputs of images, where every pixel has multiple labels and the values for each pixel sum to 1.

Identity

tensorlayer.activation.identity(x)[source]

The identity activation function. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2018-06-30. Instructions for updating: This API will be deprecated soon as tf.identity can do the same thing.

Shortcut is linear.

Parameters: x (Tensor) – input.
Returns: A Tensor with the same type as x.
Return type: Tensor

Ramp

tensorlayer.activation.ramp(x, v_min=0, v_max=1, name=None)[source]

The ramp activation function.

Parameters:
  • x (Tensor) – input.
  • v_min (float) – cap input to v_min as a lower bound.
  • v_max (float) – cap input to v_max as an upper bound.
  • name (str) – The function name (optional).
Returns: A Tensor with the same type as x.
Return type: Tensor
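
Conceptually, ramp clips its input to the interval [v_min, v_max]. A minimal sketch of an equivalent computation in plain TensorFlow (illustrative, not the library's source) is:

import tensorflow as tf

def ramp_sketch(x, v_min=0, v_max=1, name=None):
    # Cap x below at v_min and above at v_max.
    return tf.clip_by_value(x, clip_value_min=v_min, clip_value_max=v_max, name=name)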

Leaky ReLU

tensorlayer.activation.leaky_relu(x, alpha=0.1, name='lrelu')[source]

The LeakyReLU; shortcut is lrelu.

Modified version of ReLU, introducing a nonzero gradient for negative input.

Parameters:
  • x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.
  • alpha (float) – Slope.
  • name (str) – The function name (optional).

Examples

>>> net = tl.layers.DenseLayer(net, 100, act=lambda x : tl.act.lrelu(x, 0.2), name='dense')

Returns: A Tensor with the same type as x.
Return type: Tensor
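
In other words, leaky_relu returns x for positive input and alpha * x otherwise. An equivalent plain-TensorFlow sketch (illustrative, assuming 0 < alpha < 1 and a float input) is:

import tensorflow as tf

def leaky_relu_sketch(x, alpha=0.1):
    # max(x, alpha * x) equals x when x > 0 and alpha * x when x <= 0, for 0 < alpha < 1.
    return tf.maximum(x, alpha * x)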

Swish

tensorlayer.activation.swish(x, name='swish')[source]

The Swish function.

See Swish: a Self-Gated Activation Function.

Parameters:
  • x (Tensor) – input.
  • name (str) – function name (optional).
Returns: A Tensor with the same type as x.
Return type: Tensor
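
Swish is defined as x * sigmoid(x). A minimal equivalent sketch in plain TensorFlow:

import tensorflow as tf

def swish_sketch(x, name='swish'):
    # x * sigmoid(x), as described in the Swish paper.
    with tf.name_scope(name):
        return x * tf.nn.sigmoid(x)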

Sign

tensorlayer.activation.sign(x)[source]

Sign function.

Clips and binarizes the tensor, using the straight-through estimator (STE) for the gradient; usually used for quantizing values in binarized neural networks.

Parameters: x (Tensor) – input.
Returns: A Tensor with the same type as x.
Return type: Tensor
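
A common way to express the straight-through estimator in TensorFlow is to binarize in the forward pass while routing the gradient around the non-differentiable tf.sign with tf.stop_gradient. The following is an illustrative sketch of that idea, not the library's exact implementation (which may additionally clip the gradient):

import tensorflow as tf

def sign_ste_sketch(x):
    # Forward pass: tf.sign(x), with values in {-1, 0, +1}.
    # Backward pass: the stop_gradient term contributes no gradient, so the
    # gradient with respect to x is 1, i.e. it passes straight through.
    return x + tf.stop_gradient(tf.sign(x) - x)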

Hard Tanh

tensorlayer.activation.hard_tanh(x, name='htanh')[source]

Hard tanh activation function.

This is a ramp function with a lower bound of -1 and an upper bound of 1. Shortcut is htanh.

Parameters:
  • x (Tensor) – input.
  • name (str) – The function name (optional).
Returns: A Tensor with the same type as x.
Return type: Tensor
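
Hard tanh is simply the ramp function clipped to [-1, 1]; an equivalent sketch in plain TensorFlow:

import tensorflow as tf

def hard_tanh_sketch(x, name='htanh'):
    # Clip the input to the interval [-1, 1].
    return tf.clip_by_value(x, -1.0, 1.0, name=name)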

Pixel-wise softmax

tensorlayer.activation.pixel_wise_softmax(x, name='pixel_wise_softmax')[source]

Return the softmax outputs of images, where every pixel has multiple labels and the values for each pixel sum to 1. (deprecated)

THIS FUNCTION IS DEPRECATED. It will be removed after 2018-06-30. Instructions for updating: This API will be deprecated soon as tf.nn.softmax can do the same thing.

Usually used for image segmentation.

Parameters:
  • x (Tensor) – input.
    • For a 2D image: a 4D tensor (batch_size, height, width, channel), where channel >= 2.
    • For a 3D image: a 5D tensor (batch_size, depth, height, width, channel), where channel >= 2.
  • name (str) – function name (optional).
Returns: A Tensor with the same type as x.
Return type: Tensor

Examples

>>> outputs = pixel_wise_softmax(network.outputs)
>>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5)
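
As the deprecation note says, tf.nn.softmax applied over the last (channel) dimension produces the same result; an illustrative equivalent:

>>> outputs = tf.nn.softmax(network.outputs)  # softmax over the last (channel) axis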

Parametric activation

See tensorlayer.layers.