API - Activations

To keep TensorLayer simple, we minimize the number of activation functions and encourage you to use TensorFlow's functions instead. TensorFlow provides tf.nn.relu, tf.nn.relu6, tf.nn.elu, tf.nn.softplus, tf.nn.softsign and so on. For parametric activations, please read the layer APIs.
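
For example, a TensorFlow activation can be passed directly to a layer's act argument; a minimal sketch, following the Dense examples later on this page:

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=tf.nn.relu, name='dense')(net)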

The shortcut of tensorlayer.activation is tensorlayer.act.

Your activation

Customizing an activation function in TensorLayer is very easy. The following example implements an activation that multiplies its input by 2. More complex activations require the TensorFlow API.

def double_activation(x):
    return x * 2

double_activation = lambda x: x * 2
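
The custom activation can then be passed to a layer's act argument in the same way as the built-in ones; a sketch based on the Dense examples below:

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=double_activation, name='dense')(net)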

A file containing various activation functions.

leaky_relu(x[, alpha, name])

leaky_relu can be used through its shortcut: tl.act.lrelu().

leaky_relu6(x[, alpha, name])

leaky_relu6() can be used through its shortcut: tl.act.lrelu6().

leaky_twice_relu6(x[, alpha_low, …])

leaky_twice_relu6() can be used through its shortcut: tl.act.ltrelu6().

ramp(x[, v_min, v_max, name])

Ramp activation function.

swish(x[, name])

Swish function.

sign(x)

Sign function.

hard_tanh(x[, name])

Hard tanh activation function.

pixel_wise_softmax(x[, name])

Return the softmax outputs of images; every pixel has multiple labels, and the values of each pixel sum to 1.

mish(x)

Mish activation function.

Ramp

tensorlayer.activation.ramp(x, v_min=0, v_max=1, name=None)

Ramp activation function.

Reference: tf.clip_by_value (https://www.tensorflow.org/api_docs/python/tf/clip_by_value)

Parameters
  • x (Tensor) – input.

  • v_min (float) – cap input to v_min as a lower bound.

  • v_max (float) – cap input to v_max as an upper bound.

  • name (str) – The function name (optional).
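
Examples

A usage sketch, not from the original docs, following the pattern of the Leaky ReLU example below:

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.ramp(x, v_min=0, v_max=1), name='dense')(net)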

Returns

A Tensor in the same type as x.

Return type

Tensor

Leaky ReLU

tensorlayer.activation.leaky_relu(x, alpha=0.2, name='leaky_relu')

leaky_relu can be used through its shortcut: tl.act.lrelu().

This function is a modified version of ReLU, introducing a nonzero gradient for negative input. Introduced by the paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

The function returns the following results:
  • When x < 0: f(x) = alpha * x.

  • When x >= 0: f(x) = x.

Parameters
  • x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) – Slope.

  • name (str) – The function name (optional).

Examples

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.lrelu(x, 0.2), name='dense')(net)
Returns

A Tensor in the same type as x.

Return type

Tensor

Leaky ReLU6

tensorlayer.activation.leaky_relu6(x, alpha=0.2, name='leaky_relu6')

leaky_relu6() can be used through its shortcut: tl.act.lrelu6().

This activation function is a modified version of leaky_relu(), introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

The function returns the following results:
  • When x < 0: f(x) = alpha * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6.

Parameters
  • x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha (float) – Slope.

  • name (str) – The function name (optional).

Examples

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.leaky_relu6(x, 0.2), name='dense')(net)
Returns

A Tensor in the same type as x.

Return type

Tensor

Twice Leaky ReLU6

tensorlayer.activation.leaky_twice_relu6(x, alpha_low=0.2, alpha_high=0.2, name='leaky_relu6')

leaky_twice_relu6() can be used through its shortcut: tl.act.ltrelu6().

This activation function is a modified version of leaky_relu(), introduced by the following paper: Rectifier Nonlinearities Improve Neural Network Acoustic Models [A. L. Maas et al., 2013]

This activation function also follows the behaviour of the activation function tf.nn.relu6() introduced by the following paper: Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010]

This function pushes the logic further by adding leaky behaviour both below zero and above six.

The function returns the following results:
  • When x < 0: f(x) = alpha_low * x.

  • When x in [0, 6]: f(x) = x.

  • When x > 6: f(x) = 6 + (alpha_high * (x-6)).

Parameters
  • x (Tensor) – Support input type float, double, int32, int64, uint8, int16, or int8.

  • alpha_low (float) – Slope for x < 0: f(x) = alpha_low * x.

  • alpha_high (float) – Slope for x > 6: f(x) = 6 + (alpha_high * (x-6)).

  • name (str) – The function name (optional).

Examples

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=lambda x : tl.act.leaky_twice_relu6(x, 0.2, 0.2), name='dense')(net)
Returns

A Tensor in the same type as x.

Return type

Tensor

Swish

tensorlayer.activation.swish(x, name='swish')

Swish activation function: f(x) = x * sigmoid(x).

Parameters
  • x (Tensor) – input.

  • name (str) – function name (optional).
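
Examples

A usage sketch, not from the original docs, following the pattern used for the other activations on this page:

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=tl.act.swish, name='dense')(net)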

Returns

A Tensor in the same type as x.

Return type

Tensor

Sign

tensorlayer.activation.sign(x)

Sign function.

Clip and binarize the tensor using the straight through estimator (STE) for the gradient; usually used for quantizing values in Binarized Neural Networks: https://arxiv.org/abs/1602.02830.

Parameters

x (Tensor) – input.
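
Examples

A usage sketch, not from the original docs, applying the function directly to a tensor:

>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.constant([-1.5, 0.3, 2.0])
>>> y = tl.act.sign(x)  # binarizes elementwise; gradients pass straight through (STE)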

Returns

A Tensor in the same type as x.

Return type

Tensor

Hard Tanh

tensorlayer.activation.hard_tanh(x, name='htanh')

Hard tanh activation function.

This is a ramp function with a lower bound of -1 and an upper bound of 1; its shortcut is htanh.

Parameters
  • x (Tensor) – input.

  • name (str) – The function name (optional).
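
Examples

A usage sketch, not from the original docs, following the pattern used for the other activations on this page:

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=tl.act.hard_tanh, name='dense')(net)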

Returns

A Tensor in the same type as x.

Return type

Tensor

Pixel-wise softmax

tensorlayer.activation.pixel_wise_softmax(x, name='pixel_wise_softmax')

Return the softmax outputs of images; every pixel has multiple labels, and the values of each pixel sum to 1.

Warning

THIS FUNCTION IS DEPRECATED: It will be removed after 2018-06-30. Instructions for updating: This API will be deprecated soon, as tf.nn.softmax can do the same thing.

It is usually used for image segmentation.

Parameters
  • x (Tensor) –

    input.
    • For a 2d image, a 4D tensor (batch_size, height, width, channel), where channel >= 2.

    • For a 3d image, a 5D tensor (batch_size, depth, height, width, channel), where channel >= 2.

  • name (str) – function name (optional)

Returns

A Tensor in the same type as x.

Return type

Tensor

Examples

>>> outputs = pixel_wise_softmax(network.outputs)
>>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5)

mish

tensorlayer.activation.mish(x)

Mish activation function: f(x) = x * tanh(softplus(x)).

Reference: Mish: A Self Regularized Non-Monotonic Neural Activation Function [Diganta Misra, 2019] (https://arxiv.org/abs/1908.08681)

Parameters

x (Tensor) – input.
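
Examples

A usage sketch, not from the original docs, following the pattern used for the other activations on this page:

>>> import tensorlayer as tl
>>> net = tl.layers.Input([10, 200])
>>> net = tl.layers.Dense(n_units=100, act=tl.act.mish, name='dense')(net)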

Returns

A Tensor in the same type as x.

Return type

Tensor

Parametric activation

See tensorlayer.layers.