API - Activations¶
To keep TensorLayer simple, we minimize the number of activation functions as much as we can, and we encourage you to use TensorFlow's built-in functions instead. TensorFlow provides tf.nn.relu, tf.nn.relu6, tf.nn.elu, tf.nn.softplus, tf.nn.softsign, and so on; more official activation functions can be found in the TensorFlow documentation.
For parametric activations, please read the layer APIs.
The shortcut for tensorlayer.activation is tensorlayer.act.
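Any of these TensorFlow functions can be passed directly to a layer's act argument. A minimal usage sketch (the placeholder shape and layer names here are illustrative, not from the original page):

import tensorflow as tf
import tensorlayer as tl

# Build a dense layer that applies tf.nn.relu to its output.
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
net = tl.layers.InputLayer(x, name='input')
net = tl.layers.DenseLayer(net, n_units=100, act=tf.nn.relu, name='dense')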
Your activation¶
Customizing an activation function in TensorLayer is very easy. The following example implements an activation that multiplies its input by 2. For more complex activations, the TensorFlow API is required.
def double_activation(x):
    # Scale the input tensor by a factor of 2.
    return x * 2
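The custom function can then be used like any built-in activation (a usage sketch; the layer name is illustrative):

>>> net = tl.layers.DenseLayer(net, n_units=100, act=double_activation, name='double')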
The tensorlayer.activation module provides the following activation functions:
identity(x)                      | Identity activation function.
ramp(x[, v_min, v_max, name])    | Ramp activation function.
leaky_relu(x[, alpha, name])     | LeakyReLU, shortcut is lrelu.
swish(x[, name])                 | Swish function.
sign(x)                          | Sign function.
hard_tanh(x[, name])             | Hard tanh activation function.
pixel_wise_softmax(x[, name])    | Softmax over the channels of an image; every pixel has multiple labels that sum to 1.
Identity¶
tensorlayer.activation.identity(x)[source]¶
Identity activation function. (deprecated)
THIS FUNCTION IS DEPRECATED. It will be removed after 2018-06-30. Instructions for updating: This API will be deprecated soon as tf.identity can do the same thing.
Shortcut is linear.
Parameters: x (Tensor) – input.
Returns: A Tensor of the same type as x.
Return type: Tensor
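As the deprecation notice says, this function does nothing but return its input; a minimal sketch of the behavior (not necessarily TensorLayer's exact code):

def identity(x):
    # Return the input tensor unchanged; tf.identity is the recommended replacement.
    return x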
Ramp¶
tensorlayer.activation.ramp(x, v_min=0, v_max=1, name=None)[source]¶
Ramp activation function.
Parameters:
- x (Tensor) – input.
- v_min (float) – cap input to v_min as a lower bound.
- v_max (float) – cap input to v_max as an upper bound.
- name (str) – The function name (optional).
Returns: A Tensor of the same type as x.
Return type: Tensor
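Ramp clips every element of the input into [v_min, v_max]. A minimal sketch of the equivalent computation in plain TensorFlow (not necessarily TensorLayer's exact code):

import tensorflow as tf

def ramp(x, v_min=0, v_max=1, name=None):
    # Clip each element of x to the range [v_min, v_max].
    return tf.clip_by_value(x, clip_value_min=v_min, clip_value_max=v_max, name=name)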
Leaky Relu¶
tensorlayer.activation.leaky_relu(x, alpha=0.1, name='lrelu')[source]¶
LeakyReLU, shortcut is lrelu.
Modified version of ReLU, introducing a nonzero gradient for negative input.
Parameters:
- x (Tensor) – Supported input types: float, double, int32, int64, uint8, int16, or int8.
- alpha (float) – Slope.
- name (str) – The function name (optional).
Examples
>>> net = tl.layers.DenseLayer(net, 100, act=lambda x: tl.act.lrelu(x, 0.2), name='dense')
Returns: A Tensor of the same type as x.
Return type: Tensor
References
- Rectifier Nonlinearities Improve Neural Network Acoustic Models, Maas et al. (2013): http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
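Conceptually, leaky_relu computes f(x) = max(x, alpha * x) for 0 < alpha < 1. A minimal sketch (not necessarily TensorLayer's exact code):

import tensorflow as tf

def leaky_relu(x, alpha=0.1):
    # Positive inputs pass through unchanged; negative inputs are scaled by alpha.
    return tf.maximum(x, alpha * x)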
Swish¶
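Swish is commonly defined as the input multiplied by its own sigmoid. A hedged sketch consistent with the signature swish(x[, name]) from the summary table above (an assumption, not necessarily TensorLayer's exact code):

import tensorflow as tf

def swish(x, name='swish'):
    # Swish: gate the input by its own sigmoid.
    with tf.name_scope(name):
        return tf.nn.sigmoid(x) * x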
Sign¶
tensorlayer.activation.sign(x)[source]¶
Sign function.
Clip and binarize the tensor using the straight-through estimator (STE) for the gradient; usually used for quantizing values in Binarized Neural Networks: https://arxiv.org/abs/1602.02830.
Parameters: x (Tensor) – input.
Returns: A Tensor of the same type as x.
Return type: Tensor
References
- BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1, Courbariaux et al. (2016): https://arxiv.org/abs/1602.02830
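A hedged sketch of one common straight-through-estimator formulation (not necessarily TensorLayer's exact code): the forward pass binarizes to -1/+1, while the backward pass uses the gradient of clipping to [-1, 1]:

import tensorflow as tf

def sign_ste(x):
    # Forward: sign(x). Backward: gradient of tf.clip_by_value, i.e. the
    # gradient passes through where |x| <= 1 and is blocked elsewhere.
    clipped = tf.clip_by_value(x, -1.0, 1.0)
    return clipped + tf.stop_gradient(tf.sign(x) - clipped)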
Hard Tanh¶
tensorlayer.activation.hard_tanh(x, name='htanh')[source]¶
Hard tanh activation function.
This is a ramp function with a lower bound of -1 and an upper bound of 1; the shortcut is htanh.
Parameters:
- x (Tensor) – input.
- name (str) – The function name (optional).
Returns: A Tensor of the same type as x.
Return type: Tensor
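Since hard tanh is the ramp function on [-1, 1], a minimal sketch is a single clip (not necessarily TensorLayer's exact code):

import tensorflow as tf

def hard_tanh(x, name='htanh'):
    # Clip each element into [-1, 1].
    return tf.clip_by_value(x, -1.0, 1.0, name=name)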
Pixel-wise softmax¶
tensorlayer.activation.pixel_wise_softmax(x, name='pixel_wise_softmax')[source]¶
Return the softmax outputs of images; every pixel has multiple labels, and the label values of each pixel sum to 1. (deprecated)
THIS FUNCTION IS DEPRECATED. It will be removed after 2018-06-30. Instructions for updating: This API will be deprecated soon as tf.nn.softmax can do the same thing.
Usually used for image segmentation.
Parameters:
- x (Tensor) – input.
  - For a 2D image: a 4D tensor (batch_size, height, width, channel), where channel >= 2.
  - For a 3D image: a 5D tensor (batch_size, depth, height, width, channel), where channel >= 2.
- name (str) – function name (optional).
Returns: A Tensor of the same type as x.
Return type: Tensor
Examples
>>> outputs = pixel_wise_softmax(network.outputs)
>>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5)
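As the deprecation notice says, this is equivalent to a softmax over the channel (last) axis, applied independently at each pixel; a minimal sketch (not necessarily TensorLayer's exact code):

import tensorflow as tf

def pixel_wise_softmax(x, name='pixel_wise_softmax'):
    # Softmax over the last axis (channels), computed independently per pixel.
    return tf.nn.softmax(x, name=name)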
Parametric activation¶
See tensorlayer.layers.