API - Utility

fit(network, train_op, cost, X_train, y_train) Train a given non-time-series network with the given cost function, training data, batch_size, n_epoch, etc.
test(network, acc, X_test, y_test, batch_size) Test a given non-time-series network with the given test data and metric.
predict(network, X[, batch_size]) Return the prediction results of a given non-time-series network.
evaluation([y_test, y_predict, n_classes]) Input the predicted results, the target results and the number of classes; return the confusion matrix, the F1-score of each class, the accuracy and the macro F1-score.
class_balancing_oversample([X_train, …]) Input the features and labels, return the features and labels after oversampling.
get_random_int([min_v, max_v, number, seed]) Return a list of random integers for the given range and quantity.
dict_to_one(dp_dict) Input a dictionary, return a dictionary with all items set to one.
list_string_to_dict(string) Inputs ['a', 'b', 'c'], returns {'a': 0, 'b': 1, 'c': 2}.
flatten_list(list_of_list) Input a list of lists, return a single flattened list containing all items.
exit_tensorflow([port]) Close TensorBoard and the NVIDIA process if available.
open_tensorboard([log_dir, port]) Open TensorBoard.
clear_all_placeholder_variables([printable]) Clear all keep-prob placeholder variables, i.e. the keeping probabilities of dropout, denoising, dropconnect, etc.
set_gpu_fraction([gpu_fraction]) Set the GPU memory fraction for the application.

Training, testing and predicting

Training

tensorlayer.utils.fit(network, train_op, cost, X_train, y_train, acc=None, batch_size=100, n_epoch=100, print_freq=5, X_val=None, y_val=None, eval_train=True, tensorboard_dir=None, tensorboard_epoch_freq=5, tensorboard_weight_histograms=True, tensorboard_graph_vis=True)[source]

Train a given non-time-series network with the given cost function, training data, batch_size, n_epoch, etc.

  • See the MNIST example.
  • To control the training details yourself, the authors highly recommend tl.iterate; see the two MNIST examples 1 and 2.
Parameters:
  • network (TensorLayer Model) – The network to be trained.
  • train_op (TensorFlow optimizer) – The optimizer for training, e.g. tf.optimizers.Adam().
  • cost (TensorLayer or TensorFlow loss function) – Metric for the loss function, e.g. tl.cost.cross_entropy.
  • X_train (numpy.array) – The input of the training data.
  • y_train (numpy.array) – The target of the training data.
  • acc (TensorFlow/numpy expression or None) – Metric for accuracy or others. If None, the accuracy information is not printed.
  • batch_size (int) – The batch size for training and evaluating.
  • n_epoch (int) – The number of training epochs.
  • print_freq (int) – Print the training information every print_freq epochs.
  • X_val (numpy.array or None) – The input of the validation data. If None, validation is not performed.
  • y_val (numpy.array or None) – The target of the validation data. If None, validation is not performed.
  • eval_train (boolean) – Whether to evaluate the model during training. If X_val and y_val are not None, it controls whether the model is also evaluated on the training data.
  • tensorboard_dir (string) – Path to the log directory. If set, summary data is stored under tensorboard_dir/ for visualization with TensorBoard (default None).
  • tensorboard_epoch_freq (int) – How many epochs between storing TensorBoard summaries to the log directory (default 5).
  • tensorboard_weight_histograms (boolean) – If True, update the TensorBoard data in the log directory with the weight histograms every tensorboard_epoch_freq epochs (default True).
  • tensorboard_graph_vis (boolean) – If True, store the graph in the TensorBoard summaries saved to the log directory (default True).

Examples

See tutorial_mnist_simple.py

>>> tl.utils.fit(network, train_op=tf.optimizers.Adam(learning_rate=0.0001),
...              cost=tl.cost.cross_entropy, X_train=X_train, y_train=y_train, acc=acc,
...              batch_size=64, n_epoch=20, X_val=X_val, y_val=y_val, eval_train=True)
>>> tl.utils.fit(network, train_op, cost, X_train, y_train,
...              acc=acc, batch_size=500, n_epoch=200, print_freq=5,
...              X_val=X_val, y_val=y_val, eval_train=False, tensorboard_dir='/tmp/tensorflow')
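
The acc argument used above is assumed to be a callable metric; a minimal sketch (the same metric as in the test() example below):

>>> import numpy as np
>>> def acc(_logits, y_batch):
...     return np.mean(np.equal(np.argmax(_logits, 1), y_batch))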

Notes

‘tensorboard_weight_histograms’ and ‘tensorboard_graph_vis’ are not supported now.

Evaluation

tensorlayer.utils.test(network, acc, X_test, y_test, batch_size, cost=None)[source]

Test a given non time-series network by the given test data and metric.

Parameters:
  • network (TensorLayer Model) – The network.
  • acc (TensorFlow/numpy expression or None) – Metric for accuracy or others. If None, the accuracy information is not printed.
  • X_test (numpy.array) – The input of the testing data.
  • y_test (numpy.array) – The target of the testing data.
  • batch_size (int or None) – The batch size for testing. When the dataset is large, use mini-batches for testing; if the dataset is small, it can be set to None.
  • cost (TensorLayer or TensorFlow loss function) – Metric for the loss function, e.g. tl.cost.cross_entropy. If None, the loss information is not printed.

Examples

See tutorial_mnist_simple.py

>>> def acc(_logits, y_batch):
...     return np.mean(np.equal(np.argmax(_logits, 1), y_batch))
>>> tl.utils.test(network, acc, X_test, y_test, batch_size=None, cost=tl.cost.cross_entropy)

Prediction

tensorlayer.utils.predict(network, X, batch_size=None)[source]

Return the prediction results of a given non-time-series network.

Parameters:
  • network (TensorLayer Model) – The network.
  • X (numpy.array) – The inputs.
  • batch_size (int or None) – The batch size for prediction. When the dataset is large, use mini-batches for prediction; if the dataset is small, it can be set to None.

Examples

See tutorial_mnist_simple.py

>>> _logits = tl.utils.predict(network, X_test)
>>> y_pred = np.argmax(_logits, 1)

Evaluation functions

tensorlayer.utils.evaluation(y_test=None, y_predict=None, n_classes=None)[source]

Input the predicted results, the target results and the number of classes; return the confusion matrix, the F1-score of each class, the accuracy and the macro F1-score.

Parameters:
  • y_test (list) – The target results.
  • y_predict (list) – The predicted results.
  • n_classes (int) – The number of classes.

Examples

>>> c_mat, f1, acc, f1_macro = tl.utils.evaluation(y_test, y_predict, n_classes)

Class balancing functions

tensorlayer.utils.class_balancing_oversample(X_train=None, y_train=None, printable=True)[source]

Input the features and labels, return the features and labels after oversampling.

Parameters:
  • X_train (numpy.array) – The inputs.
  • y_train (numpy.array) – The targets.
  • printable (boolean) – If True, print the class balancing information.

Examples

One X

>>> X_train, y_train = class_balancing_oversample(X_train, y_train, printable=True)

Two X

>>> X, y = tl.utils.class_balancing_oversample(X_train=np.hstack((X1, X2)), y_train=y, printable=False)
>>> X1 = X[:, 0:5]
>>> X2 = X[:, 5:]

Random functions

tensorlayer.utils.get_random_int(min_v=0, max_v=10, number=5, seed=None)[source]

Return a list of random integers for the given range and quantity.

Parameters:
  • min_v (number) – The minimum value.
  • max_v (number) – The maximum value.
  • number (int) – The number of values.
  • seed (int or None) – The random seed.

Examples

>>> r = get_random_int(min_v=0, max_v=10, number=5)
[10, 2, 3, 3, 7]

Dictionary and list

Set all items in dictionary to one

tensorlayer.utils.dict_to_one(dp_dict)[source]

Input a dictionary, return a dictionary with all items set to one.

Used to disable dropout, dropconnect layers and so on.

Parameters:dp_dict (dictionary) – The dictionary containing keys and numbers, e.g. keeping probabilities.
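
Examples

A minimal usage sketch (the key names below are illustrative, not from the library):

>>> dp_dict = {'keep_prob_1': 0.5, 'keep_prob_2': 0.8}
>>> tl.utils.dict_to_one(dp_dict)
{'keep_prob_1': 1, 'keep_prob_2': 1}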

Convert list of string to dictionary

tensorlayer.utils.list_string_to_dict(string)[source]

Inputs ['a', 'b', 'c'], returns {'a': 0, 'b': 1, 'c': 2}.
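
Examples

A minimal usage sketch, directly following the description above:

>>> tl.utils.list_string_to_dict(['a', 'b', 'c'])
{'a': 0, 'b': 1, 'c': 2}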

Flatten a list

tensorlayer.utils.flatten_list(list_of_list)[source]

Input a list of lists, return a single flattened list containing all items.

Parameters:list_of_list (a list of lists) – The list to be flattened.

Examples

>>> tl.utils.flatten_list([[1, 2, 3],[4, 5],[6]])
[1, 2, 3, 4, 5, 6]

Close TF session and associated processes

tensorlayer.utils.exit_tensorflow(port=6006)[source]

Close TensorBoard and the NVIDIA process if available.

Parameters:port (int) – The TensorBoard port to close, 6006 by default.
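
Examples

A usage sketch, assuming TensorBoard was started on the default port:

>>> tl.utils.exit_tensorflow(port=6006)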

Open TensorBoard

tensorlayer.utils.open_tensorboard(log_dir='/tmp/tensorflow', port=6006)[source]

Open TensorBoard.

Parameters:
  • log_dir (str) – The directory where the TensorBoard logs are saved.
  • port (int) – The TensorBoard port to open; 6006 is the TensorBoard default.
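
Examples

A usage sketch with the default log directory and port:

>>> tl.utils.open_tensorboard(log_dir='/tmp/tensorflow', port=6006)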

Clear TensorFlow placeholder

tensorlayer.utils.clear_all_placeholder_variables(printable=True)[source]

Clear all keep-prob placeholder variables, i.e. the keeping probabilities of dropout, denoising, dropconnect, etc.

Parameters:printable (boolean) – If True, print all deleted variables.
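
Examples

A usage sketch that prints the deleted variables:

>>> tl.utils.clear_all_placeholder_variables(printable=True)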

Set GPU functions

tensorlayer.utils.set_gpu_fraction(gpu_fraction=0.3)[source]

Set the GPU memory fraction for the application.

Parameters:gpu_fraction (None or float) – Fraction of GPU memory in (0, 1]. If None, allow GPU memory growth.
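
Examples

A usage sketch; the fraction 0.5 is illustrative:

>>> tl.utils.set_gpu_fraction(gpu_fraction=0.5)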
