API - Load, Save Model and Data

Load benchmark datasets, save and restore models, and save and load variables. TensorFlow provides the .ckpt file format for saving and restoring models, but we suggest using the standard Python file format .npz to save models for the sake of cross-platform portability.

# save model as .ckpt
saver = tf.train.Saver()
save_path = saver.save(sess, "model.ckpt")
# restore model from .ckpt
saver = tf.train.Saver()
saver.restore(sess, "model.ckpt")

# save model as .npz
tl.files.save_npz(network.all_params, name='model.npz')

# restore model from .npz
load_params = tl.files.load_npz(path='', name='model.npz')
tl.files.assign_params(sess, load_params, network)

# you can assign the pre-trained parameters as follows
# assign only the 1st parameter
tl.files.assign_params(sess, [load_params[0]], network)
# assign the first three parameters
tl.files.assign_params(sess, load_params[:3], network)

load_mnist_dataset([shape])
    Automatically download the MNIST dataset and return the training, validation and test sets, with 50000, 10000 and 10000 digit images respectively.
load_cifar10_dataset([shape, plotable, second])
    The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class.
load_ptb_dataset()
    The Penn TreeBank (PTB) dataset is used in many language modeling papers, including “Empirical Evaluation and Combination of Advanced Language Modeling Techniques” and “Recurrent Neural Network Regularization”.
load_matt_mahoney_text8_dataset()
    Download a text file from Matt Mahoney’s website if it is not present, and make sure it is the right size.
load_imbd_dataset([path, nb_words, …])
    Load the IMDB dataset.
load_nietzsche_dataset()
    Load the Nietzsche dataset.
load_wmt_en_fr_dataset([data_dir])
    Download English-to-French translation data from the WMT'15 website (the 10^9-French-English corpus), and the 2013 news test from the same site as the development set.
save_npz([save_list, name])
    Save a list of parameters into a .npz file.
load_npz([path, name])
    Load the parameters of a model saved by tl.files.save_npz().
assign_params(sess, params, network)
    Assign the given parameters to the TensorLayer network.
save_any_to_npy([save_dict, name])
    Save variables to a .npy file.
load_npy_to_any([path, name])
    Load a .npy file.
npz_to_W_pdf([path, regx])
    Convert the first weight matrix of a .npz file to .pdf using tl.visualize.W().
load_file_list([path, regx])
    Return a list of files in a folder, given a path and a regular expression.

Load dataset functions

MNIST

tensorlayer.files.load_mnist_dataset(shape=(-1, 784))

Automatically download the MNIST dataset and return the training, validation and test sets, with 50000, 10000 and 10000 digit images respectively.

Parameters:
shape : tuple

The shape of the digit images.

Examples

>>> X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1,784))
>>> X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))
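
For shape=(-1, 784), the expected array shapes follow from the set sizes above (the 1-D label shape is an assumption here, by analogy with the IMDB example below):

>>> X_train.shape
... (50000, 784)
>>> y_train.shape
... (50000,)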

CIFAR-10

tensorlayer.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False, second=3)

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.

Parameters:
shape : tuple

The shape of the digit images: e.g. (-1, 3, 32, 32), (-1, 32, 32, 3), (-1, 32*32*3)

plotable : boolean

Whether to plot some image examples.

second : int

If plotable is True, the display time in seconds.

References

CIFAR website

Data download link

Code references

Examples

>>> X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=True)

Penn TreeBank (PTB)

tensorlayer.files.load_ptb_dataset()

The Penn TreeBank (PTB) dataset is used in many language modeling papers, including “Empirical Evaluation and Combination of Advanced Language Modeling Techniques” and “Recurrent Neural Network Regularization”.

It consists of 929k training words, 73k validation words, and 82k test words. It has 10k words in its vocabulary.

In “Recurrent Neural Network Regularization”, they trained regularized LSTMs of two sizes; these are denoted the medium LSTM and large LSTM. Both LSTMs have two layers and are unrolled for 35 steps. They initialize the hidden states to zero. They then use the final hidden states of the current minibatch as the initial hidden state of the subsequent minibatch (successive minibatches sequentially traverse the training set). The size of each minibatch is 20.

The medium LSTM has 650 units per layer and its parameters are initialized uniformly in [−0.05, 0.05]. They apply 50% dropout on the non-recurrent connections. They train the LSTM for 39 epochs with a learning rate of 1, and after 6 epochs they decrease it by a factor of 1.2 after each epoch. They clip the norm of the gradients (normalized by minibatch size) at 5.

The large LSTM has 1500 units per layer and its parameters are initialized uniformly in [−0.04, 0.04]. They apply 65% dropout on the non-recurrent connections. They train the model for 55 epochs with a learning rate of 1; after 14 epochs they start to reduce the learning rate by a factor of 1.15 after each epoch. They clip the norm of the gradients (normalized by minibatch size) at 10.
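
As a worked illustration of the schedules above (a sketch only; the 1-indexed epoch convention is an assumption, and this is not part of the loader's API), the medium LSTM's per-epoch learning rates can be computed as:

>>> # learning rate 1 for the first 6 epochs, then divided by 1.2 each subsequent epoch
>>> base_lr, decay, n_epochs = 1.0, 1.2, 39
>>> lrs = [base_lr / decay ** max(0, epoch - 6) for epoch in range(1, n_epochs + 1)]
>>> [round(lr, 3) for lr in lrs[:8]]
... [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.833, 0.694]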

Returns:
train_data, valid_data, test_data, vocab_size

Examples

>>> train_data, valid_data, test_data, vocab_size = tl.files.load_ptb_dataset()

Matt Mahoney’s text8

tensorlayer.files.load_matt_mahoney_text8_dataset()

Download a text file from Matt Mahoney’s website if it is not present, and make sure it is the right size. Extract the first file enclosed in the zip file as a list of words. This dataset can be used for word embedding.

Returns:
word_list : a list

A list of strings (words).

e.g. [… ‘their’, ‘families’, ‘who’, ‘were’, ‘expelled’, ‘from’, ‘jerusalem’, …]
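
Examples

A minimal usage sketch (the Counter-based vocabulary is illustrative, not part of the loader's API):

>>> import collections
>>> words = tl.files.load_matt_mahoney_text8_dataset()
>>> counts = collections.Counter(words)                 # word frequencies
>>> vocab = [w for w, _ in counts.most_common(50000)]   # top-50k words for embedding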

IMDB

tensorlayer.files.load_imbd_dataset(path='imdb.pkl', nb_words=None, skip_top=0, maxlen=None, test_split=0.2, seed=113, start_char=1, oov_char=2, index_from=3)

Load the IMDB dataset.

References

Modified from Keras.

Examples

>>> X_train, y_train, X_test, y_test = tl.files.load_imbd_dataset(
...                                 nb_words=20000, test_split=0.2)
>>> print('X_train.shape', X_train.shape)
... (20000,)  [[1, 62, 74, ... 1033, 507, 27],[1, 60, 33, ... 13, 1053, 7]..]
>>> print('y_train.shape', y_train.shape)
... (20000,)  [1 0 0 ..., 1 0 1]

Nietzsche

tensorlayer.files.load_nietzsche_dataset()

Load the Nietzsche dataset. Returns a string.

Examples

>>> see tutorial_generate_text.py
>>> words = tl.files.load_nietzsche_dataset()
>>> words = basic_clean_str(words)
>>> words = words.split()

English-to-French translation data from the WMT'15 website

tensorlayer.files.load_wmt_en_fr_dataset(data_dir='wmt')

Download English-to-French translation data from the WMT'15 website (the 10^9-French-English corpus), and the 2013 news test from the same site as the development set. Returns the directories of the training data and the development data.

Parameters:
data_dir : a string

The directory to store the dataset.

References

Code modified from /tensorflow/models/rnn/translation/data_utils.py
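
Examples

A minimal usage sketch; the names train_path and dev_path are illustrative, assuming the two return values are the training-data and development-data directories described above:

>>> train_path, dev_path = tl.files.load_wmt_en_fr_dataset(data_dir='wmt')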

Load and save network

Save network as .npz

tensorlayer.files.save_npz(save_list=[], name='model.npz')

Save a list of parameters into a .npz file, given the list and a file name. Use tl.files.load_npz() to restore.

Parameters:
save_list : a list

A list of the parameters to be saved.

name : a string or None

The name of the .npz file.

References

Saving dictionary using numpy

Examples

>>> tl.files.save_npz(network.all_params, name='model_test.npz')
... File saved to: model_test.npz
>>> load_params = tl.files.load_npz(name='model_test.npz')
... Loading param0, (784, 800)
... Loading param1, (800,)
... Loading param2, (800, 800)
... Loading param3, (800,)
... Loading param4, (800, 10)
... Loading param5, (10,)
>>> To put the parameters into a TensorLayer network, see assign_params().

Load network from .npz

tensorlayer.files.load_npz(path='', name='model.npz')

Load the parameters of a model saved by tl.files.save_npz().

Parameters:
path : a string

Folder path to .npz file.

name : a string or None

The name of the .npz file.

References

Saving dictionary using numpy

Examples

See save_npz and assign_params
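
For example, restoring and assigning in one step (following the overview above):

>>> load_params = tl.files.load_npz(name='model_test.npz')
>>> tl.files.assign_params(sess, load_params, network)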

tensorlayer.files.assign_params(sess, params, network)

Assign the given parameters to the TensorLayer network.

Parameters:
sess : TensorFlow Session
params : a list

A list of parameters in order.

network : a Layer class

The network to be assigned.

References

Assign value to a TensorFlow variable

Examples

>>> Save your network as follows:
>>> tl.files.save_npz(network.all_params, name='model_test.npz')
>>> network.print_params()
...
... Next time, load and assign your network as follows:
>>> sess.run(tf.initialize_all_variables()) # re-initialize variables, then load and assign
>>> load_params = tl.files.load_npz(name='model_test.npz')
>>> tl.files.assign_params(sess, load_params, network)
>>> network.print_params()

Load and save variables

Save variables as .npy

tensorlayer.files.save_any_to_npy(save_dict={}, name='any.npy')

Save variables to a .npy file.

Examples

>>> tl.files.save_any_to_npy(save_dict={'data': ['a','b']}, name='test.npy')
>>> data = tl.files.load_npy_to_any(name='test.npy')
>>> print(data)
... {'data': ['a','b']}

Load variables from .npy

tensorlayer.files.load_npy_to_any(path='', name='any.npy')

Load a .npy file.

Examples

See save_any_to_npy().

Visualizing npz file

tensorlayer.files.npz_to_W_pdf(path=None, regx='w1pre_[0-9]+\\.(npz)')

Convert the first weight matrix of a .npz file to .pdf using tl.visualize.W().

Parameters:
path : a string or None

A folder path to npz files.

regx : a string

The regular expression for matching file names.

Examples

>>> Convert the first weight matrix of w1_pre...npz file to w1_pre...pdf.
>>> tl.files.npz_to_W_pdf(path='/Users/.../npz_file/', regx='w1pre_[0-9]+\.(npz)')

Helper functions

tensorlayer.files.load_file_list(path=None, regx='\\.npz')

Return a list of files in a folder, given a path and a regular expression.

Parameters:
path : a string or None

A folder path.

regx : a string

The regular expression for matching file names.

Examples

>>> file_list = tl.files.load_file_list(path=None, regx='w1pre_[0-9]+\.(npz)')