API - Models

TensorLayer provides several pretrained models; through these APIs you can use a whole model or only a part of one.

VGG16(x[, end_with, reuse]) Pre-trained VGG-16 model.
VGG19(x[, end_with, reuse]) Pre-trained VGG-19 model.
SqueezeNetV1(x[, end_with, is_train, reuse]) Pre-trained SqueezeNetV1 model.
MobileNetV1(x[, end_with, is_train, reuse]) Pre-trained MobileNetV1 model.

VGG16

class tensorlayer.models.VGG16(x, end_with='fc3_relu', reuse=None)

Pre-trained VGG-16 model.

Parameters:
  • x (placeholder) – shape [None, 224, 224, 3], value range [0, 1].
  • end_with (str) – The end point of the model. Defaults to fc3_relu, i.e. the whole model.
  • reuse (boolean) – Whether to reuse the model.

Examples

Classify ImageNet classes with VGG16 (see tutorial_models_vgg16.py)

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get the whole model
>>> vgg = tl.models.VGG16(x)
>>> # restore pre-trained VGG parameters
>>> sess = tf.InteractiveSession()
>>> vgg.restore_params(sess)
>>> # use for inference
>>> probs = tf.nn.softmax(vgg.outputs)
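tf.nn.softmax turns the model's output logits into a probability distribution over the ImageNet classes. As a plain-Python sketch of what that op computes (purely illustrative, not part of the TensorLayer API):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating,
    # as tf.nn.softmax does internally.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # probabilities that sum to 1
```

The largest logit always maps to the largest probability, so argmax over `probs` gives the predicted class.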

Extract features with VGG16 and train a classifier with 100 classes

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get VGG without the last layer
>>> vgg = tl.models.VGG16(x, end_with='fc2_relu')
>>> # add one more layer
>>> net = tl.layers.DenseLayer(vgg, 100, name='out')
>>> # initialize all parameters
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> # restore pre-trained VGG parameters
>>> vgg.restore_params(sess)
>>> # train your own classifier (only update the last layer)
>>> train_params = tl.layers.get_variables_with_name('out')
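tl.layers.get_variables_with_name selects the variables whose names contain the given string, so that only those (here, the new 'out' layer) are passed to the optimizer and the pretrained VGG weights stay frozen. A rough stdlib-only sketch of the name-filtering idea (the variable names below are hypothetical, and real usage operates on tf.Variable objects, not strings):

```python
# Hypothetical variable names, as a TensorFlow 1.x graph might create them.
all_var_names = ['conv1_1/W_conv2d', 'conv1_1/b_conv2d', 'fc2_relu/W', 'out/W', 'out/b']

def variables_with_name(name, var_names):
    """Keep only entries whose name contains `name`, mimicking the
    filtering that tl.layers.get_variables_with_name performs."""
    return [v for v in var_names if name in v]

train_params = variables_with_name('out', all_var_names)  # only the new layer's weights
```

In the real graph, `train_params` would then be handed to the optimizer's `var_list` so that gradient updates touch only the last layer.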

Reuse model

>>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get VGG without the last layer
>>> vgg1 = tl.models.VGG16(x1, end_with='fc2_relu')
>>> # reuse the parameters of vgg1 with different input
>>> vgg2 = tl.models.VGG16(x2, end_with='fc2_relu', reuse=True)
>>> # restore pre-trained VGG parameters (as they share parameters, we don’t need to restore vgg2)
>>> sess = tf.InteractiveSession()
>>> vgg1.restore_params(sess)

VGG19

class tensorlayer.models.VGG19(x, end_with='fc3_relu', reuse=None)

Pre-trained VGG-19 model.

Parameters:
  • x (placeholder) – shape [None, 224, 224, 3], value range [0, 1].
  • end_with (str) – The end point of the model. Defaults to fc3_relu, i.e. the whole model.
  • reuse (boolean) – Whether to reuse the model.

Examples

Classify ImageNet classes with VGG19 (see tutorial_models_vgg19.py)

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get the whole model
>>> vgg = tl.models.VGG19(x)
>>> # restore pre-trained VGG parameters
>>> sess = tf.InteractiveSession()
>>> vgg.restore_params(sess)
>>> # use for inference
>>> probs = tf.nn.softmax(vgg.outputs)

Extract features with VGG19 and train a classifier with 100 classes

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get VGG without the last layer
>>> vgg = tl.models.VGG19(x, end_with='fc2_relu')
>>> # add one more layer
>>> net = tl.layers.DenseLayer(vgg, 100, name='out')
>>> # initialize all parameters
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> # restore pre-trained VGG parameters
>>> vgg.restore_params(sess)
>>> # train your own classifier (only update the last layer)
>>> train_params = tl.layers.get_variables_with_name('out')

Reuse model

>>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get VGG without the last layer
>>> vgg1 = tl.models.VGG19(x1, end_with='fc2_relu')
>>> # reuse the parameters of vgg1 with different input
>>> vgg2 = tl.models.VGG19(x2, end_with='fc2_relu', reuse=True)
>>> # restore pre-trained VGG parameters (as they share parameters, we don’t need to restore vgg2)
>>> sess = tf.InteractiveSession()
>>> vgg1.restore_params(sess)

SqueezeNetV1

class tensorlayer.models.SqueezeNetV1(x, end_with='output', is_train=False, reuse=None)

Pre-trained SqueezeNetV1 model.

Parameters:
  • x (placeholder) – shape [None, 224, 224, 3], value range [0, 255].
  • end_with (str) – The end point of the model [input, fire2, fire3 … fire9, output]. Defaults to output, i.e. the whole model.
  • is_train (boolean) – Whether the model is used for training, i.e. whether dropout is enabled.
  • reuse (boolean) – Whether to reuse the model.
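Note the input range: SqueezeNetV1 expects raw pixel values in [0, 255], whereas the VGG and MobileNet models above expect values in [0, 1]. A plain-Python sketch of the two preprocessing paths (the helper name is illustrative, not part of TensorLayer):

```python
def preprocess(pixels, model='squeezenet'):
    """Scale uint8 pixel values to the range the chosen pretrained model expects."""
    if model == 'squeezenet':
        # SqueezeNetV1 takes raw values in [0, 255]
        return [float(p) for p in pixels]
    # VGG16 / VGG19 / MobileNetV1 take values in [0, 1]
    return [p / 255.0 for p in pixels]

preprocess([0, 128, 255], model='vgg')  # scaled into [0, 1]
```

Feeding a model pixel values in the wrong range is a common cause of silently poor predictions with these pretrained weights.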

Examples

Classify ImageNet classes (see tutorial_models_squeezenetv1.py)

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get the whole model
>>> net = tl.models.SqueezeNetV1(x)
>>> # restore pre-trained parameters
>>> sess = tf.InteractiveSession()
>>> net.restore_params(sess)
>>> # use for inference
>>> probs = tf.nn.softmax(net.outputs)

Extract features and train a classifier with 100 classes

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get model without the last layer
>>> cnn = tl.models.SqueezeNetV1(x, end_with='fire9')
>>> # add one more layer
>>> net = tl.layers.Conv2d(cnn, 100, (1, 1), (1, 1), padding='VALID', name='output')
>>> net = tl.layers.GlobalMeanPool2d(net)
>>> # initialize all parameters
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> # restore pre-trained parameters
>>> cnn.restore_params(sess)
>>> # train your own classifier (only update the last layer)
>>> train_params = tl.layers.get_variables_with_name('output')
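GlobalMeanPool2d averages each channel's spatial feature map down to a single value, which is what lets a 1×1 convolution act as the classifier head here. A plain-Python sketch of the operation for one channel (illustrative only):

```python
def global_mean_pool_2d(feature_map):
    """Average one channel's H x W spatial map down to a single scalar."""
    h = len(feature_map)
    w = len(feature_map[0])
    return sum(sum(row) for row in feature_map) / (h * w)

global_mean_pool_2d([[1.0, 2.0], [3.0, 4.0]])  # -> 2.5
```

Applied per channel, this collapses a [batch, H, W, 100] tensor to [batch, 100], i.e. one score per class.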

Reuse model

>>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get model without the last layer
>>> net1 = tl.models.SqueezeNetV1(x1, end_with='fire9')
>>> # reuse the parameters with different input
>>> net2 = tl.models.SqueezeNetV1(x2, end_with='fire9', reuse=True)
>>> # restore pre-trained parameters (as they share parameters, we don’t need to restore net2)
>>> sess = tf.InteractiveSession()
>>> net1.restore_params(sess)

MobileNetV1

class tensorlayer.models.MobileNetV1(x, end_with='out', is_train=False, reuse=None)

Pre-trained MobileNetV1 model.

Parameters:
  • x (placeholder) – shape [None, 224, 224, 3], value range [0, 1].
  • end_with (str) – The end point of the model [conv, depth1, depth2 … depth13, globalmeanpool, out]. Defaults to out, i.e. the whole model.
  • is_train (boolean) – Whether the model is used for training, i.e. whether dropout is enabled.
  • reuse (boolean) – Whether to reuse the model.

Examples

Classify ImageNet classes (see tutorial_models_mobilenetv1.py)

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get the whole model
>>> net = tl.models.MobileNetV1(x)
>>> # restore pre-trained parameters
>>> sess = tf.InteractiveSession()
>>> net.restore_params(sess)
>>> # use for inference
>>> probs = tf.nn.softmax(net.outputs)

Extract features and train a classifier with 100 classes

>>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get model without the last layer
>>> cnn = tl.models.MobileNetV1(x, end_with='reshape')
>>> # add one more layer
>>> net = tl.layers.Conv2d(cnn, 100, (1, 1), (1, 1), name='out')
>>> net = tl.layers.FlattenLayer(net, name='flatten')
>>> # initialize all parameters
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> # restore pre-trained parameters
>>> cnn.restore_params(sess)
>>> # train your own classifier (only update the last layer)
>>> train_params = tl.layers.get_variables_with_name('out')

Reuse model

>>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
>>> # get model without the last layer
>>> net1 = tl.models.MobileNetV1(x1, end_with='reshape')
>>> # reuse the parameters with different input
>>> net2 = tl.models.MobileNetV1(x2, end_with='reshape', reuse=True)
>>> # restore pre-trained parameters (as they share parameters, we don’t need to restore net2)
>>> sess = tf.InteractiveSession()
>>> net1.restore_params(sess)