CLI - Command Line Interface

The tensorlayer.cli module provides a command-line tool for some common tasks.

tl train

(Alpha release - usage might change later)

The tensorlayer.cli.train module provides the tl train subcommand. It helps the user bootstrap a TensorFlow/TensorLayer program for distributed training using multiple GPUs or CPUs on a single computer.

You first need to set the CUDA_VISIBLE_DEVICES environment variable to tell tl train which GPUs are available. If CUDA_VISIBLE_DEVICES is not set, tl train will try its best to discover all available GPUs.

In distributed training, each TensorFlow program needs a TF_CONFIG environment variable that describes the cluster. It also needs a master daemon to monitor all trainers. tl train manages both of these automatically.
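For reference, TF_CONFIG is a JSON value that lists the cluster's parameter servers and workers and identifies the task run by the current process. The snippet below is only a minimal sketch of such a value for one worker; the actual addresses and task layout that tl train generates for your cluster will differ.

# illustrative TF_CONFIG for the first of two workers (addresses and layout are assumptions)
TF_CONFIG='{"cluster": {"ps": ["localhost:2222"], "worker": ["localhost:2223", "localhost:2224"]}, "task": {"type": "worker", "index": 0}}'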

Usage

tl train [-h] [-p NUM_PSS] [-c CPU_TRAINERS] <file> [args [args …]]

# example of using GPU 0 and 1 for training mnist
CUDA_VISIBLE_DEVICES="0,1"
tl train example/tutorial_mnist_distributed.py

# example of using CPU trainers for inception v3
tl train -c 16 example/tutorial_imagenet_inceptionV3_distributed.py

# example of using GPU trainers for inception v3 with customized arguments
# as CUDA_VISIBLE_DEVICES is not set, tl train will try to discover all available GPUs
tl train example/tutorial_imagenet_inceptionV3_distributed.py -- --batch_size 16

Command-line Arguments

  • file: The path to the Python program to train.

  • NUM_PSS: The number of parameter servers (set with -p).

  • CPU_TRAINERS: The number of CPU trainers (set with -c).

    It is recommended that NUM_PSS + CPU_TRAINERS does not exceed the number of CPU cores on the machine (see the example after this list).

  • args: Any parameter after -- will be passed to the Python program.
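For example, assuming a machine with 16 CPU cores (the figure is only an assumption; check your own count with nproc), the following command keeps NUM_PSS + CPU_TRAINERS within the core count:

# hypothetical sizing for a 16-core machine: 2 parameter servers + 14 CPU trainers
nproc   # prints the number of available CPU cores, e.g. 16
tl train -p 2 -c 14 example/tutorial_imagenet_inceptionV3_distributed.py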

Notes

A parallel training program needs one or more parameter servers so that the parallel trainers can exchange intermediate gradients. The best number of parameter servers is often proportional to the size of your model as well as the number of CPUs available. You can control the number of parameter servers with the -p parameter.
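For instance, a larger model might call for more parameter servers. The command below is an illustrative sketch that requests four of them, with the trainers running on whatever GPUs tl train discovers; the right number for your setup depends on your model size and hardware.

# illustrative only: 4 parameter servers for a larger model
tl train -p 4 example/tutorial_imagenet_inceptionV3_distributed.py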

If you have a single computer with many CPU cores, you can use the -c parameter to enable CPU-only parallel training. We do not support mixed GPU-CPU training because GPUs and CPUs run at different speeds; using them together would introduce stragglers that slow down training.