examples/pytorch/README.md
This folder contains actively maintained examples of use of 🤗 Transformers using the PyTorch backend, organized by ML task.
Here is the list of all our examples, with information on whether each one is built on top of 🤗 Trainer (if not, they still work, they might just lack some features), whether it has a 🤗 Accelerate version, and whether it loads its data through 🤗 Datasets:

| Task | Example datasets | Trainer support | 🤗 Accelerate | 🤗 Datasets |
|---|---|---|---|---|
| language-modeling | WikiText-2 | ✅ | ✅ | ✅ |
| multiple-choice | SWAG | ✅ | ✅ | ✅ |
| question-answering | SQuAD | ✅ | ✅ | ✅ |
| summarization | XSum | ✅ | ✅ | ✅ |
| text-generation | - | n/a | - | - |
| text-classification | GLUE | ✅ | ✅ | ✅ |
| token-classification | CoNLL NER | ✅ | ✅ | ✅ |
| translation | WMT | ✅ | ✅ | ✅ |
| speech-recognition | TIMIT | ✅ | - | ✅ |
| multi-lingual speech-recognition | Common Voice | ✅ | - | ✅ |
| audio-classification | SUPERB KS | ✅ | - | ✅ |
| image-pretraining | ImageNet-1k | ✅ | - | ✅ |
| image-classification | CIFAR-10 | ✅ | ✅ | ✅ |
| semantic-segmentation | SCENE_PARSE_150 | ✅ | ✅ | ✅ |
| object-detection | CPPE-5 | ✅ | ✅ | ✅ |
| instance-segmentation | ADE20K sample | ✅ | ✅ | ✅ |
Most examples are equipped with a mechanism to truncate the number of dataset samples to the desired length. This is useful for debugging purposes, for example to quickly check that all stages of the program can complete before running the same setup on the full dataset, which may take hours.

For example, here is how to truncate all three splits to just 50 samples each:
```bash
python examples/pytorch/token-classification/run_ner.py \
--max_train_samples 50 \
--max_eval_samples 50 \
--max_predict_samples 50 \
[...]
```
Most example scripts support the first two command-line arguments, and some also support the third. You can quickly check whether a given example supports them by passing the `-h` option, e.g.:

```bash
python token-classification/run_ner.py -h
```
You can resume training from a previous checkpoint by passing `--resume_from_checkpoint path_to_a_specific_checkpoint` to resume training from that checkpoint folder.

Should you want to turn an example into a notebook where you'd no longer have access to the command line, 🤗 Trainer supports resuming from a checkpoint via `trainer.train(resume_from_checkpoint)`:

- If `resume_from_checkpoint` is `True`, it will look for the last checkpoint in the value of `output_dir` passed via `TrainingArguments`.
- If `resume_from_checkpoint` is a path to a specific checkpoint, it will use that saved checkpoint folder to resume the training from.
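For instance, here is a minimal sketch of both modes from inside a notebook, assuming you already have a configured `Trainer` instance named `trainer` (the checkpoint path is a hypothetical placeholder):

```python
# Resume from the last checkpoint found in TrainingArguments.output_dir:
trainer.train(resume_from_checkpoint=True)

# Or resume from a specific saved checkpoint folder (hypothetical path):
trainer.train(resume_from_checkpoint="/tmp/test-mrpc/checkpoint-500")
```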
All the example scripts support automatic upload of your final model to the Model Hub by adding a `--push_to_hub` argument. It will then create a repository with your username slash the name of the folder you are using as `output_dir`: for instance, `sgugger/test-mrpc` if your username is `sgugger` and you are working in the folder `~/tmp/test-mrpc`.

To specify a given repository name, use the `--hub_model_id` argument. You will need to specify the whole repository name (including your username), for instance `--hub_model_id sgugger/finetuned-bert-mrpc`. To upload to an organization you are a member of, just use the name of that organization instead of your username: `--hub_model_id huggingface/finetuned-bert-mrpc`.
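If you drive training from Python rather than the command line, the same options exist on `TrainingArguments`; here is a minimal sketch (the output directory and repository id reuse the placeholders from above):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="test-mrpc",                      # local folder for checkpoints
    push_to_hub=True,                            # upload the final model to the Hub
    hub_model_id="sgugger/finetuned-bert-mrpc",  # optional explicit repository name
)
```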
A few notes on this integration:
- You will need to be logged in to the Hugging Face Hub: the easiest way to achieve this is to run `hf auth login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument.
- The `output_dir` you pick will either need to be a new folder or a local clone of the distant repository you are using.

All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to the Trainer API. To launch one of them on `n` GPUs, use the following command:
```bash
torchrun \
--nproc_per_node number_of_gpus_you_have path_to_script.py \
--all_arguments_of_the_script
```
As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text
classification MNLI task using the `run_glue` script, with 8 GPUs:
```bash
torchrun \
--nproc_per_node 8 text-classification/run_glue.py \
--model_name_or_path google-bert/bert-large-uncased-whole-word-masking \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mnli_output/
```
If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision training with PyTorch 1.6.0 or later. Just add the flag `--fp16` to your command when launching one of the scripts mentioned above! Using mixed precision training usually results in a 2x speedup with the same final results (as benchmarked for text classification).
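Equivalently, if you configure training from Python, mixed precision is a single flag on `TrainingArguments`; a minimal sketch (the output directory is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    fp16=True,  # enable mixed precision training on a capable GPU
)
```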
When using TensorFlow, TPUs are supported out of the box as a `tf.distribute.Strategy`. When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed `pytorch/xla` README.
In this repo, we provide a very simple launcher script named `xla_spawn.py` that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for `torch.distributed`):
```bash
python xla_spawn.py --num_cores num_tpu_you_have \
path_to_script.py \
--all_arguments_of_the_script
```
As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text
classification MNLI task using the `run_glue` script, with 8 TPU cores (from this folder):
```bash
python xla_spawn.py --num_cores 8 \
text-classification/run_glue.py \
--model_name_or_path google-bert/bert-large-uncased-whole-word-masking \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mnli_output/
```
Most PyTorch example scripts have a version using the 🤗 Accelerate library that exposes the training loop, so it's easy for you to customize or tweak them to your needs. They all require you to install `accelerate` with the latest development version:

```bash
pip install git+https://github.com/huggingface/accelerate
```
Then you can easily launch any of the scripts by running

```bash
accelerate config
```

and replying to the questions asked. Then run

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
accelerate launch path_to_script.py --args_to_script
```
You can easily log and monitor your training runs. The following integrations are currently supported:
To use Weights & Biases, install the `wandb` package with:

```bash
pip install wandb
```

Then log in from the command line:

```bash
wandb login
```

If you are in Jupyter or Colab, you should log in with:

```python
import wandb
wandb.login()
```
To enable logging to W&B, include `"wandb"` in the `report_to` argument of your `TrainingArguments` or script. Or just pass along `--report_to all` if you have `wandb` installed.
Whenever you use the Trainer class, your losses, evaluation metrics, model topology and gradients will automatically be logged.
Advanced configuration is possible by setting environment variables:
| Environment Variable | Description |
|---|---|
| WANDB_LOG_MODEL | Log the model as an artifact at the end of training (`false` by default) |
| WANDB_WATCH | One of `gradients` (default) to log histograms of gradients, `all` to log histograms of both gradients and parameters, or `false` for no histogram logging |
| WANDB_PROJECT | Organize runs by project |
Set run names with the `run_name` argument, available in the scripts or as part of `TrainingArguments`.
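For instance, here is a minimal sketch that combines these options from Python, assuming `wandb` is installed (the project and run names are placeholders):

```python
import os
from transformers import TrainingArguments

os.environ["WANDB_PROJECT"] = "my-project"  # organize runs by project

args = TrainingArguments(
    output_dir="out",
    report_to="wandb",         # enable logging to W&B
    run_name="my-experiment",  # name of this run
)
```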
Additional configuration options are available through generic wandb environment variables.
Refer to W&B's related documentation and examples.
To use Comet ML, install the `comet_ml` Python package with:

```bash
pip install comet_ml
```

or if in a Conda environment:

```bash
conda install -c comet_ml -c anaconda -c conda-forge comet_ml
```
To use ClearML, install the `clearml` package with:

```bash
pip install clearml
```

Then create new credentials from the ClearML Server. You can use the free hosted server or self-host your own. After creating your new credentials, you can either copy the local snippet, which you can paste after running:

```bash
clearml-init
```
Or you can copy the Jupyter snippet if you are in Jupyter or Colab:

```python
%env CLEARML_WEB_HOST=https://app.clear.ml
%env CLEARML_API_HOST=https://api.clear.ml
%env CLEARML_FILES_HOST=https://files.clear.ml
%env CLEARML_API_ACCESS_KEY=***
%env CLEARML_API_SECRET_KEY=***
```
To enable logging to ClearML, include `"clearml"` in the `report_to` argument of your `TrainingArguments` or script. Or just pass along `--report_to all` if you have `clearml` already installed.
Advanced configuration is possible by setting environment variables:
| Environment Variable | Description |
|---|---|
| CLEARML_PROJECT | Name of the project in ClearML. (default: "HuggingFace Transformers") |
| CLEARML_TASK | Name of the task in ClearML. (default: "Trainer") |
Additional configuration options are available through generic `clearml` environment variables.
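As with W&B above, here is a minimal sketch of enabling ClearML from Python, assuming `clearml` is installed and your credentials are configured (the project name is a placeholder):

```python
import os
from transformers import TrainingArguments

os.environ["CLEARML_PROJECT"] = "my-project"  # name of the project in ClearML

args = TrainingArguments(
    output_dir="out",
    report_to="clearml",  # enable logging to ClearML
)
```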