<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# Examples

This folder contains actively maintained examples of using 🤗 Transformers with the PyTorch backend, organized by ML task.

## The Big Table of Tasks

Here is the list of all our examples:

- with information on whether they are built on top of `Trainer` (if not, they still work, they might just lack some features),
- whether or not they have a version using the 🤗 Accelerate library,
- whether or not they leverage the 🤗 Datasets library,
- links to Colab notebooks to walk through the scripts and run them easily.
<!-- Coming soon! - links to **Cloud deployments** to be able to deploy large-scale trainings in the Cloud with little to no setup. -->
| Task | Example datasets | Trainer support | 🤗 Accelerate | 🤗 Datasets | Colab |
|---|---|---|---|---|---|
| `language-modeling` | WikiText-2 | ✅ | ✅ | ✅ | ✅ |
| `multiple-choice` | SWAG | ✅ | ✅ | ✅ | ✅ |
| `question-answering` | SQuAD | ✅ | ✅ | ✅ | ✅ |
| `summarization` | XSum | ✅ | ✅ | ✅ | ✅ |
| `text-classification` | GLUE | ✅ | ✅ | ✅ | ✅ |
| `text-generation` | - | n/a | - | - | ✅ |
| `token-classification` | CoNLL NER | ✅ | ✅ | ✅ | ✅ |
| `translation` | WMT | ✅ | ✅ | ✅ | ✅ |
| `speech-recognition` | TIMIT | ✅ | - | ✅ | ✅ |
| `multi-lingual speech-recognition` | Common Voice | ✅ | - | ✅ | ✅ |
| `audio-classification` | SUPERB KS | ✅ | - | ✅ | ✅ |
| `image-pretraining` | ImageNet-1k | ✅ | - | ✅ | / |
| `image-classification` | CIFAR-10 | ✅ | ✅ | ✅ | ✅ |
| `semantic-segmentation` | SCENE_PARSE_150 | ✅ | ✅ | ✅ | ✅ |
| `object-detection` | CPPE-5 | ✅ | ✅ | ✅ | ✅ |
| `instance-segmentation` | ADE20K sample | ✅ | ✅ | ✅ | ✅ |

## Running quick tests

Most examples are equipped with a mechanism to truncate the number of dataset samples to the desired length. This is useful for debugging purposes, for example to quickly check that all stages of the program can complete, before running the same setup on the full dataset, which may take hours.

For example, here is how to truncate all three splits to just 50 samples each:

```bash
python examples/pytorch/token-classification/run_ner.py \
  --max_train_samples 50 \
  --max_eval_samples 50 \
  --max_predict_samples 50 \
  [...]
```

Most example scripts support the first two command-line arguments, and some also support the third. You can quickly check whether a given example supports any of them by passing the `-h` option, e.g.:

```bash
python token-classification/run_ner.py -h
```

## Resuming training

You can resume training from a previous checkpoint like this:

1. Pass `--resume_from_checkpoint path_to_a_specific_checkpoint` to resume training from that checkpoint folder.

Should you want to turn an example into a notebook where you'd no longer have access to the command line, 🤗 Trainer supports resuming from a checkpoint via `trainer.train(resume_from_checkpoint)`, as sketched after this list:

1. If `resume_from_checkpoint` is `True`, it will look for the last checkpoint in the value of `output_dir` passed via `TrainingArguments`.
2. If `resume_from_checkpoint` is a path to a specific checkpoint, it will use that saved checkpoint folder to resume the training from.
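A minimal sketch of that notebook workflow, assuming a `trainer` has already been constructed the way the example scripts do (the checkpoint path is illustrative):

```python
# Resume from the last checkpoint found in TrainingArguments.output_dir.
trainer.train(resume_from_checkpoint=True)

# Or resume from one specific checkpoint folder (illustrative path).
trainer.train(resume_from_checkpoint="/tmp/mnli_output/checkpoint-500")
```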

## Upload the trained/fine-tuned model to the Hub

All the example scripts support automatic upload of your final model to the Model Hub by adding a `--push_to_hub` argument. It will then create a repository with your username slash the name of the folder you are using as `output_dir`. For instance, `sgugger/test-mrpc` if your username is `sgugger` and you are working in the folder `~/tmp/test-mrpc`.

To specify a given repository name, use the `--hub_model_id` argument. You will need to specify the whole repository name (including your username), for instance `--hub_model_id sgugger/finetuned-bert-mrpc`. To upload to an organization you are a member of, just use the name of that organization instead of your username: `--hub_model_id huggingface/finetuned-bert-mrpc`.
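For instance, here is a sketch of combining the two flags on a GLUE fine-tune (the repository id and hyperparameters are illustrative):

```bash
python text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-base-cased \
    --task_name mrpc \
    --do_train \
    --output_dir ~/tmp/test-mrpc \
    --push_to_hub \
    --hub_model_id sgugger/finetuned-bert-mrpc
```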

A few notes on this integration:

- you will need to be logged in to the Hugging Face Hub locally for it to work; the easiest way to achieve this is to run `hf auth login` and then paste your access token when prompted. You can also pass along your authentication token with the `--hub_token` argument.
- the `output_dir` you pick will either need to be a new folder or a local clone of the distant repository you are using.

## Distributed training and mixed precision

All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to the `Trainer` API. To launch one of them on *n* GPUs, use the following command:

```bash
torchrun \
    --nproc_per_node number_of_gpu_you_have path_to_script.py \
    --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the `run_glue` script, with 8 GPUs:

```bash
torchrun \
    --nproc_per_node 8 text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/
```

If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision training with PyTorch 1.6.0 or later. Just add the flag `--fp16` to your command launching one of the scripts mentioned above!
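For instance, the MNLI command above only needs the extra flag (shown here with the placeholder style used earlier):

```bash
torchrun \
    --nproc_per_node 8 text-classification/run_glue.py \
    --fp16 \
    --all_arguments_of_the_script
```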

Using mixed precision training usually results in a ~2x speedup for training with the same final results (as shown in this table for text classification).

## Running on TPUs

When using TensorFlow, TPUs are supported out of the box as a `tf.distribute.Strategy`.

When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed `pytorch/xla` README.

In this repo, we provide a very simple launcher script named `xla_spawn.py` that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for `torch.distributed`):

```bash
python xla_spawn.py --num_cores num_tpu_you_have \
    path_to_script.py \
    --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the `run_glue` script, with 8 TPU cores (from this folder):

```bash
python xla_spawn.py --num_cores 8 \
    text-classification/run_glue.py \
    --model_name_or_path google-bert/bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/
```

## Using Accelerate

Most PyTorch example scripts have a version using the 🤗 Accelerate library that exposes the training loop so it's easy for you to customize or tweak them to your needs. They all require you to install the latest development version of `accelerate`:

```bash
pip install git+https://github.com/huggingface/accelerate
```

Then you can easily launch any of the scripts. First, run:

```bash
accelerate config
```

and reply to the questions asked. Then run:

```bash
accelerate test
```

which will check everything is ready for training. Finally, you can launch training with:

```bash
accelerate launch path_to_script.py --args_to_script
```
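For instance, here is a sketch of launching the no-Trainer GLUE script this way (the hyperparameters are illustrative; run the script with `-h` to check the exact arguments it accepts):

```bash
accelerate launch text-classification/run_glue_no_trainer.py \
    --model_name_or_path google-bert/bert-base-cased \
    --task_name mrpc \
    --max_length 128 \
    --per_device_train_batch_size 32 \
    --learning_rate 2e-5 \
    --num_train_epochs 3 \
    --output_dir /tmp/mrpc_output/
```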

## Logging & Experiment tracking

You can easily log and monitor your runs. The following integrations are currently supported:

### Weights & Biases

To use Weights & Biases, install the `wandb` package with:

```bash
pip install wandb
```

Then log in from the command line:

```bash
wandb login
```

If you are in Jupyter or Colab, you should log in with:

```python
import wandb
wandb.login()
```

To enable logging to W&B, include `"wandb"` in the `report_to` argument of your `TrainingArguments` or script. Or just pass along `--report_to all` if you have `wandb` installed.

Whenever you use the `Trainer` class, your losses, evaluation metrics, model topology and gradients will automatically be logged.
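The same can be set from code; a minimal sketch (the output directory and run name are illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="/tmp/mnli_output",  # illustrative output directory
    report_to=["wandb"],            # enable Weights & Biases logging
    run_name="bert-mnli-baseline",  # illustrative W&B run name
)
```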

Advanced configuration is possible by setting environment variables:

| Environment Variable | Value |
|---|---|
| `WANDB_LOG_MODEL` | Log the model as an artifact at the end of training (`false` by default) |
| `WANDB_WATCH` | one of `gradients` (default) to log histograms of gradients, `all` to log histograms of both gradients and parameters, or `false` for no histogram logging |
| `WANDB_PROJECT` | Organize runs by project |

Set run names with the `run_name` argument, present in scripts or as part of `TrainingArguments`.
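For example, here is a sketch of grouping runs under one project with histogram logging enabled (the project and run names are illustrative):

```bash
export WANDB_PROJECT=mnli-experiments  # illustrative project name
export WANDB_WATCH=all                 # log histograms of gradients and parameters

python text-classification/run_glue.py \
    --report_to wandb \
    --run_name bert-mnli-baseline \
    --all_arguments_of_the_script
```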

Additional configuration options are available through generic `wandb` environment variables.

Refer to the related documentation & examples.

### Comet

To use `comet_ml`, install the Python package with:

```bash
pip install comet_ml
```

or if in a Conda environment:

```bash
conda install -c comet_ml -c anaconda -c conda-forge comet_ml
```
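Logging to Comet should then work through the same `report_to` mechanism as the other integrations; a sketch (assuming `comet_ml` is installed and configured):

```bash
python text-classification/run_glue.py \
    --report_to comet_ml \
    --all_arguments_of_the_script
```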

### ClearML

To use ClearML, install the `clearml` package with:

```bash
pip install clearml
```

Then create new credentials from the ClearML Server. You can get a free hosted server or self-host your own! After creating your new credentials, you can either copy the local snippet and paste it after running:

```bash
clearml-init
```

Or you can copy the Jupyter snippet if you are in Jupyter or Colab:

```python
%env CLEARML_WEB_HOST=https://app.clear.ml
%env CLEARML_API_HOST=https://api.clear.ml
%env CLEARML_FILES_HOST=https://files.clear.ml
%env CLEARML_API_ACCESS_KEY=***
%env CLEARML_API_SECRET_KEY=***
```

To enable logging to ClearML, include `"clearml"` in the `report_to` argument of your `TrainingArguments` or script. Or just pass along `--report_to all` if you have `clearml` already installed.

Advanced configuration is possible by setting environment variables:

| Environment Variable | Value |
|---|---|
| `CLEARML_PROJECT` | Name of the project in ClearML. (default: `"HuggingFace Transformers"`) |
| `CLEARML_TASK` | Name of the task in ClearML. (default: `"Trainer"`) |
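For example, here is a sketch of overriding both names before launching a script (the values are illustrative):

```bash
export CLEARML_PROJECT="GLUE experiments"  # illustrative project name
export CLEARML_TASK="bert-mrpc-baseline"   # illustrative task name

python text-classification/run_glue.py \
    --report_to clearml \
    --all_arguments_of_the_script
```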

Additional configuration options are available through generic `clearml` environment variables.