Distributed Arcface Training in Pytorch

The "arcface_torch" repository is the official implementation of the ArcFace algorithm. It supports distributed and sparse training, ships with multiple distributed training examples, and includes several memory-saving techniques such as mixed precision training and gradient checkpointing. It also supports training ViT models and datasets including WebFace42M and Glint360K, two of the largest open-source face recognition datasets. Additionally, the repository comes with a built-in tool for converting models to ONNX format, making it easy to submit to the MFR evaluation system.

Requirements

To take advantage of the latest PyTorch features, this repository has been upgraded to PyTorch 1.12.0.

How to Train

To train a model, execute the train_v2.py script with the path to the configuration files. The sample commands provided below demonstrate the process of conducting distributed training.

1. To run on one GPU:

```shell
python train_v2.py configs/ms1mv3_r50_onegpu
```

Note:
It is not recommended to use a single GPU for training, as this may result in longer training times and suboptimal performance. For best results, we suggest using multiple GPUs or a GPU cluster.

2. To run on a machine with 8 GPUs:

```shell
torchrun --nproc_per_node=8 train_v2.py configs/ms1mv3_r50
```

3. To run on 2 machines with 8 GPUs each:

Node 0:

```shell
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr="ip1" --master_port=12581 train_v2.py configs/wf42m_pfc02_16gpus_r100
```

Node 1:

```shell
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr="ip1" --master_port=12581 train_v2.py configs/wf42m_pfc02_16gpus_r100
```

4. To run ViT-B on a machine with a total batch size of 24k:

```shell
torchrun --nproc_per_node=8 train_v2.py configs/wf42m_pfc03_40epoch_8gpu_vit_b
```

Download Datasets or Prepare Datasets

Note: If you want to use DALI for data reading, first use the script `scripts/shuffle_rec.py` to shuffle the InsightFace-style rec file.
Example:

```shell
python scripts/shuffle_rec.py ms1m-retinaface-t1
```

You will get a "shuffled_ms1m-retinaface-t1" folder in which the samples in the "train.rec" file are shuffled.
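Conceptually, shuffling the rec file means writing its records back in a random order. The sketch below shows only that permutation step; the function name is illustrative and is not the actual API of `scripts/shuffle_rec.py`, which operates on InsightFace `.rec`/`.idx` files.

```python
import random

def shuffled_record_order(num_records, seed=0):
    """Illustrative sketch of the permutation step a rec-shuffling
    script performs: produce a random order in which to rewrite the
    records (not the real scripts/shuffle_rec.py API)."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    order = list(range(num_records))   # original record indices
    rng.shuffle(order)                 # in-place random permutation
    return order
```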

Model Zoo

  • The models are available for non-commercial research purposes only.
  • All models can be found here.
  • Baidu Yun Pan: e8pw
  • OneDrive

Performance on IJB-C and ICCV2021-MFR

The ICCV2021-MFR test set consists of non-celebrities, so we can ensure it has very little overlap with publicly available face recognition training sets such as MS1M and CASIA, which were mostly collected from online celebrities. As a result, we can fairly evaluate the performance of different algorithms.

For the ICCV2021-MFR-ALL set, TAR is measured on the all-to-all 1:1 protocol with FAR less than 1e-6. The globalised multi-racial test set contains 242,143 identities and 1,624,305 images.
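To make the metric concrete, here is a hedged sketch of computing TAR at a fixed FAR from genuine (matching-pair) and impostor (non-matching-pair) similarity scores. The helper name and threshold convention are assumptions for illustration, not the MFR evaluation code:

```python
import numpy as np

def tar_at_far(genuine, impostor, far_target=1e-6):
    """Illustrative TAR@FAR for a 1:1 verification protocol (not the
    MFR evaluation code). Picks a threshold at which the fraction of
    accepted impostor pairs stays <= far_target, then reports the
    fraction of genuine pairs accepted at that threshold."""
    impostor = np.sort(np.asarray(impostor))
    # keep at most far_target of the impostor scores at/above threshold
    k = int(np.ceil(len(impostor) * (1.0 - far_target)))
    threshold = impostor[min(k, len(impostor) - 1)]
    tar = float(np.mean(np.asarray(genuine) >= threshold))
    return tar, float(threshold)
```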

1. Training on Single-Host GPU

| Datasets | Backbone | MFR-ALL | IJB-C(1E-4) | IJB-C(1E-5) | log |
|---|---|---|---|---|---|
| MS1MV2 | mobilefacenet-0.45G | 62.07 | 93.61 | 90.28 | click me |
| MS1MV2 | r50 | 75.13 | 95.97 | 94.07 | click me |
| MS1MV2 | r100 | 78.12 | 96.37 | 94.27 | click me |
| MS1MV3 | mobilefacenet-0.45G | 63.78 | 94.23 | 91.33 | click me |
| MS1MV3 | r50 | 79.14 | 96.37 | 94.47 | click me |
| MS1MV3 | r100 | 81.97 | 96.85 | 95.02 | click me |
| Glint360K | mobilefacenet-0.45G | 70.18 | 95.04 | 92.62 | click me |
| Glint360K | r50 | 86.34 | 97.16 | 95.81 | click me |
| Glint360K | r100 | 89.52 | 97.55 | 96.38 | click me |
| WF4M | r100 | 89.87 | 97.19 | 95.48 | click me |
| WF12M-PFC-0.2 | r100 | 94.75 | 97.60 | 95.90 | click me |
| WF12M-PFC-0.3 | r100 | 94.71 | 97.64 | 96.01 | click me |
| WF12M | r100 | 94.69 | 97.59 | 95.97 | click me |
| WF42M-PFC-0.2 | r100 | 96.27 | 97.70 | 96.31 | click me |
| WF42M-PFC-0.2 | ViT-T-1.5G | 92.04 | 97.27 | 95.68 | click me |
| WF42M-PFC-0.3 | ViT-B-11G | 97.16 | 97.91 | 97.05 | click me |

2. Training on Multi-Host GPU

| Datasets | Backbone(bs*gpus) | MFR-ALL | IJB-C(1E-4) | IJB-C(1E-5) | Throughput | log |
|---|---|---|---|---|---|---|
| WF42M-PFC-0.2 | r50(512*8) | 93.83 | 97.53 | 96.16 | ~5900 | click me |
| WF42M-PFC-0.2 | r50(512*16) | 93.96 | 97.46 | 96.12 | ~11000 | click me |
| WF42M-PFC-0.2 | r50(128*32) | 94.04 | 97.48 | 95.94 | ~17000 | click me |
| WF42M-PFC-0.2 | r100(128*16) | 96.28 | 97.80 | 96.57 | ~5200 | click me |
| WF42M-PFC-0.2 | r100(256*16) | 96.69 | 97.85 | 96.63 | ~5200 | click me |
| WF42M-PFC-0.0018 | r100(512*32) | 93.08 | 97.51 | 95.88 | ~10000 | click me |
| WF42M-PFC-0.2 | r100(128*32) | 96.57 | 97.83 | 96.50 | ~9800 | click me |

r100(128*32) means the backbone is r100, the per-GPU batch size is 128, and the number of GPUs is 32.
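For illustration, the notation maps onto a training configuration roughly as follows. The field names mirror the style of the files under configs/, but the exact fragment is an assumption for this sketch, not a verified config file:

```python
from types import SimpleNamespace

# Hypothetical config fragment in the style of configs/ (illustrative
# values for the r100(128*32) row above; not a verified config file).
config = SimpleNamespace(
    network="r100",   # backbone: the "r100" in r100(128*32)
    batch_size=128,   # per-GPU batch size: the "128"
    sample_rate=0.2,  # Partial FC negative class-center sample rate
    fp16=True,        # mixed precision training
)

world_size = 32  # the "32": total number of GPUs across all nodes
global_batch_size = config.batch_size * world_size  # 128 * 32 = 4096
```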

3. ViT For Face Recognition

| Datasets | Backbone(bs) | FLOPs | MFR-ALL | IJB-C(1E-4) | IJB-C(1E-5) | Throughput | log |
|---|---|---|---|---|---|---|---|
| WF42M-PFC-0.3 | r18(128*32) | 2.6 | 79.13 | 95.77 | 93.36 | - | click me |
| WF42M-PFC-0.3 | r50(128*32) | 6.3 | 94.03 | 97.48 | 95.94 | - | click me |
| WF42M-PFC-0.3 | r100(128*32) | 12.1 | 96.69 | 97.82 | 96.45 | - | click me |
| WF42M-PFC-0.3 | r200(128*32) | 23.5 | 97.70 | 97.97 | 96.93 | - | click me |
| WF42M-PFC-0.3 | VIT-T(384*64) | 1.5 | 92.24 | 97.31 | 95.97 | ~35000 | click me |
| WF42M-PFC-0.3 | VIT-S(384*64) | 5.7 | 95.87 | 97.73 | 96.57 | ~25000 | click me |
| WF42M-PFC-0.3 | VIT-B(384*64) | 11.4 | 97.42 | 97.90 | 97.04 | ~13800 | click me |
| WF42M-PFC-0.3 | VIT-L(384*64) | 25.3 | 97.85 | 98.00 | 97.23 | ~9406 | click me |

WF42M means WebFace42M; PFC-0.3 means the negative class-center sample rate is 0.3.

4. Noisy Datasets

| Datasets | Backbone | MFR-ALL | IJB-C(1E-4) | IJB-C(1E-5) | log |
|---|---|---|---|---|---|
| WF12M-Flip(40%) | r50 | 43.87 | 88.35 | 80.78 | click me |
| WF12M-Flip(40%)-PFC-0.1* | r50 | 80.20 | 96.11 | 93.79 | click me |
| WF12M-Conflict | r50 | 79.93 | 95.30 | 91.56 | click me |
| WF12M-Conflict-PFC-0.3* | r50 | 91.68 | 97.28 | 95.75 | click me |

WF12M means WebFace12M; PFC-0.1* denotes additional abnormal inter-class filtering.

Speed Benchmark


Arcface-Torch is an efficient tool for training face recognition models on large-scale datasets. When the number of classes in the training set exceeds one million, the Partial FC sampling strategy maintains the same accuracy while delivering several times faster training and lower GPU memory utilization. Partial FC is a sparse variant of the model-parallel architecture for large-scale face recognition: it uses a sparse softmax that dynamically samples a subset of class centers for each training batch. During each iteration, only a sparse portion of the parameters is updated, which significantly reduces GPU memory requirements and computational demands. With the Partial FC approach, it is possible to train on datasets with up to 29 million identities, the largest to date. Furthermore, Partial FC supports multi-machine distributed training and mixed precision training.
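The sampling idea can be sketched in a few lines of NumPy: the centers of identities that appear in the current batch are always kept, and the rest of the sampled set is drawn at random from the remaining classes. This is an illustration of the strategy, not the repository's actual partial FC implementation:

```python
import numpy as np

def sample_class_centers(labels, num_classes, sample_rate, rng):
    """Illustrative sketch of Partial FC's class-center sampling
    (not the repository's implementation). Centers of identities
    present in the batch ("positives") are always kept; the sampled
    set is filled up to int(num_classes * sample_rate) with randomly
    chosen negative centers, so only that fraction of the softmax
    weight matrix is touched in the iteration."""
    num_sample = int(num_classes * sample_rate)
    positive = np.unique(labels)                           # classes in the batch
    negative_pool = np.setdiff1d(np.arange(num_classes), positive)
    num_negative = max(num_sample - len(positive), 0)
    negative = rng.choice(negative_pool, size=num_negative, replace=False)
    return np.concatenate([positive, negative])
```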

For more details, see docs/speed_benchmark.md.

  1. Training Speed of Various Parallel Techniques (Samples per Second) on a Tesla V100 32GB x 8 System (Higher is Better)

"-" means training failed because of GPU memory limitations.

| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
|---|---|---|---|
| 125000 | 4681 | 4824 | 5004 |
| 1400000 | 1672 | 3043 | 4738 |
| 5500000 | - | 1389 | 3975 |
| 8000000 | - | - | 3565 |
| 16000000 | - | - | 2679 |
| 29000000 | - | - | 1855 |
  2. GPU Memory Utilization of Various Parallel Techniques (MB per GPU) on a Tesla V100 32GB x 8 System (Lower is Better)

| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
|---|---|---|---|
| 125000 | 7358 | 5306 | 4868 |
| 1400000 | 32252 | 11178 | 6056 |
| 5500000 | - | 32188 | 9854 |
| 8000000 | - | - | 12310 |
| 16000000 | - | - | 19950 |
| 29000000 | - | - | 32324 |

Citations

```
@inproceedings{deng2019arcface,
  title={Arcface: Additive angular margin loss for deep face recognition},
  author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},
  booktitle={CVPR},
  year={2019}
}
@inproceedings{an2022partialfc,
  author={An, Xiang and Deng, Jiankang and Guo, Jia and Feng, Ziyong and Zhu, XuHan and Yang, Jing and Liu, Tongliang},
  title={Killing Two Birds With One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC},
  booktitle={CVPR},
  year={2022}
}
@inproceedings{zhu2021webface260m,
  title={Webface260m: A benchmark unveiling the power of million-scale deep face recognition},
  author={Zhu, Zheng and Huang, Guan and Deng, Jiankang and Ye, Yun and Huang, Junjie and Chen, Xinze and Zhu, Jiagang and Yang, Tian and Lu, Jiwen and Du, Dalong and Zhou, Jie},
  booktitle={CVPR},
  year={2021}
}
```
