# RetinaFace
RetinaFace is a practical single-stage SOTA face detector, initially introduced in an arXiv technical report and later accepted to CVPR 2020.
Download our annotations (face bounding boxes & five facial landmarks) from baidu cloud or gdrive
Download the WIDERFACE dataset.
Organise the dataset directory under insightface/RetinaFace/ as follows:
```
data/retinaface/
  train/
    images/
    label.txt
  val/
    images/
    label.txt
  test/
    images/
    label.txt
```
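The layout above can be sanity-checked before training. The sketch below is an illustration only (the training code does not ship such a checker); it builds the expected tree in a temporary directory and verifies that every split contains `images/` and `label.txt`:

```python
import os
import tempfile

# Expected WIDERFACE layout under data/retinaface/ (paths from this README).
EXPECTED = [
    "train/images", "train/label.txt",
    "val/images", "val/label.txt",
    "test/images", "test/label.txt",
]

def check_layout(root):
    """Return the list of expected entries missing under `root`."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]

# Demo: build the layout in a temporary directory and verify it.
with tempfile.TemporaryDirectory() as tmp:
    for split in ("train", "val", "test"):
        os.makedirs(os.path.join(tmp, split, "images"))
        open(os.path.join(tmp, split, "label.txt"), "w").close()
    missing = check_layout(tmp)

print(missing)  # an empty list means the layout is complete
```

Running `check_layout("data/retinaface")` against your real dataset directory reports anything still missing.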
Run `make` to build the C++ tools. Please check `train.py` for training.
Copy `rcnn/sample_config.py` to `rcnn/config.py`.
Download the ImageNet-pretrained models and put them into `model/` (these models are not for detection testing/inference, but for training and parameter initialization).
ImageNet ResNet50 (baidu cloud and googledrive).
ImageNet ResNet152 (baidu cloud and googledrive).
Start training with:

```
CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --prefix ./model/retina --network resnet
```
Before training, you can check the resnet network configuration (e.g. pretrained model path, anchor settings, learning-rate policy, etc.) in `rcnn/config.py`.
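To illustrate the kind of settings involved, here is a minimal sketch. The key names below are hypothetical stand-ins, not the real ones; consult `rcnn/config.py` for the actual fields. The step-decay helper shows how a milestone-based learning-rate policy behaves:

```python
# Illustrative only: hypothetical key names for the settings described
# above (pretrained model path, anchors, LR policy); the real names
# live in rcnn/config.py.
config = {
    "network": "resnet",                # or "mnet" for lightweight models
    "pretrained": "model/resnet-50",    # ImageNet-pretrained init weights
    "anchor_scales": [8, 16, 32],       # example anchor setting
    "lr": 0.01,                         # base learning rate
    "lr_steps": [55000, 68000, 80000],  # iterations at which LR decays
}

def lr_at(step, base_lr, milestones, factor=0.1):
    """Step learning-rate policy: multiply by `factor` at each milestone passed."""
    passed = sum(1 for m in milestones if step >= m)
    return base_lr * factor ** passed

# After the first milestone the rate has decayed by one factor of 0.1.
print(lr_at(60000, config["lr"], config["lr_steps"]))
```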
We have two predefined network settings named `resnet` (for medium and large models) and `mnet` (for lightweight models).
Please check `test.py` for testing.
Pretrained Model: RetinaFace-R50 (baidu cloud or googledrive) is a medium-size model with a ResNet50 backbone. It can output face bounding boxes and five facial landmarks in a single forward pass.
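Since the model returns boxes and landmarks together, a typical post-processing step is to filter both by detection confidence. The shapes below are assumptions (a common convention: `(N, 5)` rows of `[x1, y1, x2, y2, score]` and `(N, 5, 2)` landmark points); verify them against `test.py`:

```python
import numpy as np

# Assumed output shapes, for illustration (check test.py for the real ones):
# faces:     (N, 5)    rows of [x1, y1, x2, y2, score]
# landmarks: (N, 5, 2) five (x, y) points per face
faces = np.array([
    [10.0, 10.0, 50.0, 60.0, 0.99],
    [80.0, 20.0, 120.0, 70.0, 0.30],  # low-confidence detection
])
landmarks = np.random.rand(2, 5, 2) * 100

def filter_detections(faces, landmarks, threshold=0.8):
    """Keep only detections whose confidence exceeds `threshold`."""
    keep = faces[:, 4] >= threshold  # boolean mask over the score column
    return faces[keep], landmarks[keep]

kept_faces, kept_lms = filter_detections(faces, landmarks)
print(len(kept_faces))  # 1 face survives the 0.8 threshold
```

Applying the same boolean mask to both arrays keeps each box paired with its own five landmarks.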
WiderFace validation mAP: Easy 96.5, Medium 95.6, Hard 90.4.
To avoid conflict with the WiderFace Challenge (ICCV 2019), we postpone the release of our best model.
yangfly: RetinaFace-MobileNet0.25 (baidu cloud, extraction code: nzof). WiderFace validation mAP: Hard 82.5. (Model size: 1.68 MB.)
clancylian: C++ version
RetinaFace in modelscope
```bibtex
@inproceedings{Deng2020CVPR,
  title     = {RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild},
  author    = {Deng, Jiankang and Guo, Jia and Ververas, Evangelos and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle = {CVPR},
  year      = {2020}
}
```