configs/yolo/README.md

YOLOv3

YOLOv3: An Incremental Improvement

<!-- [ALGORITHM] -->

Abstract

We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster.


Results and Models

| Backbone | Scale | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
| :------: | :---: | :-----: | :------: | :------------: | :----: | :----: | :------: |
| DarkNet-53 | 320 | 273e | 2.7 | 63.9 | 27.9 | config | model \| log |
| DarkNet-53 | 416 | 273e | 3.8 | 61.2 | 30.9 | config | model \| log |
| DarkNet-53 | 608 | 273e | 7.4 | 48.1 | 33.7 | config | model \| log |

Mixed Precision Training

We also train YOLOv3 with mixed precision training.
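In MMDetection, mixed precision training is usually switched on by inheriting the baseline config and adding an `fp16` setting. The sketch below is illustrative, not the exact config shipped with this README: the `_base_` filename and the loss-scale value are assumptions, and the precise keys can differ across MMDetection versions.

```python
# Hypothetical MMDetection-style config fragment enabling mixed precision.
# The base config path below is an assumed name for the DarkNet-53 608
# baseline; check configs/yolo/ for the actual filename.
_base_ = './yolov3_d53_mstrain-608_273e_coco.py'

# fp16 settings: a fixed loss scale multiplies the loss before backward()
# so small gradients do not underflow in float16. MMDetection also accepts
# loss_scale='dynamic' to adjust the scale automatically during training.
fp16 = dict(loss_scale=512.0)
```

Training then proceeds with the usual `tools/train.py` entry point; only the config changes.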

| Backbone | Scale | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
| :------: | :---: | :-----: | :------: | :------------: | :----: | :----: | :------: |
| DarkNet-53 | 608 | 273e | 4.7 | 48.1 | 33.8 | config | model \| log |

Lightweight models

| Backbone | Scale | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
| :------: | :---: | :-----: | :------: | :------------: | :----: | :----: | :------: |
| MobileNetV2 | 416 | 300e | 5.3 | | 23.9 | config | model \| log |
| MobileNetV2 | 320 | 300e | 3.2 | | 22.2 | config | model \| log |

Notice: We reduce the number of channels in both the neck and the head to 96. This lowers the FLOPs and parameter count, making these models more suitable for edge devices.
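The channel reduction above can be sketched as an MMDetection-style model config. This is a hedged fragment, not the shipped config: the type names follow MMDetection's `YOLOV3Neck`/`YOLOV3Head` conventions, and the backbone feature channels listed for MobileNetV2 are assumptions that may differ by version.

```python
# Hypothetical config fragment narrowing the YOLOv3 neck and head to 96
# channels, as described in the notice above. Only the 96-channel widths
# reflect the README; other values are illustrative.
model = dict(
    neck=dict(
        type='YOLOV3Neck',
        num_scales=3,
        # Assumed MobileNetV2 feature-map channels at the three scales.
        in_channels=[320, 96, 32],
        # Reduced from the DarkNet-53 defaults of [512, 256, 128].
        out_channels=[96, 96, 96]),
    bbox_head=dict(
        type='YOLOV3Head',
        num_classes=80,
        # Head input/output widths must match the narrower neck outputs.
        in_channels=[96, 96, 96],
        out_channels=[96, 96, 96]))
```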

Credit

This implementation originates from the project of Haoyu Wu (@wuhy08) at Western Digital.

Citation

```latex
@misc{redmon2018yolov3,
    title={YOLOv3: An Incremental Improvement},
    author={Joseph Redmon and Ali Farhadi},
    year={2018},
    eprint={1804.02767},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```