December 2020
tl;dr: A bag of tricks to train YOLOv3.
This paper and YOLOv4 both start from YOLOv3 but adopt different methods: YOLOv4 extensively explores recent advances in backbones and data augmentation, while PP-YOLO stacks more training tricks. Their improvements are orthogonal.
The paper reads more like a cookbook/recipe; the focus is on stacking effective tricks that hardly affect inference efficiency to get better performance.
- Bag of training tricks (code sketches for several of these are collected below)
- Larger batch
- EMA of weights
- DropBlock (structured dropout) @ FPN
- IoU loss in a separate branch
- IoU Aware: an extra branch predicts the IoU with the ground truth, which then guides NMS (IoU-guided NMS)
- Grid Sensitive: introduced by YOLOv4. It scales the sigmoid output so the predicted center can reach exactly 0 or 1, i.e., land on the grid boundary.
- CoordConv
- Matrix-NMS proposed by SOLOv2
- SPP: spatial pyramid pooling, which efficiently boosts the receptive field.
- Summary of technical details
- See this review
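
A minimal sketch of the weight EMA trick: a shadow copy of the model is updated after every optimizer step and used for evaluation. The decay value 0.9998 is in the ballpark reported for PP-YOLO but should be treated as illustrative, and the class/method names are my own.

```python
import copy
import torch

class ModelEMA:
    """Keep an exponential moving average of model weights (sketch)."""

    def __init__(self, model, decay=0.9998):
        self.ema = copy.deepcopy(model).eval()   # shadow copy used for evaluation
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # W_ema <- decay * W_ema + (1 - decay) * W, called after each optimizer step
        msd = model.state_dict()
        for k, v in self.ema.state_dict().items():
            if v.dtype.is_floating_point:
                v.mul_(self.decay).add_(msd[k].detach(), alpha=1.0 - self.decay)
```

Call `update(model)` after every optimizer step and run validation/inference with the `ema` copy.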
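
A rough sketch of DropBlock, the structured dropout that PP-YOLO applies only in the FPN: instead of zeroing independent activations, contiguous block_size x block_size regions are dropped. The hyperparameters and the seeding over the full feature map are simplifications, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def drop_block(x, drop_prob=0.1, block_size=3, training=True):
    """Zero out contiguous block_size x block_size regions of the feature map.

    block_size is assumed odd so the padding keeps the spatial size unchanged.
    """
    if not training or drop_prob == 0.0:
        return x
    n, c, h, w = x.shape
    # convert the desired drop probability into a per-position seed rate
    gamma = drop_prob * h * w / (block_size ** 2) / ((h - block_size + 1) * (w - block_size + 1))
    seeds = (torch.rand(n, c, h, w, device=x.device) < gamma).float()
    # expand each seed into a block via max pooling, then invert to get the keep-mask
    mask = 1.0 - F.max_pool2d(seeds, kernel_size=block_size, stride=1, padding=block_size // 2)
    # renormalize so the expected activation magnitude is unchanged
    return x * mask * mask.numel() / mask.sum().clamp(min=1.0)
```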
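
A hedged sketch of how the IoU Aware branch can be folded into the score used for NMS ranking: the branch is trained to predict each box's IoU with its ground truth, and at inference that prediction down-weights poorly localized boxes. The fusion exponent alpha = 0.5 and the function name are assumptions for illustration, not PP-YOLO's exact formulation.

```python
import numpy as np

def iou_aware_score(cls_prob, obj_prob, pred_iou, alpha=0.5):
    """Fuse predicted IoU into the ranking score fed to NMS (sketch)."""
    localization_conf = pred_iou ** alpha                     # localization quality
    classification_conf = (obj_prob * cls_prob) ** (1.0 - alpha)
    return classification_conf * localization_conf            # score used for ranking
```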
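
A small sketch of Grid Sensitive decoding for the box center. Plain YOLOv3 uses x = (grid_x + sigmoid(tx)) * stride, so hitting a grid boundary requires sigmoid(tx) to be exactly 0 or 1, i.e. tx going to plus/minus infinity. Scaling the sigmoid by alpha > 1 and re-centering lets the prediction reach (and slightly cross) the cell edges. alpha = 1.05 is the value commonly quoted for PP-YOLO; treat the exact constant as illustrative.

```python
import numpy as np

def decode_center(tx, grid_x, stride, alpha=1.05):
    """Grid Sensitive decoding of the box center coordinate (sketch)."""
    sig = 1.0 / (1.0 + np.exp(-tx))
    # alpha > 1 stretches the sigmoid range so 0 and 1 become reachable
    return (grid_x + alpha * sig - (alpha - 1.0) / 2.0) * stride
```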
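
A minimal CoordConv sketch: two normalized coordinate channels are concatenated to the input before an ordinary convolution so the layer can condition on position. The module name and defaults are my own.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv layer with normalized x/y coordinate channels appended (sketch)."""

    def __init__(self, in_ch, out_ch, kernel_size=1, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, kernel_size, **kwargs)

    def forward(self, x):
        n, _, h, w = x.shape
        # coordinate grids in [-1, 1], broadcast to the batch
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```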
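
A sketch of Matrix NMS adapted from the SOLOv2 formulation (which operates on masks) to boxes, assuming a single class: instead of hard suppression, every score is decayed in parallel from the IoU matrix, with high overlap against a higher-scoring box producing a strong decay. The sigma value and kernel choice are illustrative.

```python
import torch

def matrix_nms(boxes, scores, kernel="gaussian", sigma=2.0):
    """Rescore boxes with Matrix NMS; filter by a score threshold afterwards."""
    order = scores.argsort(descending=True)
    boxes, scores = boxes[order], scores[order]
    n = boxes.size(0)
    # pairwise IoU; keep only overlaps with higher-scoring boxes (upper triangle)
    x1 = torch.max(boxes[:, None, 0], boxes[None, :, 0])
    y1 = torch.max(boxes[:, None, 1], boxes[None, :, 1])
    x2 = torch.min(boxes[:, None, 2], boxes[None, :, 2])
    y2 = torch.min(boxes[:, None, 3], boxes[None, :, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = (inter / (area[:, None] + area[None, :] - inter)).triu(diagonal=1)
    # compensation term: how much each suppressor box is itself overlapped
    comp = iou.max(dim=0).values[:, None].expand(n, n)
    if kernel == "gaussian":
        decay = torch.exp(-sigma * (iou ** 2 - comp ** 2)).min(dim=0).values
    else:  # linear kernel; (1 - comp) can get tiny, fine for a sketch
        decay = ((1 - iou) / (1 - comp)).min(dim=0).values
    return boxes, scores * decay
```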
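
A minimal SPP block: parallel stride-1 max-pools with different kernel sizes, concatenated with the input, which enlarges the receptive field with no extra parameters. The kernel sizes (5, 9, 13) follow common YOLO usage and are an assumption here; a 1x1 conv usually follows to fuse and reduce channels.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial pyramid pooling block (sketch)."""

    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # output channels = in_channels * (1 + len(kernel_sizes))
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)
```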