This is the code for our CVPR 2020 oral paper, Towards Efficient Model Compression via Learned Global Ranking. This work improves upon our pre-print, Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks, in both analysis and empirical results. A 4-page abridged version of the pre-print was accepted as a contributed talk at the NeurIPS'18 MLPCD2 workshop.
- PyTorch 1.0.1
- Python 3.5+
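For reference, a minimal environment can be set up with pip; this is only a sketch, and the exact PyTorch 1.0.1 wheel to install depends on your platform and CUDA version:

pip install torch==1.0.1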
The scripts for reproducing the results in Table 1 and Figure 2 are under scripts/. Each script contains several commands, one per experiment; an example invocation is sketched below.
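For instance, launching one set of experiments looks like the following; the filename resnet56_cifar10.sh here is hypothetical, so check scripts/ for the actual script names:

bash scripts/resnet56_cifar10.sh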
P.S. For MorphNet, we search for the trade-off lambda instead of using a large lambda and then growing, because we find that the growing phase leads to worse results; this is also observed by Wang et al. in their CVPR work, Growing a Brain: Fine-tuning by Increasing Model Capacity.
We provide a script to extract the progress (in architectures explored) made while learning the affine transformation. For any LeGR script you run, passing the output log generated during the affine-transformation search to the following script will produce a visualization of the search progress.
For example:
python utils/plot_search_progress.py log/resnet56_cifar10_flops0.47_transformations_1_output.log resnet56_cifar10_flops0.47_transformations_1.mp4
The video will be generated at ./resnet56_cifar10_flops0.47_transformations_1.mp4
If you find this repository helpful, please consider citing our work:
@inproceedings{chin2020legr,
  title={Towards Efficient Model Compression via Learned Global Ranking},
  author={Chin, Ting-Wu and Ding, Ruizhou and Zhang, Cha and Marculescu, Diana},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

@article{chin2018lcp,
  title={Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks},
  author={Chin, Ting-Wu and Zhang, Cha and Marculescu, Diana},
  journal={arXiv preprint arXiv:1810.00518},
  year={2018}
}