English | 简体中文

PaddleViT-Classification: Visual Transformer and MLP Models for Image Classification

PaddlePaddle training/validation code and pretrained models for Image Classification.

This implementation is part of PaddleViT project.

Update

  • Update (2022-08-29): Add MobileOne.
  • Update (2022-07-15): Add RepLKNet.
  • Update (2022-05-26): Add ResT and ResTV2.
  • Update (2022-05-16): Add CoaT.
  • Update (2022-05-16): Add ConvNeXt.
  • Update (2022-04-22): Add TopFormer.
  • Update (2021-12-30): Add MobileViT model and multi scale sampler.
  • Update (2021-12-28): Add HvT model.
  • Update (2021-12-24): Add CvT model.
  • Update (2021-12-23): Add BoTNet model.
  • Update (2021-12-15): Add PoolFormer model.
  • Update (2021-12-09): Add HaloNet model.
  • Update (2021-12-08): Add PiT model.
  • Update (2021-12-08): Add XCiT model.
  • Update (2021-11-05): Update ConvMLP models.
  • Update (2021-11-04): Update ConvMixer models.
  • Update (2021-11-03): Update ViP models.
  • Update (2021-10-28): Add MobileViT model.
  • Update (2021-10-28): Add FocalTransformer model.
  • Update (2021-10-28): Add CycleMLP model.
  • Update (2021-10-19): Add BEiT model.
  • Update (2021-10-12): Update code for training from scratch in Swin Transformer.
  • Update (2021-09-28): Add AMP training.
  • Update (2021-09-27): Add more ported model weights.
  • Update (2021-09-09): Add FF-Only, RepMLP models.
  • Update (2021-08-25): Init readme uploaded.

Quick Start

The following links provide the code and detailed usage of each model architecture:

  1. ViT
  2. DeiT
  3. Swin
  4. VOLO
  5. CSwin
  6. CaiT
  7. PVTv2
  8. Shuffle Transformer
  9. T2T-ViT
  10. CrossViT
  11. Focal Transformer
  12. BEiT
  13. MobileViT
  14. ViP
  15. XCiT
  16. PiT
  17. HaloNet
  18. PoolFormer
  19. BoTNet
  20. CvT
  21. HvT
  22. TopFormer
  23. ConvNeXt
  24. CoaT
  25. MLP-Mixer
  26. ResMLP
  27. gMLP
  28. FF_Only
  29. RepMLP
  30. CycleMLP
  31. ConvMixer
  32. ConvMLP
  33. ResT
  34. ResTV2
  35. RepLKNet
  36. MobileOne

Installation

This module is tested on Python 3.6+ and PaddlePaddle 2.1.0+. Most dependencies are installed along with PaddlePaddle; you only need to install the following packages:

pip install yacs pyyaml

Then clone the GitHub repo:

git clone https://github.com/BR-IDL/PaddleViT.git
cd PaddleViT/image_classification

Note: It is recommended to install the latest version of PaddlePaddle to avoid CUDA errors during PaddleViT training. For PaddlePaddle, please refer to this link for the stable version installation and this link for the develop version installation.
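
For example, a GPU build can typically be installed from PyPI with pip; the exact version and CUDA variant depend on your environment, so follow the official installation guide referenced above:

pip install paddlepaddle-gpu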

Basic Usage

Data Preparation

The ImageNet2012 dataset is used, organized in the following file structure:

│imagenet/
├──train_list.txt
├──val_list.txt
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
  • train_list.txt: list of relative paths and labels of training images. You can download it from: google/baidu
  • val_list.txt: list of relative paths and labels of validation images. You can download it from: google/baidu
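
The exact line format of these list files is defined by each model's dataset.py; as a rough sketch (the paths and labels below are illustrative only, and the path base may differ), each line contains an image path and its integer class label separated by a space:

n01440764/n01440764_10026.JPEG 0
n01440764/n01440764_10027.JPEG 0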

Demo Example

To use a model with pretrained weights, go to the specific subfolder, download the .pdparams weight file, and change the related file paths in the following python script. The model config files are located in ./configs/.

Assuming the downloaded weight file is stored at ./vit_base_patch16_224.pdparams, the vit_base_patch16_224 model can be used in python as follows:

import paddle
from config import get_config
from visual_transformer import build_vit as build_model
# config files in ./configs/
config = get_config('./configs/vit_base_patch16_224.yaml')
# build model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./vit_base_patch16_224.pdparams')
model.set_dict(model_state_dict)
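
To run a quick prediction with the loaded model, a minimal sketch is shown below. The 224x224 input size and ImageNet mean/std used here are common defaults, and the image path is hypothetical; check the model's config for the exact preprocessing:

import paddle
import paddle.nn.functional as F
from PIL import Image
from paddle.vision import transforms

# standard ImageNet-style preprocessing (assumed; verify against the model config)
val_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open('./demo.jpg').convert('RGB')   # hypothetical image path
x = val_transforms(image).unsqueeze(0)            # add batch dim: [1, 3, 224, 224]

model.eval()
with paddle.no_grad():
    logits = model(x)
probs = F.softmax(logits, axis=-1)
print(probs.argmax(axis=-1))  # predicted ImageNet class index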

🤖 See the README file in each model folder for detailed usage.

Basic Concepts

The PaddleViT image classification module is developed in a separate folder for each model, with a similar structure across models. Each implementation is organized around 3 types of classes and 2 types of scripts:

  1. Model classes such as transformer.py, in which the core transformer model and related methods are defined.

  2. Dataset classes such as dataset.py, in which the dataset, dataloader, and data transforms are defined. We provide flexible implementations for you to customize the data loading scheme. Both single-GPU and multi-GPU loading are supported.

  3. Config classes such as config.py, in which the model and training/validation configurations are defined. Usually you do not need to change the items in the configuration; configs can be updated through python arguments or a .yaml config file (a minimal sketch is shown after this list). You can see here for details of our configuration design and usage.

  4. Main scripts such as main_single_gpu.py, in which the whole training/validation procedure is defined. The major steps of training and validation are provided, such as logging, loading/saving models, finetuning, etc. Multi-GPU training is also supported and implemented in a separate python script, main_multi_gpu.py.

  5. Run scripts such as run_eval_base_224.sh, in which the shell commands for running the python scripts with specific configs and arguments are defined.
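
As a minimal sketch of updating a config in code (configs are yacs CfgNode objects, so standard yacs calls apply; the DATA.BATCH_SIZE key below is an assumed example, see each model's config.py for the actual keys):

from config import get_config

# merge the model's .yaml file into the default config
config = get_config('./configs/vit_base_patch16_224.yaml')
# override individual items before building the model
config.defrost()                 # yacs: make the config mutable again
config.DATA.BATCH_SIZE = 32      # assumed key name
config.freeze()                  # yacs: lock the config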

Model Architectures

PaddleViT now provides the following transformer-based models:

  1. ViT (from Google), released with paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
  2. DeiT (from Facebook and Sorbonne), released with paper Training data-efficient image transformers & distillation through attention, by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
  3. Swin Transformer (from Microsoft), released with paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
  4. VOLO (from Sea AI Lab and NUS), released with paper VOLO: Vision Outlooker for Visual Recognition, by Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan.
  5. CSwin Transformer (from USTC and Microsoft), released with paper CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows, by Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, Baining Guo.
  6. CaiT (from Facebook and Sorbonne), released with paper Going deeper with Image Transformers, by Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou.
  7. PVTv2 (from NJU/HKU/NJUST/IIAI/SenseTime), released with paper PVTv2: Improved Baselines with Pyramid Vision Transformer, by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.
  8. Shuffle Transformer (from Tencent), released with paper Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer, by Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, Bin Fu.
  9. T2T-ViT (from NUS and YITU), released with paper Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet, by Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, Shuicheng Yan.
  10. CrossViT (from IBM), released with paper CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification, by Chun-Fu Chen, Quanfu Fan, Rameswar Panda.
  11. BEiT (from Microsoft Research), released with paper BEiT: BERT Pre-Training of Image Transformers, by Hangbo Bao, Li Dong, Furu Wei.
  12. Focal Transformer (from Microsoft), released with paper Focal Self-attention for Local-Global Interactions in Vision Transformers, by Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan and Jianfeng Gao.
  13. Mobile-ViT (from Apple), released with paper MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer, by Sachin Mehta, Mohammad Rastegari.
  14. ViP (from Oxford/ByteDance), released with paper Visual Parser: Representing Part-whole Hierarchies with Transformers, by Shuyang Sun, Xiaoyu Yue, Song Bai, Philip Torr.
  15. XCiT (from Facebook/Inria/Sorbonne), released with paper XCiT: Cross-Covariance Image Transformers, by Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, Hervé Jegou.
  16. PiT (from NAVER/Sogang University), released with paper Rethinking Spatial Dimensions of Vision Transformers, by Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh.
  17. HaloNet (from Google), released with paper Scaling Local Self-Attention for Parameter Efficient Visual Backbones, by Ashish Vaswani, Prajit Ramachandran, Aravind Srinivas, Niki Parmar, Blake Hechtman, Jonathon Shlens.
  18. PoolFormer (from Sea AI Lab/NUS), released with paper MetaFormer is Actually What You Need for Vision, by Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, Shuicheng Yan.
  19. BoTNet (from UC Berkeley/Google), released with paper Bottleneck Transformers for Visual Recognition, by Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani.
  20. CvT (from McGill/Microsoft), released with paper CvT: Introducing Convolutions to Vision Transformers, by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
  21. HvT (from Monash University), released with paper Scalable Vision Transformers with Hierarchical Pooling, by Zizheng Pan, Bohan Zhuang, Jing Liu, Haoyu He, Jianfei Cai.
  22. TopFormer (from HUST/Tencent/Fudan/ZJU), released with paper TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation, by Wenqiang Zhang, Zilong Huang, Guozhong Luo, Tao Chen, Xinggang Wang, Wenyu Liu, Gang Yu, Chunhua Shen.
  23. ConvNeXt (from FAIR/UC Berkeley), released with paper A ConvNet for the 2020s, by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
  24. CoaT (from UCSD), released with paper Co-Scale Conv-Attentional Image Transformers, by Weijian Xu, Yifan Xu, Tyler Chang, Zhuowen Tu.
  25. ResT (from NJU), released with paper ResT: An Efficient Transformer for Visual Recognition, by Qinglong Zhang, Yubin Yang.
  26. ResTV2 (from NJU), released with paper ResT V2: Simpler, Faster and Stronger, by Qinglong Zhang, Yubin Yang.

PaddleViT now provides the following MLP-based models:

  1. MLP-Mixer (from Google), released with paper MLP-Mixer: An all-MLP Architecture for Vision, by Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy.
  2. ResMLP (from Facebook/Sorbonne/Inria/Valeo), released with paper ResMLP: Feedforward networks for image classification with data-efficient training, by Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Gautier Izacard, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou.
  3. gMLP (from Google), released with paper Pay Attention to MLPs, by Hanxiao Liu, Zihang Dai, David R. So, Quoc V. Le.
  4. FF Only (from Oxford), released with paper Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet, by Luke Melas-Kyriazi.
  5. RepMLP (from BNRist/Tsinghua/MEGVII/Aberystwyth), released with paper RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition, by Xiaohan Ding, Chunlong Xia, Xiangyu Zhang, Xiaojie Chu, Jungong Han, Guiguang Ding.
  6. CycleMLP (from HKU/SenseTime), released with paper CycleMLP: A MLP-like Architecture for Dense Prediction, by Shoufa Chen, Enze Xie, Chongjian Ge, Ding Liang, Ping Luo.
  7. ConvMixer (from Anonymous), released with paper Patches Are All You Need?, by Anonymous.
  8. ConvMLP (from UO/UIUC/PAIR), released with paper ConvMLP: Hierarchical Convolutional MLPs for Vision, by Jiachen Li, Ali Hassani, Steven Walton, Humphrey Shi.

PaddleViT also provides the following reparameterized models:

  1. RepLKNet (from Tsinghua/MEGVII/Aberystwyth), released with paper Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs, by Xiaohan Ding, Xiangyu Zhang, Yizhuang Zhou, Jungong Han, Guiguang Ding, Jian Sun.
  2. MobileOne (from Apple), released with paper An Improved One millisecond Mobile Backbone, by Pavan Kumar Anasosalu Vasu, James Gabriel, Jeff Zhu, Oncel Tuzel, Anurag Ranjan.

Contact

If you have any questions, please create an issue on our GitHub.