
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, [arXiv](https://arxiv.org/abs/2103.14030)

PaddlePaddle training/validation code and pretrained models for Swin Transformer.

The official PyTorch implementation is available at https://github.com/microsoft/Swin-Transformer.

This implementation is developed by PaddleViT.

[Figure: Swin Transformer Model Overview]

Update

  • Update (2022-03-16): Code is refactored.
  • Update (2021-10-11): New main functions for single and multiple GPUs are updated.
  • Update (2021-10-11): Training from scratch is available.
  • Update (2021-09-27): Model FLOPs and num params are uploaded.
  • Update (2021-09-10): More ported weights are uploaded.
  • Update (2021-08-11): Code is released and ported weights are uploaded.

Model Zoo

| Model | Acc@1 | Acc@5 | #Params | FLOPs | Image Size | Crop_pct | Interpolation | Link |
|-------|-------|-------|---------|-------|------------|----------|---------------|------|
| swin_t_224 | 81.37 | 95.54 | 28.3M | 4.4G | 224 | 0.9 | bicubic | google/baidu |
| swin_s_224 | 83.21 | 96.32 | 49.6M | 8.6G | 224 | 0.9 | bicubic | google/baidu |
| swin_b_224 | 83.60 | 96.46 | 87.7M | 15.3G | 224 | 0.9 | bicubic | google/baidu |
| swin_b_384 | 84.48 | 96.89 | 87.7M | 45.5G | 384 | 1.0 | bicubic | google/baidu |
| swin_b_224_22kto1k | 85.27 | 97.56 | 87.7M | 15.3G | 224 | 0.9 | bicubic | google/baidu |
| swin_b_384_22kto1k | 86.43 | 98.07 | 87.7M | 45.5G | 384 | 1.0 | bicubic | google/baidu |
| swin_l_224_22kto1k | 86.32 | 97.90 | 196.4M | 34.3G | 224 | 0.9 | bicubic | google/baidu |
| swin_l_384_22kto1k | 87.14 | 98.23 | 196.4M | 100.9G | 384 | 1.0 | bicubic | google/baidu |

*The results are evaluated on the ImageNet2012 validation set.
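The Crop_pct and Interpolation columns describe the evaluation preprocessing: images are resized with the listed interpolation so that a center crop of Crop_pct of the resized image yields the final input. A minimal sketch using paddle.vision.transforms, assuming the standard resize-then-center-crop pipeline (the repo's own transforms may differ in detail):

```python
from paddle.vision import transforms

image_size, crop_pct = 224, 0.9
resize_size = int(image_size / crop_pct)   # 224 / 0.9 -> 248

# illustrative eval pipeline; mean/std are the common ImageNet stats (assumed)
eval_transforms = transforms.Compose([
    transforms.Resize(resize_size, interpolation='bicubic'),
    transforms.CenterCrop(image_size),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```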

Data Preparation

The ImageNet2012 dataset is used with the following file structure:

```
│imagenet/
├──train_list.txt
├──val_list.txt
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
```
  • train_list.txt: list of relative paths and labels of training images. You can download it from: google/baidu
  • val_list.txt: list of relative paths and labels of validation images. You can download it from: google/baidu
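Each line in these list files pairs an image path with its integer class label. The exact paths and separator come from the downloaded files; the layout below is an illustrative sketch only (hypothetical values):

```
train/n01440764/n01440764_10026.JPEG 0
train/n01440764/n01440764_10027.JPEG 0
val/n01440764/ILSVRC2012_val_00000293.JPEG 0
```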

Usage

To use the model with pretrained weights, download the .pdparams weight file and change the related file paths in the following Python scripts. The model config files are located in ./configs/.

For example, assuming the weight file is downloaded to ./swin_tiny_patch4_window7_224.pdparams, use the swin_tiny_patch4_window7_224 model in Python as follows:

```python
import paddle

from config import get_config
from swin import build_swin as build_model

# config files in ./configs/
config = get_config('./configs/swin_tiny_patch4_window7_224.yaml')
# build model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./swin_tiny_patch4_window7_224.pdparams')
model.set_state_dict(model_state_dict)
```
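A quick way to sanity-check the loaded weights is a forward pass on a dummy batch. This sketch assumes the model takes NCHW float tensors at the config's image size (224 here) and returns ImageNet-1k logits:

```python
import paddle

model.eval()
# dummy batch: 1 RGB image at 224x224 (NCHW)
x = paddle.randn([1, 3, 224, 224])
with paddle.no_grad():
    logits = model(x)                  # expected shape: [1, 1000]
print(paddle.argmax(logits, axis=-1))  # predicted class index
```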

Evaluation

To evaluate Swin model performance on ImageNet2012, run the following script from the command line:

```shell
sh run_eval_multi.sh
```

or

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
    -cfg='./configs/swin_tiny_patch4_window7_224.yaml' \
    -dataset='imagenet2012' \
    -batch_size=256 \
    -data_path='/dataset/imagenet' \
    -eval \
    -pretrained='./swin_tiny_patch4_window7_224.pdparams' \
    -amp
```

Note: if you have only 1 GPU, setting CUDA_VISIBLE_DEVICES=0 will run the evaluation on a single GPU, as in the sketch below.
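For instance, a single-GPU run of the same evaluation keeps every flag unchanged and only restricts the visible devices (a sketch derived from the command above, not a separately tested script):

```shell
CUDA_VISIBLE_DEVICES=0 \
python main_multi_gpu.py \
    -cfg='./configs/swin_tiny_patch4_window7_224.yaml' \
    -dataset='imagenet2012' \
    -batch_size=256 \
    -data_path='/dataset/imagenet' \
    -eval \
    -pretrained='./swin_tiny_patch4_window7_224.pdparams' \
    -amp
```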

Training

To train the Swin model on ImageNet2012, run the following script from the command line:

```shell
sh run_train_multi.sh
```

or

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
    -cfg='./configs/swin_tiny_patch4_window7_224.yaml' \
    -dataset='imagenet2012' \
    -batch_size=256 \
    -data_path='/dataset/imagenet' \
    -amp
```

Note: it is highly recommended to run the training on multiple GPUs or multi-node GPUs; see the batch-size illustration below.
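Assuming -batch_size is the per-GPU batch size (a common convention for multi-GPU launchers, though not stated here), the effective global batch size scales with the number of visible devices:

```python
num_gpus = 8          # from CUDA_VISIBLE_DEVICES=0,...,7
per_gpu_batch = 256   # from -batch_size=256
global_batch = num_gpus * per_gpu_batch   # 2048, assuming per-GPU semantics
print(global_batch)
```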

Finetuning

To finetune the Swin model on ImageNet2012, run the following script from the command line:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
    -cfg='./configs/swin_base_patch4_window12_384.yaml' \
    -dataset='imagenet2012' \
    -batch_size=16 \
    -data_path='/dataset/imagenet' \
    -pretrained='./swin_base_patch4_window7_224.pdparams' \
    -amp
```

Note: use the -pretrained argument to set the pretrained model path; you may also need to modify the hyperparameters defined in the config file (see the sketch below).
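For example, if the configs follow the yacs-style CfgNode built in config.py, finetuning-specific values such as the learning rate can be overridden in code before building the model. The key names below (TRAIN.BASE_LR, TRAIN.NUM_EPOCHS) are assumptions; check ./configs/ and config.py for the actual fields:

```python
from config import get_config

config = get_config('./configs/swin_base_patch4_window12_384.yaml')
config.defrost()                 # yacs CfgNode: allow edits (assumed API)
config.TRAIN.BASE_LR = 2e-5      # hypothetical key: smaller LR for finetuning
config.TRAIN.NUM_EPOCHS = 30     # hypothetical key: shorter schedule
config.freeze()
```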

Reference

```
@article{liu2021swin,
  title={Swin transformer: Hierarchical vision transformer using shifted windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  journal={arXiv preprint arXiv:2103.14030},
  year={2021}
}
```