DeepLabv3 and DeepLabv3+ with pretrained models for Pascal VOC and Cityscapes.
Specify the model architecture with '--model ARCH_NAME' and set the output stride using '--output_stride OUTPUT_STRIDE'.
| DeepLabV3 | DeepLabV3+ |
| --- | --- |
| deeplabv3_resnet50 | deeplabv3plus_resnet50 |
| deeplabv3_resnet101 | deeplabv3plus_resnet101 |
| deeplabv3_mobilenet | deeplabv3plus_mobilenet |
| deeplabv3_hrnetv2_48 | deeplabv3plus_hrnetv2_48 |
| deeplabv3_hrnetv2_32 | deeplabv3plus_hrnetv2_32 |
All pretrained models: Dropbox, Tencent Weiyun
Note: The HRNet backbone was contributed by @timothylimyl. A pretrained backbone is available at Google Drive.
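To build one of the models listed above before loading a checkpoint, the constructors can be called by name. A minimal sketch, assuming `network.modeling` exposes the architectures from the table (the keyword arguments mirror '--model' and '--output_stride'; the class counts are the usual 19 for Cityscapes and 21 for Pascal VOC):

```python
from network import modeling  # assumed module layout

# 'deeplabv3plus_mobilenet' is any ARCH_NAME from the table above;
# num_classes=19 matches Cityscapes (use 21 for Pascal VOC).
model = modeling.deeplabv3plus_mobilenet(num_classes=19, output_stride=16)
```

The snippet below then loads a checkpoint and colorizes the predictions; `images` and `val_dst` (the validation dataset) are assumed to come from the repo's data loaders.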
```python
import torch
from PIL import Image

model.load_state_dict(torch.load(CKPT_PATH)['model_state'])
model.eval()
with torch.no_grad():
    outputs = model(images)                       # logits, (N, C, H, W)
preds = outputs.max(1)[1].detach().cpu().numpy()  # class indices, (N, H, W)
colorized_preds = val_dst.decode_target(preds).astype('uint8')  # RGB images, (N, H, W, 3), ranged 0~255, numpy array
# Do whatever you like here with the colorized segmentation maps
colorized_preds = Image.fromarray(colorized_preds[0])  # to PIL Image
```
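The resulting PIL image can then be saved as usual, e.g. with `colorized_preds.save('pred_0.png')` (the filename is illustrative).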
Note: the pretrained models in this repo do not use Separable Conv.

Atrous Separable Convolution is supported in this repo. We provide a simple tool, `network.convert_to_separable_conv`, to convert `nn.Conv2d` to `AtrousSeparableConvolution`; a sketch of its use follows below. Run main.py with '--separable_conv' if it is required. See 'main.py' and 'network/_deeplab.py' for more details.
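A minimal sketch of applying the conversion tool to a model's classifier head (the in-place, module-argument usage is an assumption inferred from the tool's name, not a documented signature; see 'network/_deeplab.py' for the actual interface):

```python
import network

# Assumed usage: walk the given module and swap each nn.Conv2d
# for an AtrousSeparableConvolution with the same configuration.
network.convert_to_separable_conv(model.classifier)
```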
Single image:
```bash
python predict.py --input datasets/data/cityscapes/leftImg8bit/train/bremen/bremen_000000_000019_leftImg8bit.png --dataset cityscapes --model deeplabv3plus_mobilenet --ckpt checkpoints/best_deeplabv3plus_mobilenet_cityscapes_os16.pth --save_val_results_to test_results
```
Image folder:
```bash
python predict.py --input datasets/data/cityscapes/leftImg8bit/train/bremen --dataset cityscapes --model deeplabv3plus_mobilenet --ckpt checkpoints/best_deeplabv3plus_mobilenet_cityscapes_os16.pth --save_val_results_to test_results
```
Start a visdom server for visualization. Remove '--enable_vis' if visualization is not needed.
```bash
# Run visdom server on port 28333
visdom -port 28333
```
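Once the server is running, the visualizations pushed during training should be viewable in a browser at http://localhost:28333.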
Run main.py with "--year 2012_aug" to train your model on Pascal VOC2012 Aug. You can also parallelize training across 4 GPUs with '--gpu_id 0,1,2,3'.
Note: There is no SyncBN in this repo, so training with multiple GPUs and a small batch size may degrade performance. See PyTorch-Encoding for more details about SyncBN.
```bash
python main.py --model deeplabv3plus_mobilenet --enable_vis --vis_port 28333 --gpu_id 0 --year 2012_aug --crop_val --lr 0.01 --crop_size 513 --batch_size 16 --output_stride 16
```
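Regarding the SyncBN note above: one workaround outside this repo is PyTorch's built-in converter, which replaces BatchNorm layers with synchronized ones when training under DistributedDataParallel. A minimal sketch (not this repo's method):

```python
import torch

# Replace every BatchNorm*d layer in the model with SyncBatchNorm.
# Only effective under torch.distributed training: the process group
# must be initialized and the model wrapped in DDP afterwards.
model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
```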
Run main.py with '--continue_training' to restore the state_dict of the optimizer and scheduler from YOUR_CKPT.
```bash
python main.py ... --ckpt YOUR_CKPT --continue_training
```
Results will be saved at ./results.
```bash
python main.py --model deeplabv3plus_mobilenet --enable_vis --vis_port 28333 --gpu_id 0 --year 2012_aug --crop_val --lr 0.01 --crop_size 513 --batch_size 16 --output_stride 16 --ckpt checkpoints/best_deeplabv3plus_mobilenet_voc_os16.pth --test_only --save_val_results
```
```
/datasets
    /data
        /cityscapes
            /gtFine
            /leftImg8bit
```
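The gtFine and leftImg8bit packages can be downloaded from https://www.cityscapes-dataset.com/ (registration required) and extracted into the layout above.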
```bash
python main.py --model deeplabv3plus_mobilenet --dataset cityscapes --enable_vis --vis_port 28333 --gpu_id 0 --lr 0.1 --crop_size 768 --batch_size 16 --output_stride 16 --data_root ./datasets/data/cityscapes
```
[1] Rethinking Atrous Convolution for Semantic Image Segmentation (arXiv:1706.05587)

[2] Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation (arXiv:1802.02611)