After fine-tuning on CIFAR-100 with the model that was self-supervised pretrained and then intermediate fine-tuned on ImageNet-22k, I got an AUROC of 87.169 for ID CIFAR-100 -> OOD CIFAR-10. This is far below the 98.3 AUROC reported in the paper. How can I reproduce the reported results?
The command line I ran and the results are shown below:
command line:
OMP_NUM_THREADS=1 python -m torch.distributed.launch --nproc_per_node=2 run_class_finetuning.py \
    --model beit_base_patch16_224 \
    --data_path /home/ubuntu/code/open-set/MOOD \
    --data_set cifar100 \
    --nb_classes 100 \
    --disable_eval_during_finetuning \
    --finetune /home/ubuntu/code/open-set/MOOD/beit_base_patch16_224_pt22k_ft22k.pth \
    --output_dir logs_cifar100_test \
    --batch_size 128 \
    --lr 1.5e-3 \
    --update_freq 1 \
    --warmup_epochs 5 \
    --epochs 90 \
    --layer_decay 0.65 \
    --drop_path 0.2 \
    --weight_decay 0.05 \
    --layer_scale_init_value 0.1 \
    --clip_grad 3.0
results: [evaluation output not preserved; the final AUROC was 87.169 for ID CIFAR-100 -> OOD CIFAR-10]