Our code is based on PyTorch. Additional dependencies are listed in requirements.txt. You can create an environment with `conda create --name vssl-ood --file requirements.txt`.
Pretrained model weights
The VSSL pretraining weights can be downloaded from this link.
| Methods | Weights |
| --- | --- |
| v-SimCLR | VideoSimCLR_kinetics400.pth.tar |
| v-MOCO | VideoMOCOv3_kinetics400.pth.tar |
| v-BYOL | VideoBYOL_kinetics400.pth.tar |
| v-SimSiam | VideoSimSiam_kinetics400.pth.tar |
| v-DINO | VideoDINO_kinetics400.pth.tar |
| v-MAE | VideoMAE_kinetics400.pth.tar |
| v-Supervised | VideoSupervised_kinetics400.pth.tar |
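The checkpoints above follow a consistent `Video<METHOD>_kinetics400.pth.tar` naming scheme. As a minimal illustration (the mapping and helper function below are this sketch's own, not part of the repo), the method-to-file lookup can be written as:

```python
# Checkpoint filenames from the table above, keyed by method name.
WEIGHT_FILES = {
    "v-SimCLR": "VideoSimCLR_kinetics400.pth.tar",
    "v-MOCO": "VideoMOCOv3_kinetics400.pth.tar",
    "v-BYOL": "VideoBYOL_kinetics400.pth.tar",
    "v-SimSiam": "VideoSimSiam_kinetics400.pth.tar",
    "v-DINO": "VideoDINO_kinetics400.pth.tar",
    "v-MAE": "VideoMAE_kinetics400.pth.tar",
    "v-Supervised": "VideoSupervised_kinetics400.pth.tar",
}

def checkpoint_for(method: str) -> str:
    """Look up the pretrained checkpoint filename for a VSSL method."""
    try:
        return WEIGHT_FILES[method]
    except KeyError:
        raise ValueError(f"unknown method: {method}") from None

print(checkpoint_for("v-BYOL"))  # VideoBYOL_kinetics400.pth.tar
```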
Dataset
Below, we list the sources of the datasets used in this work. We follow the standard/official instructions for usage; please see the documentation of the respective datasets for details.
Once the datasets are downloaded, please update the paths in tools/paths.py.
Please download the cache.zip for the evaluation datasets from this link and unzip it inside datasets/.
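The path update typically amounts to pointing each dataset name at its local root directory. A hypothetical sketch of such a mapping (the variable names and directories below are illustrative; consult tools/paths.py for the actual keys):

```python
import os

# Illustrative dataset-name -> local-root mapping; edit these paths to
# match where the datasets were downloaded on your machine.
DATASET_ROOTS = {
    "kinetics400": "/data/kinetics400",
    "charadesego": "/data/charades-ego",
    "mitv2": "/data/moments-in-time-v2",
}

def dataset_root(name: str) -> str:
    """Return the configured root for a dataset, warning if it is missing."""
    root = DATASET_ROOTS[name]
    if not os.path.isdir(root):
        # Catch stale paths early, before a long evaluation run starts.
        print(f"warning: {root} does not exist yet")
    return root

print(dataset_root("kinetics400"))
```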
```bash
bash launch.sh linear-ood-cego byol_train3rd_test1st.yaml charadesego VideoBYOL_kinetics400.pth.tar
bash launch.sh linear-ood-mit-tiny byol_mit_tiny_v2.yaml mitv2 VideoBYOL_kinetics400.pth.tar
bash launch.sh linear-ood byol_k700_actor_shift.yaml kinetics400 VideoBYOL_kinetics400.pth.tar
```

Note on linear-ood-mit-tiny: originally, fixed features were extracted separately and the fc layer was tuned on them, which saves compute. To stay consistent with the other evaluation scripts, this code instead loads the videos and performs fc tuning in a standard training loop. A slight performance difference can be expected, and the current setup is likely to perform better.
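The fixed-feature strategy mentioned above can be sketched in a few lines: extract features once with a frozen backbone, then train only a linear (fc) classifier on top. This is a minimal NumPy illustration of that idea; the feature dimensions, optimizer, and hyperparameters are this sketch's own, not the repo's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_fixed_features(n_videos, feat_dim=32):
    """Stand-in for a frozen VSSL backbone: returns cached per-video features."""
    return rng.normal(size=(n_videos, feat_dim))

def fc_tune(feats, labels, n_classes, lr=0.5, epochs=1000):
    """Tune only a linear classifier (softmax regression) on frozen features."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n  # cross-entropy gradient w.r.t. logits
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Synthetic linearly separable labels, so the probe has something to learn.
feats = extract_fixed_features(200)
true_W = rng.normal(size=(32, 4))
labels = (feats @ true_W).argmax(axis=1)

W, b = fc_tune(feats, labels, n_classes=4)
acc = ((feats @ W + b).argmax(axis=1) == labels).mean()
print(f"train accuracy after fc tuning: {acc:.2f}")
```

The end-to-end alternative used in the released scripts differs only in that the backbone's forward pass runs inside the training loop instead of being cached, which is why a small performance gap between the two setups is expected.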