
Does ReID work out of the box with commands given in demo section of readme? #111

Open
RajS999 opened this issue Aug 7, 2024 · 2 comments

Comments


RajS999 commented Aug 7, 2024

I tried the following commands to try out ReID on my custom 15-second video of three people walking, getting occluded behind two trees, and emerging again from behind them.

Single-class with YOLOX:

 python3 tools/demo.py video \
 --path <my-video-path> \
 -f yolox/exps/example/mot/yolox_x_mix_det.py \
 -c pretrained/bytetrack_x_mot17.pth.tar \
 --with-reid --fuse-score --fp16 --fuse --save_result

Multi-class with YOLOX:

 python3 tools/mc_demo.py video \
 --path <my-video-path> \
 -f yolox/exps/example/mot/yolox_x_mix_det.py \
 -c pretrained/bytetrack_x_mot17.pth.tar \
 --with-reid --fuse-score --fp16 --fuse --save_result

Multi-class with YOLOv7:

 python3 tools/mc_demo_yolov7.py \
 --weights pretrained/yolov7.pt \
 --source <my-video-path> \
 --fuse-score --agnostic-nms --with-reid

However, none of them work. When people emerge from behind the trees after about seven seconds of walking, they are not re-assigned their old IDs; they get new IDs instead.

Does ReID actually work? Are the above commands correct, or am I doing something wrong here?


RajS999 commented Aug 8, 2024

I also tried the options --track_buffer 500 --appearance_thresh 0.05 --proximity_thresh 0, but no luck:

 python3 tools/mc_demo_yolov7.py \
 --weights pretrained/yolov7.pt \
 --source <my-video-path> \
 --fuse-score --agnostic-nms --with-reid \
 --track_buffer 500 --appearance_thresh 0.05 --proximity_thresh 0

and also just --track_buffer 500, again without luck:

 python3 tools/mc_demo_yolov7.py \
 --weights pretrained/yolov7.pt \
 --source <my-video-path> \
 --fuse-score --agnostic-nms --with-reid \
 --track_buffer 500


RajS999 commented Aug 8, 2024

Some observations:

In model zoo section, the repo says:

  • We used the publicly available ByteTrack model zoo trained on MOT17, MOT20 and ablation study for YOLOX object detection.
  • Ours trained ReID models can be downloaded from MOT17-SBS-S50, MOT20-SBS-S50.
  • For multi-class MOT use YOLOX or YOLOv7 trained on COCO (or any custom weights).

I believe this means that, for single-class MOT, models from the publicly available ByteTrack zoo will NOT work for ReID, and we should use the pretrained ReID models instead (as stated in the second bullet point above).

But the demo commands use the publicly available ByteTrack model bytetrack_x_mot17.pth.tar:

# Original example
python3 tools/demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result

# Multi-class example
python3 tools/mc_demo.py video --path <path_to_video> -f yolox/exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --with-reid --fuse-score --fp16 --fuse --save_result

I tried using the pretrained models MOT17-SBS-S50 and MOT20-SBS-S50 in the above commands, but they seem incompatible.

They give this error:

2024-08-08 16:21:05.378 | INFO     | __main__:main:325 - loading checkpoint
Traceback (most recent call last):
  File "tools/demo.py", line 368, in <module>
    main(exp, args)
  File "tools/demo.py", line 328, in main
    model.load_state_dict(ckpt["model"])
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for YOLOX:
        Missing key(s) in state_dict: "backbone.backbone.stem.conv.conv.weight", "backbone.backbone.stem.conv.bn.weight", "backbone.backbone.stem.conv.bn.bias", "backbone.backbone.stem.conv.bn.running_mean"
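This mismatch would make sense if the SBS-S50 files are ReID checkpoints rather than YOLOX detector checkpoints: demo.py tries to load them into the YOLOX model, whose parameter names simply are not in the file. A quick way to diagnose this is to diff the model's expected keys against the checkpoint's keys before calling load_state_dict. The helper and the sample key names below are my own illustration (only the backbone.backbone.stem.* key is taken from the traceback above; the ReID-side names are hypothetical):

```python
def report_state_dict_mismatch(model_keys, ckpt_keys):
    """Return (missing, unexpected) key lists, mirroring what
    PyTorch's load_state_dict reports before raising."""
    missing = sorted(set(model_keys) - set(ckpt_keys))
    unexpected = sorted(set(ckpt_keys) - set(model_keys))
    return missing, unexpected

# Illustrative key samples: a YOLOX detector expects backbone.backbone.stem.*
# keys (as in the traceback above), while a ReID checkpoint carries a
# completely different layout (names below are hypothetical).
yolox_keys = ["backbone.backbone.stem.conv.conv.weight", "head.cls_preds.0.weight"]
reid_keys = ["backbone.conv1.weight", "heads.bottleneck.0.weight"]

missing, unexpected = report_state_dict_mismatch(yolox_keys, reid_keys)
print("missing:", missing)        # every detector key is absent
print("unexpected:", unexpected)  # every checkpoint key is foreign
```

With the real files you would compare model.state_dict().keys() against torch.load(path, map_location="cpu")["model"].keys() (demo.py reads ckpt["model"], per the traceback). A near-total mismatch like this means the file belongs to a different architecture, not that the download is corrupt.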

Also, the third bullet point says, "For multi-class MOT use YOLOX or YOLOv7 trained on COCO (or any custom weights)." But it does not point to any pretrained models for multi-class MOT. Should a publicly available YOLOX or YOLOv7 model work for multi-class MOT? As specified in the commands in my original question, the publicly available yolov7 did not work for me either.
