-
XMem is class-agnostic and does not know what to track (there are many possible segmentations of the same scene) unless it is told. Are you dealing with specific object categories?
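If the target is a fixed category, one way to avoid hand-annotating every video is to generate the first-frame mask automatically with an off-the-shelf instance-segmentation model and hand that mask to XMem for propagation. Below is a minimal sketch, not XMem's own pipeline: the detector is torchvision's pretrained Mask R-CNN (torchvision >= 0.13), the COCO class id and score threshold are arbitrary examples, and `propagate_with_xmem` is a hypothetical wrapper around the repo's propagation code.

```python
# Sketch: build the first-frame mask with a pretrained detector, then let XMem
# forward-propagate it. propagate_with_xmem() is a hypothetical placeholder.
import torch
from PIL import Image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

TARGET_LABEL = 1      # example COCO class id of the category you care about (1 = person)
SCORE_THRESH = 0.8    # keep only confident detections

detector = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def first_frame_mask(frame_path: str) -> torch.Tensor:
    """Return a binary H x W mask for the target category in the first frame."""
    img = to_tensor(Image.open(frame_path).convert("RGB"))
    out = detector([img])[0]
    keep = (out["labels"] == TARGET_LABEL) & (out["scores"] > SCORE_THRESH)
    if not keep.any():
        return torch.zeros(img.shape[-2:], dtype=torch.uint8)  # nothing detected
    # merge all kept instances into one foreground mask (per-instance ids also work)
    masks = out["masks"][keep, 0] > 0.5
    return masks.any(dim=0).to(torch.uint8)

# Per video: build the mask once, then forward-propagate with XMem.
# mask = first_frame_mask("videos/clip_001/frames/0000.jpg")
# propagate_with_xmem("videos/clip_001/frames", mask)  # hypothetical wrapper
```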
-
I want to use XMem on my own data. I got the GUI (interactive_demo.py) to work, and it performs very well on the detection task I need: clicking on the object I want to detect and forward-propagating produces the right masks and visualizations. However, I am now a bit lost. I want to run XMem over all my videos so that it outputs masks/annotations for the target object in every frame. Do I need to initialize every video with an annotation (i.e. annotate the first frame(s) so XMem can forward-propagate through the rest), or is there a way to do this automatically?