Niels here from the open-source team at HF. Congrats on your work! I discovered it here: https://huggingface.co/papers/2406.06040 (feel free to claim the paper so that it appears under your HF profile).
It would be great to make the checkpoints compatible with Transformers by following this guide: https://huggingface.co/docs/transformers/custom_models. This allows people to load your models directly through the Transformers API with trust_remote_code=True (the modeling code itself would live on the Hub).
It would also be great to add `pipeline_tag: video-text-to-text` to the model card's metadata, enabling people to find the models easily.
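For instance, the YAML front matter at the top of the model card's README.md could look like this (a minimal sketch; the tags shown besides `pipeline_tag` are illustrative, not required):

```yaml
---
pipeline_tag: video-text-to-text
tags:
  - video          # illustrative extra tags
  - multimodal
---
```

With this metadata in place, the model shows up under the video-text-to-text filter on the Hub.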
Moreover, we encourage pushing each model checkpoint to its own model repository, with the weights at the root of the repo, so that download stats work for your models (currently the model repo says "downloads aren't tracked").
Let me know if you need any help regarding this.
Cheers,
Niels
Hi there,
I see the ST-LLM models are available here: https://huggingface.co/farewellthree/ST_LLM_weight/tree/main. However, we have some suggestions to improve their usability and visibility.
It would be great to add `pipeline_tag: video-text-to-text` to the model card's metadata, enabling people to find the models easily. Let me know if you need any help regarding this.
Cheers,
Niels