Hi,
Does anyone have guidance on how to load pretrained weights that use RoPE when fine-tuning on a different input resolution?
It's unclear to me how EVA-02 went from 1024-resolution MIM checkpoints to fine-tuning at 1536 for O365. I'd like to do the inverse.
What are the primary issues to avoid? Is it the changing size of the global attention indexes?
Appreciate any insight people have to offer. Let me know if I can share more information to make the discussion easier.
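For context, here is my current understanding as a minimal sketch (assuming a generic 2D axial RoPE, not EVA-02's exact implementation): the rotation angles are recomputed from the patch-grid positions and involve no learned parameters, so the checkpoint weights themselves should load unchanged; only the grid size, and anything indexed by it (e.g. window/global attention indexes), differs between resolutions.

```python
import numpy as np

def axial_rope_angles(grid_size, head_dim, base=10000.0):
    """Rotation angles for 2D axial RoPE on a grid_size x grid_size patch grid.

    RoPE has no learned parameters: the angles are derived from positions
    and fixed frequencies, so pretrained weights load unchanged at a new
    resolution; only the sequence length (number of patches) changes.
    """
    # Half of the head dim rotates with y-coordinates, half with x.
    dim = head_dim // 2
    freqs = 1.0 / (base ** (np.arange(0, dim, 2) / dim))   # (dim/2,)
    pos = np.arange(grid_size)                              # (grid,)
    angles_1d = np.outer(pos, freqs)                        # (grid, dim/2)
    # Broadcast per-axis angles to every (y, x) patch position.
    ay = np.repeat(angles_1d[:, None, :], grid_size, axis=1)  # y-axis angles
    ax = np.repeat(angles_1d[None, :, :], grid_size, axis=0)  # x-axis angles
    return np.concatenate([ay, ax], axis=-1).reshape(grid_size * grid_size, -1)

# 1024-px input with 16-px patches -> 64x64 grid; 1536-px -> 96x96 grid.
a64 = axial_rope_angles(64, 64)
a96 = axial_rope_angles(96, 64)
print(a64.shape, a96.shape)  # only the number of positions differs
```

If that picture is right, the open question is just how the attention index bookkeeping should change with the grid, not the weight loading itself.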
Thanks