
Why Llama-7B? #160

Open
mxiwiwn opened this issue Oct 31, 2024 · 1 comment

Comments

mxiwiwn commented Oct 31, 2024

There are many large language models; why did you choose Llama?

kwuking (Collaborator) commented Jan 6, 2025

Thank you very much for your interest in our work. I have since generalized Time-LLM into a universal reprogramming alignment framework capable of adapting to any large language model. Llama-7B was selected at the time primarily for its strong performance, which made it a natural choice for the initial demonstration. Researchers are encouraged to explore and experiment with other state-of-the-art large models as well.
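One way to picture the "adapts to any backbone" claim: the reprogramming layer only needs to project time-series patch embeddings into whatever hidden dimension the chosen LLM uses. The sketch below is purely illustrative — the registry, dimensions, and function names are hypothetical and are not Time-LLM's actual API; the hidden sizes listed are the standard ones for each model family.

```python
# Hypothetical sketch of a backbone-agnostic design: the reprogramming
# head's output dimension is keyed off the chosen LLM's hidden size,
# so swapping backbones only changes a projection dimension.
# (Registry and function names are illustrative, not Time-LLM's API.)

BACKBONES = {
    "llama-7b": {"hidden_dim": 4096},   # LLaMA-7B hidden size
    "gpt2": {"hidden_dim": 768},        # GPT-2 (small) hidden size
    "bert-base": {"hidden_dim": 768},   # BERT-base hidden size
}

def build_reprogramming_head(backbone: str, n_prototypes: int = 1000) -> dict:
    """Return the configuration a cross-attention reprogramming layer
    would need to map time-series patches into the backbone's space."""
    if backbone not in BACKBONES:
        raise ValueError(f"unknown backbone: {backbone}")
    return {
        "backbone": backbone,
        "n_prototypes": n_prototypes,   # text prototypes attended over
        "out_dim": BACKBONES[backbone]["hidden_dim"],
    }
```

Under this view, trying a different large model is a matter of registering its hidden size and re-training the (small) reprogramming parameters, while the backbone itself stays frozen.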
