There are many large models; why did you choose Llama?

Thank you very much for your interest in our work. We have since generalized Time-LLM into a universal reprogramming-alignment framework capable of adapting to any large language model. Llama-7B was chosen at the time primarily for its strong performance, which made it an ideal backbone for a preliminary demonstration. Researchers are encouraged to explore and experiment with other state-of-the-art large models as well.