Llama-2-70b-chat-hf model fails to pass the 125k passkey test #6
Hi! Thank you for this issue!
Thanks for your response! I've changed the prompts to
and got answers like
Let me know if I did anything wrong.
Thank you so much for letting me know! It will definitely help us improve this work. I will try to solve this problem on Monday.
Hi, I've updated the code in
Thanks for your work, but I get an OOM error this time.
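(The command that triggered the OOM isn't shown above. Purely as a generic illustration, not this repo's actual fix: a 70B checkpoint usually has to be loaded sharded across all visible GPUs in half precision, since loading it in fp32 on one device is a common cause of OOM.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-70b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Shard the 70B weights across available GPUs and keep them in fp16
# to reduce the per-device memory footprint.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
```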
I have adapted the code to transformers 4.37, so please remember to set the corresponding argument. In this case:
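(The exact argument name is missing above. As an assumption only, and not necessarily what the maintainer meant: transformers 4.36+ exposes an `attn_implementation` argument on `from_pretrained`, which is the kind of setting such adaptations typically require.)

```python
from transformers import AutoModelForCausalLM

# ASSUMPTION: the elided setting is the attention-implementation switch
# added in transformers 4.36/4.37; the actual flag may differ.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-chat-hf",
    attn_implementation="eager",  # or "sdpa" / "flash_attention_2"
)
```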
It works like a charm!
Thank you! This issue really helps improve this work. I'm happy to answer any further questions.
Hi, great work!
I have been conducting passkey tests on several models. The TinyLlama-1.1B-Chat-v1.0 (2k) model successfully passed the 20k test and, after fine-tuning, the 125k test with a 60% accuracy rate. However, the Llama-2-70b-chat-hf (4k) model achieved only 40% accuracy at a 50k context and 0% at a 125k context.
I have been using the following script:
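(For reference, a minimal generic passkey-retrieval harness; all prompt wording, function names, and trial counts here are illustrative assumptions, not this repo's actual script.)

```python
import random
import torch

def make_passkey_prompt(n_garbage: int, passkey: int) -> str:
    """Bury a 5-digit passkey inside repeated filler sentences."""
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    head = "There is a passkey hidden in the text below. Remember it.\n"
    needle = f"The passkey is {passkey}. Remember it. {passkey} is the passkey.\n"
    garbage = filler * n_garbage
    # Place the needle at a random sentence boundary in the haystack.
    pos = random.randint(0, n_garbage) * len(filler)
    question = "\nWhat is the passkey? The passkey is"
    return head + garbage[:pos] + needle + garbage[pos:] + question

def run_trial(model, tokenizer, n_garbage: int) -> bool:
    """Return True if the model reproduces the hidden passkey."""
    passkey = random.randint(10000, 99999)
    prompt = make_passkey_prompt(n_garbage, passkey)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
    answer = tokenizer.decode(
        out[0, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
    return str(passkey) in answer

# Accuracy at a given context size = fraction of trials retrieved, e.g.:
# acc = sum(run_trial(model, tokenizer, n_garbage=3000) for _ in range(10)) / 10
```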
The results I've been getting are as follows:
How can I achieve results consistent with those reported in the README?
Thank you.