FastChat support for Open_llama_3b_v2 inference - help sought #100

Open
RDouglasSharp opened this issue Mar 17, 2024 · 0 comments

@RDouglasSharp

I use FastChat as the framework for both training and dialog-based inference, and FastChat supports Meta's LLaMA models. I was excited to try the 3B OpenLLaMA model, and the FastChat fine-tuning scripts all work perfectly with open_llama_3b_v2. Oddly, the FastChat inference framework does not work with either my fine-tuned model or the original model. Has anyone figured out how to get fastchat.serve.cli to support the openlm-research models?
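
In case it helps to clarify what I'm after, below is a rough, untested sketch of the kind of adapter registration I imagine is needed. It uses FastChat's BaseModelAdapter / register_model_adapter API and get_conv_template from fastchat.conversation; the class name OpenLlama3Bv2Adapter, the match rule, and the choice of the "vicuna_v1.1" template are my own guesses, not anything FastChat ships for these checkpoints.

```python
# Untested sketch: register an adapter so fastchat.serve.cli loads
# openlm-research/open_llama_3b_v2 with the slow tokenizer (the OpenLLaMA
# model card warns that the auto-converted fast tokenizer is buggy) and a
# reasonable conversation template.
from fastchat.conversation import get_conv_template
from fastchat.model.model_adapter import BaseModelAdapter, register_model_adapter
from transformers import AutoModelForCausalLM, AutoTokenizer


class OpenLlama3Bv2Adapter(BaseModelAdapter):
    """Hypothetical adapter for openlm-research OpenLLaMA checkpoints."""

    def match(self, model_path: str):
        return "open_llama" in model_path.lower()

    def load_model(self, model_path: str, from_pretrained_kwargs: dict):
        # Force the slow tokenizer, per the OpenLLaMA model card.
        tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
        model = AutoModelForCausalLM.from_pretrained(
            model_path, **from_pretrained_kwargs
        )
        return model, tokenizer

    def get_default_conv_template(self, model_path: str):
        # Guess: reuse the Vicuna template; a fine-tuned model should use
        # whatever template it was actually trained with.
        return get_conv_template("vicuna_v1.1")


# I'm unsure whether an adapter registered this late takes precedence over
# FastChat's built-in catch-all adapter -- that is part of what I'm asking.
register_model_adapter(OpenLlama3Bv2Adapter)
```

I would then launch the CLI as usual, e.g. `python3 -m fastchat.serve.cli --model-path openlm-research/open_llama_3b_v2` (or my fine-tuned checkpoint path), with the registration done before the CLI's adapter lookup runs. Is something along these lines the right approach, or is there built-in support I'm missing?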
