OpenLLaMA can quickly learn how to code #65
Great news, thanks for sharing!

It would be interesting if a LoRA could be trained, so that one could just apply the LoRA without needing to fine-tune the model. That LoRA might also be applicable to other OpenLLaMA-derived models.

Check out our OpenLLaMA v2 model, which is pretrained on a lot of code. The official release will happen very soon.

@jorgemcgomes Oh, I kinda forgot to reply here! @young-geng Congrats on the new release of v2! Trying it out right now :) I can see that both the multiple-spaces issue and the fast tokenizer are fixed in the Hugging Face base repo (the thermal example you provided). Good work!
I know the repo's readme mentions that this model apparently can't code because consecutive spaces are merged, and this has been discussed in #40.

However, I did some fine-tuning on the 3B model using the "fixed" tokenizer by @danielhanchen (https://huggingface.co/danielhanchen/open_llama_3b) with `use_fast=True`. This tokenizer encodes multiple spaces as multiple space tokens instead of discarding them, as the "official" tokenizer does.

My fine-tuning dataset includes very little code, as I wasn't really trying to teach that; it's just a small part of the instructions in the instruct datasets I used. But then I noticed this in one output of the model. Lo and behold, perfectly indented Python code.

A lot of people out there are simply repeating that OpenLLaMA is useless for code, but that doesn't seem to be the case, provided the tokenizer configuration is fixed and a little fine-tuning is done.
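For anyone unsure why merged spaces matter so much here, the effect can be illustrated with a toy sketch (not the actual OpenLLaMA tokenizer, just two hypothetical `encode`/`decode` round-trips): a tokenizer that collapses runs of spaces destroys Python indentation, while one that preserves each space keeps the code valid.

```python
import re

def roundtrip_collapsing(text: str) -> str:
    # Mimics a tokenizer whose normalization merges runs of spaces
    # into a single space, as the "official" tokenizer config did.
    return re.sub(r" {2,}", " ", text)

def roundtrip_preserving(text: str) -> str:
    # Mimics the "fixed" fast tokenizer, which keeps every space,
    # so encode/decode is lossless with respect to whitespace.
    return text

snippet = "def relu(x):\n    if x > 0:\n        return x\n    return 0\n"

# The collapsing round-trip flattens all indentation to one space,
# producing code the model can never learn correct indentation from.
print(roundtrip_collapsing(snippet))

# The preserving round-trip returns the snippet unchanged.
print(roundtrip_preserving(snippet))
```

Since the model only ever sees the tokenizer's output during training, the collapsing variant means it never observes properly indented code at all, which matches why a small amount of fine-tuning with the space-preserving tokenizer was enough to recover indentation.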