Hello community! I've trained a custom LLaMA 3.1 16B model, fine-tuned from the base model with custom tokens, and it works great.
Now I would like to turn it into a LLaVA-style model, i.e. produce an mmproj file that I can use in kobold.cpp (which is based on llama.cpp). Can you please help me out? How can I do that?
The pseudo-code:
llama_model.load("my-model")
llama_model.create_vision(config)
# The dataset looks like:
# <yuki>What is this?<data>{image_tokens_here}</data></yuki>\n<yuna>It is an apple.</yuna>\n<yuki>What is this?<data>{image_tokens_here}</data></yuki>\n<yuna>It is a banana.</yuna>
# Note: all the <...> tags here are custom tokens!
dataset = "JSONL file"
llama_model.vision.train(dataset)
llama_model.save_projector()
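To make the dataset format concrete, here is a sketch of what one JSONL record could look like; the `image` and `text` field names are just my assumption for illustration, not an established schema:

```python
import json

# Hypothetical JSONL record layout: "image" points at the picture on disk,
# "text" holds the conversation with my custom tokens. {image_tokens_here}
# marks where the projected image embeddings should be spliced in.
record = {
    "image": "images/0001.jpg",
    "text": "<yuki>What is this?<data>{image_tokens_here}</data></yuki>\n"
            "<yuna>It is an apple.</yuna>",
}
with open("dataset.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```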
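As far as I understand, llama.cpp itself can't train the vision part. The usual LLaVA recipe is to train a small projector in PyTorch (frozen CLIP vision tower, frozen LLM, trainable MLP that maps CLIP patch features into the LLM's embedding space) and then convert that projector to a GGUF mmproj. Below is a minimal sketch of what I think the training step would look like, assuming PyTorch and Hugging Face transformers; the model paths, CLIP checkpoint, learning rate, and the prepend-instead-of-splice shortcut are placeholders, not a tested recipe. Note that the custom tokens live entirely on the text side; the projector never sees them.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, CLIPVisionModel

# Placeholder paths/checkpoints -- substitute your own.
llm = AutoModelForCausalLM.from_pretrained("path/to/my-model")  # custom LLaMA 3.1 16B
vision = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")

for p in llm.parameters():
    p.requires_grad = False  # stage-1 LLaVA training freezes the LLM
for p in vision.parameters():
    p.requires_grad = False  # ...and the vision tower

# The only trainable part: an MLP mapping CLIP patch features into the
# LLM embedding space -- this is what becomes the mmproj.
projector = nn.Sequential(
    nn.Linear(vision.config.hidden_size, llm.config.hidden_size),
    nn.GELU(),
    nn.Linear(llm.config.hidden_size, llm.config.hidden_size),
)
opt = torch.optim.AdamW(projector.parameters(), lr=1e-3)

def train_step(pixel_values, input_ids, labels):
    # CLIP patch embeddings (drop the CLS token at position 0).
    with torch.no_grad():
        patches = vision(pixel_values).last_hidden_state[:, 1:, :]
    image_embeds = projector(patches)                    # (B, N_patches, d_llm)
    text_embeds = llm.get_input_embeddings()(input_ids)  # (B, T, d_llm)
    # Real code would splice image_embeds in at the {image_tokens_here}
    # marker; prepending is a simplification for this sketch.
    inputs = torch.cat([image_embeds, text_embeds], dim=1)
    # Mask the image positions out of the loss with -100.
    pad = torch.full((labels.size(0), image_embeds.size(1)), -100, dtype=labels.dtype)
    out = llm(inputs_embeds=inputs, labels=torch.cat([pad, labels], dim=1))
    out.loss.backward()
    opt.step()
    opt.zero_grad()
    return out.loss.item()
```

After training I would save `projector.state_dict()` and convert it to GGUF. llama.cpp has shipped LLaVA conversion scripts (under `examples/llava` in the versions I've looked at), but the exact script names and invocation change between releases, so the README in the current tree is the place to check for how to produce the mmproj file that kobold.cpp loads.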