I have trained a LoRA adapter by following the tutorial in: Llama 3.2 Vision finetuning - Radiography use case.
Why is there a difference in the number of trainable parameters when I load the adapter? (I need to load the LoRA adapter using PEFT, not Unsloth.)
1- I load the lora_adapter using Unsloth:
trainable params: 67,174,400 || all params: 10,737,395,235 || trainable%: 0.6256
code:
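The loading code is roughly the sketch below; the adapter directory name `lora_adapter` and the `load_in_4bit` flag are assumptions taken from the tutorial, and calling `print_trainable_parameters()` assumes Unsloth hands back a PEFT-wrapped model:

```python
# Minimal sketch of the Unsloth loading path (paths/flags are placeholders).
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "lora_adapter",      # directory the adapter was saved to after training
    load_in_4bit=True,   # matches the tutorial's 4-bit setup
)

# Assuming the returned model is PEFT-wrapped, this prints the
# "trainable params: ... || all params: ... || trainable%: ..." line above.
model.print_trainable_parameters()
```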
2- I load it using PEFT:
Base model parameters: 9824213008
LoRA adapter parameters: 9824213008
Combined model parameters: 9824213008
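The PEFT side looks roughly like the sketch below; the base model id, the adapter directory, and the `count_params` helper are placeholders for what I actually ran:

```python
# Minimal sketch of the PEFT loading path (model id and paths are placeholders).
import torch
from transformers import MllamaForConditionalGeneration
from peft import PeftModel

def count_params(model):
    # Total parameter count, trainable and frozen alike.
    return sum(p.numel() for p in model.parameters())

base_model = MllamaForConditionalGeneration.from_pretrained(
    "meta-llama/Llama-3.2-11B-Vision-Instruct",
    torch_dtype=torch.bfloat16,
)
print("Base model parameters:", count_params(base_model))

# Attach the saved LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, "lora_adapter")
print("LoRA adapter parameters:", count_params(model))

# Merge the adapter weights back into the base model.
merged = model.merge_and_unload()
print("Combined model parameters:", count_params(merged))
```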