fabianlim changed the title from "BNB Benchmark Experiments Run Out of Memory with Non-Zero Lora Dropout" to "Quantized Peft Benchmark Experiments Run Out of Memory with Non-Zero Lora Dropout" on Nov 13, 2024.
While this issue was originally reported for BNB, we have now also seen it for Quantized Peft in general in #106. Updating the issue to reflect the general case.
Description
Update: Previously the OOM was reported only for BNB, but it is now observed for Quantized Peft in general, even for GPTQ; see #106.
Outliers
The previous description below describes the issue only for BNB.
BNB experiments run out of memory in the new benchmarks that set `lora_dropout=0.1`. Compared to AutoGPTQ, we do not notice this issue.
There might be a slight overhead in the dropout implementation that causes the experiment to run out of memory for large models.
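A minimal sketch of where that overhead could come from, assuming a PEFT-style LoRA wrapper (this is illustrative, not the actual PEFT or fms-acceleration code): with `lora_dropout > 0`, `nn.Dropout` materializes a new activation tensor that autograd keeps for backward, on top of the input already saved by the base quantized layer. With dropout of zero the dropout path is an identity and no extra copy is retained.

```python
import torch
import torch.nn as nn


class LoraLinearSketch(nn.Module):
    """Illustrative LoRA-wrapped linear layer (hypothetical, not PEFT's implementation).

    When lora_dropout > 0, the dropout output is a fresh tensor saved for the
    backward pass in addition to the base layer's saved input, adding per-layer
    activation memory that can tip large models into OOM.
    """

    def __init__(self, base_layer: nn.Module, in_features: int, out_features: int,
                 r: int = 16, lora_alpha: int = 32, lora_dropout: float = 0.1):
        super().__init__()
        self.base_layer = base_layer  # e.g. a 4-bit BNB linear in the real setup
        self.lora_A = nn.Linear(in_features, r, bias=False)
        self.lora_B = nn.Linear(r, out_features, bias=False)
        self.scaling = lora_alpha / r
        # identity when dropout == 0, so no extra activation is materialized
        self.lora_dropout = nn.Dropout(p=lora_dropout) if lora_dropout > 0 else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        result = self.base_layer(x)
        # dropout(x) is an extra tensor kept for backward when p > 0
        result = result + self.lora_B(self.lora_A(self.lora_dropout(x))) * self.scaling
        return result
```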
Reproduce Issue
- `lora_dropout=0.` enters training
- `lora_dropout=0.1` runs out of memory (see the repro sketch below)
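A minimal repro sketch, assuming a standard transformers + peft + bitsandbytes setup; the model name, target modules, and LoRA hyperparameters are illustrative and not taken from the benchmark configs:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # illustrative; benchmarks use larger models

# 4-bit BNB quantization, as in the quantized-peft benchmarks
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, quantization_config=bnb_config)

# lora_dropout=0. trains; switching to lora_dropout=0.1 reproduces the OOM
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,  # set to 0. for the passing case
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```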