While fine-tuning LoRA using PyTorch Lightning, I consistently encounter an `assert param.grad is not None` error during backpropagation. Increasing the gradient accumulation steps delays the issue but doesn't resolve it. I suspect this might be a backend problem. Has anyone else experienced this issue?
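For anyone debugging the same assertion, a minimal diagnostic sketch like the one below may help narrow it down (the helper name `report_missing_grads` is my own, not a Lightning or PEFT API). With LoRA, most base-model weights are frozen (`requires_grad=False`), so their `.grad` is legitimately `None`; a check that asserts a gradient on *every* parameter will fail by design, and only trainable parameters should be inspected.

```python
import torch

def report_missing_grads(model: torch.nn.Module) -> None:
    """Hypothetical helper: after loss.backward(), print any *trainable*
    parameter whose .grad is still None. Frozen LoRA base weights are
    skipped, since they are expected to have no gradient."""
    for name, param in model.named_parameters():
        if param.requires_grad and param.grad is None:
            print(f"no grad for trainable param: {name}")
```

Calling this right after `loss.backward()` (e.g. in Lightning's `on_after_backward` hook) should show whether the failing parameters are genuinely trainable adapter weights or frozen base weights that a strict assert is checking by mistake.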