Bind max steps and lr iterations #67
base: develop
Conversation
Looks good to me. I will run it by the ATS. Don't merge yet.
```yaml
# Set max_epochs or max_steps. Training stops at the first limit reached.
max_epochs: null
max_steps: 150000

lr:
```
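For context, a minimal sketch of how these config values would typically feed into a PyTorch Lightning `Trainer`. The plain `max_epochs`/`max_steps` variables stand in for however this repo actually reads its config; the wiring below is an assumption, not taken from this diff.

```python
import pytorch_lightning as pl

# Values mirror the YAML above: max_epochs is null, so the step limit
# governs the training length. (Hypothetical wiring; the real code in
# this PR may differ.)
max_epochs = None      # YAML: max_epochs: null
max_steps = 150_000    # YAML: max_steps: 150000

trainer = pl.Trainer(
    # Recent Lightning versions express "no step limit" as -1, not None.
    max_steps=max_steps if max_steps is not None else -1,
    # With max_steps set, max_epochs=None defers to the step limit.
    max_epochs=max_epochs,
)
```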
I think having this functionality is great, so thanks @Rilwan-Adewoyin for implementing it!
Just a quick question: what happens when/if the user passes both max_steps and max_epochs? Will the code then run until max_epochs is reached while the scheduler uses max_steps?
My two cents: it could be nice to add some logger info to indicate this!
Hey, thanks!
The default PyTorch Lightning behaviour is that training runs until the smaller of max_steps or max_epochs is reached, but the scheduler is aligned to max_steps.
And yep, that sounds like a good idea: I'll add a logger.info message for when the user sets both.
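A minimal sketch of what that message could look like; the helper name and arguments are illustrative, not the actual code in this PR:

```python
import logging

LOGGER = logging.getLogger(__name__)

def warn_if_both_limits_set(max_steps, max_epochs) -> None:
    """Hypothetical helper: emit an info message when both limits are given."""
    if max_steps is not None and max_epochs is not None:
        LOGGER.info(
            "Both max_steps (%s) and max_epochs (%s) are set: training stops at "
            "whichever limit is reached first, and the LR scheduler is aligned "
            "to max_steps.",
            max_steps,
            max_epochs,
        )
```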
@Rilwan-Adewoyin did you have a chance to implement the logger?
Context & Current Setup:
Changes: aim to tie training.lr.iterations to the training length.
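A sketch of the binding the description refers to, with assumed field names (the real resolution logic in this PR may differ): when the user does not set lr.iterations explicitly, fall back to the training length so the learning-rate schedule spans the whole run.

```python
def resolve_lr_iterations(lr_iterations, max_steps):
    """Hypothetical resolution of training.lr.iterations against training length.

    If lr.iterations is unset, bind it to max_steps so the LR schedule
    covers the full run; field names here are assumptions.
    """
    return max_steps if lr_iterations is None else lr_iterations

# Example: with the config above (max_steps: 150000) and no explicit
# lr.iterations, the scheduler would be bound to 150000 iterations.
assert resolve_lr_iterations(None, 150_000) == 150_000
```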