Feature/improve loss functions #70
base: develop
Conversation
- Separate config for loss and metrics
- MAE, RMSE, LogCosh
Thanks for tackling these.
I have some doubts about some design decisions, as I have also mentioned in a conversation.
Could you please elaborate why the loss functions now have a dynamic feature index?
I think the metrics could be a nice list, so we can calculate multiple metrics as well.
- Add comment to the `if dim == 4` check
- Move to in-place out multiplication
- Is an abstract weighted loss class
- Combines feature weighting and node weighting
- mse, mae, rmse, logcosh now subclass from WeightedLoss
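A minimal sketch of the abstract weighted-loss design the commit describes: a base class that applies node weights and optional feature weights around a pointwise error, with concrete losses supplying only that error. The names (`WeightedLoss`, `node_weights`, `feature_weights`, `error`) and shapes are assumptions for illustration, not the actual anemoi-training API.

```python
from abc import ABC, abstractmethod

import torch
from torch import nn


class WeightedLoss(nn.Module, ABC):
    """Combine per-node and per-feature weighting around a pointwise error."""

    def __init__(self, node_weights: torch.Tensor, feature_weights=None):
        super().__init__()
        # node_weights: shape (nodes,); feature_weights: shape (features,) or None
        self.register_buffer("node_weights", node_weights)
        self.feature_weights = feature_weights

    @abstractmethod
    def error(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Pointwise error, shape (..., nodes, features)."""

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        err = self.error(pred, target)
        if self.feature_weights is not None:
            err = err * self.feature_weights            # broadcast over features
        err = err * self.node_weights.unsqueeze(-1)     # broadcast over nodes
        return err.mean()


class MAELoss(WeightedLoss):
    def error(self, pred, target):
        return (pred - target).abs()


class LogCoshLoss(WeightedLoss):
    def error(self, pred, target):
        return torch.log(torch.cosh(pred - target))
```

Under this layout, adding a new loss only requires implementing `error`, while all weighting logic lives once in the base class.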
- Was causing issues with training due to the new shape of the data
I left a partial review on some of the core problems I see with the current implementation. Thank you for already addressing a lot of my previous comments.
I think there are a few main issues from my side regarding the initialization of the losses. It's likely we could just solve this in a quick call rather than asynchronously.
- Also change latitude to Node
Provides a way to use arbitrary scalings
We now have a downstream application which depends on this branch, so it would be nice if it could be merged soonish
- Allows for multiple losses to be combined with associated weights
- Allows access to underlying functions
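A rough sketch of what the two points above could look like: a wrapper that sums several losses with associated weights while keeping the underlying loss modules accessible. The class name `CombinedLoss` and its attributes are illustrative assumptions, not the PR's actual implementation.

```python
import torch
from torch import nn


class CombinedLoss(nn.Module):
    """Weighted sum of several losses; the underlying losses stay accessible."""

    def __init__(self, losses, weights):
        super().__init__()
        assert len(losses) == len(weights), "one weight per loss"
        # ModuleList keeps each loss reachable, e.g. combined.losses[0]
        self.losses = nn.ModuleList(losses)
        self.weights = list(weights)

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return sum(w * loss(pred, target)
                   for w, loss in zip(self.weights, self.losses))
```

Usage would then look like `combined = CombinedLoss([nn.MSELoss(), nn.L1Loss()], [1.0, 0.5])`, with `combined.losses[1]` still exposing the plain L1 loss.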
* Add ScaleTensor
  - Allows dynamic setup of scalars
  - Rework variable_scaler to use it
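One way a `ScaleTensor` like the one described could work is to collect named scalars per dimension and apply them lazily via broadcasting. The method names (`add_scalar`, `scale`) and semantics here are assumptions sketched from the commit message, not the actual anemoi-training class.

```python
import torch


class ScaleTensor:
    """Collect named per-dimension scalars and apply them to a tensor on demand."""

    def __init__(self):
        # name -> (dimension to scale along, scaling tensor)
        self._scalars = {}

    def add_scalar(self, dim: int, scalar: torch.Tensor, name: str):
        """Register a 1-D scaling tensor to be applied along `dim`."""
        self._scalars[name] = (dim, scalar)

    def scale(self, x: torch.Tensor) -> torch.Tensor:
        """Apply all registered scalars to `x` via broadcasting."""
        for dim, scalar in self._scalars.values():
            shape = [1] * x.ndim
            shape[dim] = -1  # align the scalar with the target dimension
            x = x * scalar.reshape(shape)
        return x
```

Registering scalars by name after construction is what would make the setup "dynamic": a variable scaler could add or replace scalings without the loss knowing about them in advance.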
…moi-training into feature/improve_loss_functions
- Also fix scalar being incorrectly made
- Info about indexing
- Remove 'include_', and 'add_scalar_'
- Add 'scalars' to control scalars
- Add docs
- Rework tests
- Add functional loss function
- Remove variable_scaling from kwargs
- Require subclass of BaseWeightedLoss
Improve loss functions.
Previously, `WeightedMSELoss` was the hard-coded loss for training and metrics. This PR changes the loss function configuration to be fully modular, so that any `nn.Module` can be used. Additionally, it adds the following loss functions to `anemoi-training`:
Arbitrary Scalars
See #96 for more information
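To illustrate what "fully modular" configuration could mean in practice, here is a hedged sketch of resolving a loss by name from a config mapping, so any `nn.Module`-compatible loss can be selected. The registry contents and config keys (`name`, `kwargs`) are hypothetical and do not reflect anemoi-training's actual schema.

```python
from torch import nn

# Hypothetical registry: any nn.Module-compatible loss can be listed here.
LOSS_REGISTRY = {
    "mse": nn.MSELoss,
    "mae": nn.L1Loss,
    "huber": nn.HuberLoss,
}


def build_loss(config: dict) -> nn.Module:
    """Instantiate a loss from a config mapping like {"name": ..., "kwargs": ...}."""
    loss_cls = LOSS_REGISTRY[config["name"]]
    return loss_cls(**config.get("kwargs", {}))


loss = build_loss({"name": "huber", "kwargs": {"delta": 0.5}})
```

With such a scheme, swapping the training loss becomes a config change rather than a code change.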
📚 Documentation preview 📚: https://anemoi-training--70.org.readthedocs.build/en/70/