fix!: Rework Loss Scalings to provide better modularity #52
base: main
Conversation
Please consider using the knowledge about variables that comes from the dataset metadata. See https://github.com/ecmwf/anemoi-transform/blob/7cbf5f3d4baa37453022a5a97e17cc71a5b8ceeb/src/anemoi/transform/variables/__init__.py#L47
We have given this some thought. Although we initially wanted to use the information from the dataset, I opted to allow defining our own groups here, so that different scalings can be applied to self-defined groups.
…umstances' of https://github.com/ecmwf/anemoi-core into 7-pressure-level-scalings-only-applied-in-specific-circumstances
Hi, I would like to know what you think about making all scalers explicit in the config file. Something similar to the
Seems like a good idea! Would you like to add this in this PR?
I'm not sure what the best approach is. On the one hand, adding more work to this PR would increase its complexity, which might make it more logical to address this refactor in a future PR. On the other hand, this PR already introduces some changes to the configs, and the future PRs would also involve config changes. From that point of view, it might be better to have one PR and communicate all the changes to users at once. What do you think?
I'm happy for it to be included in this PR! Not sure if @sahahner or @mc4117 have other views?
I fully agree with @JPXKQX. I personally think that all config keywords related to the loss and its scaling are a bit scattered in the training config. Ideally, I think the config could look something like this:
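A hedged sketch of how such an explicit, grouped layout might look, with everything under `losses`; the scaler keys and `_target_` paths other than `node_weights` are illustrative assumptions, not the actual proposal:

```yaml
# Illustrative only: scaler names and _target_ paths are assumptions.
losses:
  loss:
    _target_: anemoi.training.losses.mse.WeightedMSELoss
  scalers:
    general_variable:
      _target_: anemoi.training.losses.scalers.GeneralVariableLossScaler
      weights:
        2t: 1.0
    pressure_level:
      _target_: anemoi.training.losses.scalers.PressureLevelScaler
      slope: 0.001
  node_weights:
    _target_: anemoi.training.losses.nodeweights.GraphNodeAttribute
    target_nodes: ${graph.data}
    node_attribute: area_weight
```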
Great effort! The flexible scaling of the loss is really nice.
However, in its current version, all config keywords related to the loss and its scaling, as well as the code, are a bit scattered. Why is variable scaling under training while node weights are under losses? In my opinion, everything should be under losses. See my comment above.
…name all scalars to scalers. move nan-mask for loss function into training/scaling
for more information, see https://pre-commit.ci
@@ -111,25 +134,24 @@ def __init__(
# Kwargs to pass to the loss function
This should be added to the new structure.
node_weights:
  _target_: anemoi.training.losses.nodeweights.GraphNodeAttribute
  target_nodes: ${graph.data}
  node_attribute: area_weight
  scale_dim: 2 # dimension on which scaling is applied
This class doesn't have a scale_dim attribute.
It may also be useful to add a general scale-by-node-attribute scaler.
Not yet. Refactor is still ongoing.
* reorder-parameter-names-for-plot
* use util function to get variable level from metadata if possible
Solves the problem explained in issue #7 by refactoring the variable scalings into a general variable scaling and a pressure-level scaling.
@mc4117, @pinnstorm and I came up with a new structure, which this PR implements.
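The split into a general variable scaling and a pressure-level scaling can be sketched roughly as follows; the class and function names here are hypothetical illustrations of the modular idea, not the PR's actual API:

```python
# Hypothetical sketch: each scaler returns per-variable weights, and the
# loss multiplies the weights of all configured scalers together.
# Names (BaseScaler, GeneralVariableScaler, ...) are illustrative only.

class BaseScaler:
    def scaling(self, variables):
        raise NotImplementedError

class GeneralVariableScaler(BaseScaler):
    """Constant weight per variable, taken directly from the config."""
    def __init__(self, weights):
        self.weights = weights  # e.g. {"2t": 1.0, "10u": 0.5}

    def scaling(self, variables):
        return [self.weights.get(v, 1.0) for v in variables]

class PressureLevelScaler(BaseScaler):
    """Weight proportional to the pressure level parsed from the variable name."""
    def __init__(self, slope=0.001):
        self.slope = slope

    def scaling(self, variables):
        out = []
        for v in variables:
            # Variables named like "q_500" get a level-dependent weight;
            # surface variables like "2t" are left untouched.
            name, _, level = v.rpartition("_")
            out.append(self.slope * int(level) if name and level.isdigit() else 1.0)
        return out

def combined_scaling(scalers, variables):
    """Multiply the weights of all configured scalers elementwise."""
    total = [1.0] * len(variables)
    for s in scalers:
        total = [t * w for t, w in zip(total, s.scaling(variables))]
    return total
```

Because each scaler is an independent object, adding a new scaling only means configuring another scaler instance rather than touching the loss code.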
Changes in this PR
-> Compared to before, no code changes are necessary to apply additional scalings; they can simply be added to the training-config files.
While this PR does introduce a breaking change to the training-config files, it comes with neat new features that should be useful to many of us.
We did our best to document the changes to the training-config files as thoroughly as possible.
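For example, adding an extra scaling might then only require a new entry in the training config; the entry name and `_target_` path below are illustrative assumptions, not the PR's actual schema:

```yaml
# Illustrative only: the scaler name and _target_ path are assumptions.
training:
  scalers:
    my_extra_scaling:
      _target_: anemoi.training.losses.scalers.GeneralVariableLossScaler
      weights:
        2t: 1.0
        10u: 0.5
```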
Tasklist
📚 Documentation preview 📚: https://anemoi-training--52.org.readthedocs.build/en/52/
📚 Documentation preview 📚: https://anemoi-graphs--52.org.readthedocs.build/en/52/
📚 Documentation preview 📚: https://anemoi-models--52.org.readthedocs.build/en/52/