
Alternative Error Functions for Loss Calculation #176

Open
kbushick opened this issue Apr 5, 2024 · 1 comment
kbushick commented Apr 5, 2024

Is your feature request related to a problem? Please describe.
It seems that KLIFF natively supports only RMSE-style (sum-of-squares) loss functions. While users can define their own residual functions, the residual is always squared when the loss is computed (lines 528, 575, and 851 in loss.py):

```python
# NumPy/SciPy code path
loss = 0.5 * np.linalg.norm(residual) ** 2

# PyTorch code path
loss = torch.sum(torch.pow(residual, 2))
```

Describe the solution you'd like
It would be helpful to be able to specify the loss function to use in combination with the residual (e.g., MAE, MSE) via a flag in the Loss constructor. Since different loss functions can be better or worse suited depending on the specific use case, there could be real utility in allowing a simple switch in the loss computation; a sketch of the idea follows.
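For illustration, here is a minimal sketch of the kind of switch I have in mind. The `loss_fn` flag, the dispatch table, and `compute_loss` are all hypothetical names, not KLIFF's actual API:

```python
import numpy as np

# Hypothetical loss registry; "mse" mirrors KLIFF's current behavior.
_LOSS_FNS = {
    "mse": lambda r: 0.5 * np.linalg.norm(r) ** 2,
    "mae": lambda r: np.sum(np.abs(r)),
    # Huber with delta = 1.0: quadratic near zero, linear in the tails.
    "huber": lambda r: np.sum(
        np.where(np.abs(r) < 1.0, 0.5 * r**2, np.abs(r) - 0.5)
    ),
}

def compute_loss(residual: np.ndarray, loss_fn: str = "mse") -> float:
    """Reduce a residual vector to a scalar loss using the chosen function."""
    return _LOSS_FNS[loss_fn](residual)
```

A `Loss(..., loss_fn="mae")`-style constructor argument could then select the reduction without touching the user-supplied residual function.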

Describe alternatives you've considered
Knowing that the square is always applied, an alternative approach would be to craft a custom residual function that compensates, for example by returning the element-wise square root of the absolute residual so that the subsequent squaring yields an absolute-error loss (see the sketch below). I believe this is a feasible short-term workaround, but it may make the code harder to follow.
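As a concrete sketch of this workaround: the function below assumes the residual-function signature shown in KLIFF's documentation (identifier, natoms, weight, prediction, reference, data); the name `sqrt_abs_residual` is my own, and weights are ignored for brevity.

```python
import numpy as np

def sqrt_abs_residual(identifier, natoms, weight, prediction, reference, data):
    """Return sqrt(|prediction - reference|) element-wise, so that KLIFF's
    subsequent squaring (loss = 0.5 * ||residual||**2) produces
    0.5 * sum(|error|), i.e., an MAE-style loss instead of squared error."""
    return np.sqrt(np.abs(prediction - reference))
```

One caveat: the square root is non-differentiable where the error is zero, which can destabilize gradient-based optimizers close to a perfect fit.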

mjwen (Collaborator) commented May 9, 2024

Hi @kbushick, sorry for the late response!

We are making major updates to KLIFF, and a new version that supports flexible loss functions, among other features, is coming soon. Stay tuned!
