Issue batching with tensors #14
Hi @pearsonkyle, thanks a lot for the report (next time, please try to include a minimal code example to reproduce the error; it took me a while :D). Anyway, it seems that you are updating the TLE using torch tensors that carry gradient information as elements of the dictionary (in this line -->). This triggers an error internally, because at some point the dictionary elements are deep-copied. You might want to use floats to update the dictionary of TLE data. Try for instance something like -->. PS: In any case, I opened #15, because the warning being thrown is currently not helpful for debugging (the actual error is instead related to the deepcopy, as I said, something like: )
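A minimal sketch of the underlying PyTorch behavior the maintainer is describing (this is plain torch, not dsgp4's code): tensors that are the result of an operation in a computation graph do not support `copy.deepcopy`, whereas `.item()` yields an ordinary Python float that copies without issue.

```python
import copy
import torch

# A leaf tensor (created directly by the user) can be deep-copied:
leaf = torch.tensor(1.0, requires_grad=True)
leaf_copy = copy.deepcopy(leaf)

# A non-leaf tensor (the output of an operation) cannot:
non_leaf = leaf * 2.0
try:
    copy.deepcopy(non_leaf)
except RuntimeError as err:
    print("deepcopy failed:", err)

# .item() returns a plain Python float, which deep-copies trivially,
# but note that it carries no gradient information:
value = non_leaf.item()
print(type(value))
```

This is why feeding graph-connected tensors into a structure that gets deep-copied internally fails, while plain floats work.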
Closing this, as the issue should have been resolved, and the new version is now available both in
Many thanks for the quick reply @Sceki, I will test it again with the latest version. I had a chance to test the suggestion, and while it gets the code to run, it doesn't propagate the gradient from the MLP output into the SGP4 model. Unfortunately, `.item()` strips the value of its data type, which should be a tensor with a gradient property. The loop can run, but it doesn't actually regress or learn anything without the gradient.
Hi @pearsonkyle.. this is not a bug but a matter of implementation :) I still have not uploaded the tutorial on how to do ML-dSGP4 training, but it will be published soon and will clarify some of your doubts. In any case, to get some inspiration on how to do this, you can have a look at the tutorial on Gradient Based Optimization (https://esa.github.io/dSGP4/notebooks/gradient_based_optimization.html) and the code (e.g. https://github.com/esa/dSGP4/blob/v1.0.2/dsgp4/newton_method.py#L113). I hope these help for now to get started and to have a way to keep track of the gradients while updating the TLE and propagating.
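To illustrate the general pattern the linked tutorial relies on (this is a hypothetical, torch-only sketch, not dsgp4's actual API): keep the quantity you want to optimize as a leaf tensor, build the prediction from it without calling `.item()`, and gradients will flow back through `backward()`.

```python
import torch

# Illustrative stand-in for a TLE element kept as a leaf tensor,
# so it stays attached to the autograd graph:
element = torch.tensor(3.8792e-05, requires_grad=True)

# Stand-in for propagation + loss (the real computation would be
# dsgp4's propagator; the names here are made up for the sketch):
predicted = element * 1000.0
loss = (predicted - 0.05) ** 2
loss.backward()

# The gradient reaches the leaf tensor. Had we converted with
# .item() along the way, the graph would have been cut and
# element.grad would stay None.
print(element.grad is not None)
```

The key point for the reported problem: conversion to a Python float anywhere between the MLP output and the loss severs the graph, which is why the loop runs but nothing is learned.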
Hello,
I'm running into some issues when trying to use this model with batches of TLEs. Below is a simplified implementation of the code giving me issues.
The line that throws an error is:
_, tle_batch = dsgp4.initialize_tle(tles, with_grad=True)
with the message: Warning: 1 TLEs were not initialized because they are in deep space. Deep space propagation is currently not supported.
even though I'm able to evaluate it prior to the MLP. Does it have something to do with the input tensors having gradients? Even though the TLE is staying constant, I'm trying to have the network predict the value at each time step in a very simple example.