I get the following error while running the training command from the README:
"python run.py --graph_size 50 --problem mtsp --run_name 'mtsp50' --agent_min 2 --agent_max 10"
assert not torch.isnan(log_p).any()
AssertionError
in the _get_log_p method
As far as I can tell, all of the log_p values are NaN, which is why the assertion fails. Could you please help investigate and fix this issue?
Thanks
I hope you are doing well.
Your solution looks really nice and I would like to use it. Could you please investigate and fix the issue above so that I can train the model myself?
When I trained the model with the same script, the error did not appear until after the first epoch had finished. I suspect it is caused by numerical instability in the log softmax: if the values fed into the log softmax become too large, the operation can return NaN values. Please try adding clipping before the log softmax and let me know if you still see the same problem.
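For reference, here is a minimal sketch of what clipping before the log softmax could look like. The function name stable_log_p, the clip_value of 10.0, and the mask argument are illustrative assumptions rather than code from this repository; the idea is simply to bound the compatibility scores (e.g. with tanh clipping) so that log_softmax cannot overflow and produce NaNs.

```python
import torch
import torch.nn.functional as F

def stable_log_p(logits, mask=None, clip_value=10.0):
    """Sketch of logit clipping before log softmax (names and values are hypothetical)."""
    # Bound the raw scores to [-clip_value, clip_value] with tanh clipping,
    # so the exponentials inside log_softmax stay in a safe range.
    logits = clip_value * torch.tanh(logits)

    # Infeasible actions (if a boolean mask is provided) get -inf,
    # which maps to zero probability after the softmax.
    if mask is not None:
        logits = logits.masked_fill(mask, float('-inf'))

    log_p = F.log_softmax(logits, dim=-1)

    # Mirrors the assertion in _get_log_p: fail fast if NaNs still appear
    # (for example, when every action in a row is masked out).
    assert not torch.isnan(log_p).any()
    return log_p
```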
Hi @Leaveson, @hyeonahkimm, I am running into the same issue described above with the same training command. Could you please look into it? Thanks.