Replies: 1 comment 1 reply
-
Please update the trainer and try again.
-
Hi,
When training a vocoder model with the command python3 recipes/ljspeech/hifigan/train_hifigan.py, or when running any other vocoder training script, this error occurs:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/trainer/trainer.py", line 1686, in fit
self._fit()
File "/usr/local/lib/python3.8/dist-packages/trainer/trainer.py", line 1639, in _fit
self.train_epoch()
File "/usr/local/lib/python3.8/dist-packages/trainer/trainer.py", line 1393, in train_epoch
_, _ = self.train_step(batch, batch_num_steps, cur_step, loader_start_time)
File "/usr/local/lib/python3.8/dist-packages/trainer/trainer.py", line 1239, in train_step
outputs, loss_dict_new, step_time = self.optimize(
File "/usr/local/lib/python3.8/dist-packages/trainer/trainer.py", line 1099, in optimize
loss_dict["loss"].backward()
File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I searched a bit and assume I need to set "requires_grad = True" somewhere, but I am still confused.
Any ideas on this?
Thank you!
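For what the error itself means: backward() raises it when the loss tensor has no autograd history, i.e. nothing in the graph that produced it had requires_grad set. In a training framework like the trainer package the fix is usually updating the library rather than setting requires_grad yourself, but here is a minimal standalone PyTorch sketch (unrelated to the TTS recipes) that reproduces the error and shows how requires_grad changes the behavior:

```python
import torch

# A tensor created without requires_grad is a graph leaf with no history,
# so a loss computed from it has no grad_fn and backward() fails.
x = torch.ones(3)
loss = x.sum()
try:
    loss.backward()
except RuntimeError as e:
    # "element 0 of tensors does not require grad and does not have a grad_fn"
    print("error:", e)

# With requires_grad=True, autograd records the ops and backward() works.
x = torch.ones(3, requires_grad=True)
loss = x.sum()
loss.backward()
print(x.grad)  # -> tensor([1., 1., 1.])
```

In the HiFiGAN recipe the model parameters already have requires_grad=True, so if the loss still lacks a grad_fn, something upstream (e.g. the trainer's optimize step under a version-mismatched torch) is detaching it, which is why the suggested fix is updating the trainer rather than patching the loss.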