Replies: 5 comments 13 replies
-
@neurlang I get the same error all of a sudden after 10k steps with a custom dataset in French on Python 3.8. It has worked perfectly so far with other datasets, also in French.
Did you find a solution? Another question suggests downgrading PyTorch, but I am not sure that would help, since it worked well with other (smaller) datasets. The only difference is that this one is longer (5k samples, 1 to 12 s each).
-
I got a similar problem here. If I remember correctly, this code:
was returning something like "1.12.1+cpu" no matter what was installed, so I force-installed the buggy version and then downgraded to a stable version that I found in SO answers.
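I can't reproduce the exact snippet that was referenced, but a check along these lines shows which build is actually installed; the "+cpu" suffix is what marks a CPU-only wheel:

```python
import torch

# A "+cpu" suffix means a CPU-only wheel; "+cu1xx" means a CUDA-enabled build.
print(torch.__version__)          # e.g. "1.12.1+cpu" or "1.12.1+cu116"
print(torch.version.cuda)         # CUDA version the wheel was built against, or None
print(torch.cuda.is_available())  # False on a CPU-only build
```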
-
I tried this method with success. Here are the detailed steps I applied (it sounds like voodoo):
It returned something with CUDA 10.2, which does not support my hardware (an Ampere-family card).
In summary: drivers updated to version 515, CUDA toolkit 11.7, and stable torch (1.12.1+cuda11.6). Now it works as expected.
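For reference, a minimal sketch of the check and reinstall I used, assuming the official PyTorch wheel index for CUDA 11.6 (adjust versions to your setup):

```python
import torch

# If this prints "10.2" on an Ampere (sm_86) card, the installed wheel was built
# against a CUDA release that predates that architecture.
print(torch.version.cuda)

# Reinstall a CUDA 11.6 build from the shell (not inside Python):
#   pip install torch==1.12.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
```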
-
It is in general a sign of unstable training, most often caused by bad samples in your dataset. https://tts.readthedocs.io/en/latest/what_makes_a_good_dataset.html
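Not from the linked page, just a rough sanity check along those lines: flag clips whose duration falls far outside the rest, since very short or very long samples are a common source of instability (the `wavs/` folder and the 1–12 s bounds are placeholders):

```python
import glob
import soundfile as sf

# Collect clip durations from a hypothetical wavs/ folder.
durations = {}
for path in glob.glob("wavs/*.wav"):
    info = sf.info(path)
    durations[path] = info.frames / info.samplerate

mean_dur = sum(durations.values()) / max(len(durations), 1)
for path, dur in sorted(durations.items(), key=lambda kv: kv[1]):
    # Flag suspiciously short/long clips and extreme outliers relative to the mean.
    if dur < 1.0 or dur > 12.0 or dur > 3 * mean_dur:
        print(f"check {path}: {dur:.2f}s")
```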
-
Same problem on a Slovak single-word dataset.
-
On my English dataset I have a problem with rational_quadratic_spline: the input tensor becomes empty.
If I work around it, many losses become NaN.
It must be a problem with the dataset, because I've trained other languages fine. What could the problem be?
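Not specific to Coqui's spline code, but a generic sketch for localizing this: turn on autograd anomaly detection and guard the tensors right before the spline call (`check_tensor` is a placeholder helper; wire it in wherever the empty input shows up):

```python
import torch

torch.autograd.set_detect_anomaly(True)  # reports the op that first produced NaN/Inf

def check_tensor(name, t):
    # Empty inputs and NaN/Inf values here are the usual precursors of the
    # losses turning NaN a few steps later.
    if t.numel() == 0:
        raise RuntimeError(f"{name} is empty: shape={tuple(t.shape)}")
    if not torch.isfinite(t).all():
        raise RuntimeError(f"{name} contains NaN/Inf")
```

Raising there instead of working around the empty tensor usually points at the offending batch, and from there at the offending samples.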