Replies: 4 comments 7 replies
-
Did you validate that PyTorch is actually installed with CUDA support?
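A quick way to check, sketched below. It assumes only the standard torch API; the `cuda_torch_status` helper name is mine, and the guard lets it run even where torch isn't installed:

```python
import importlib.util

def cuda_torch_status():
    """Return a short status string describing the local PyTorch install."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if torch.cuda.is_available():
        # a CUDA build reports a version like "2.1.0+cu118"
        return f"torch {torch.__version__} with CUDA"
    return f"torch {torch.__version__} CPU-only"

print(cuda_torch_status())
```

If this prints "CPU-only" even though CUDA is installed system-wide, the torch wheel itself was built without CUDA and needs to be reinstalled from the matching `+cuXXX` index.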
-
I have somewhat the same issue, except my CUDA is correctly installed. I've even replaced the default torch and torchaudio with torch+cuda118, as you can see in this text from my last try:

(env_CrearVoz) [path/to/tts]\TTS>pip list ...

Torch is running with CUDA:

(env_CrearVoz) [path/to/tts]\TTS>set CUDA_VISIBLE_DEVICES="0"
(env_CrearVoz) [path/to/tts]\TTS>python .\recipes\ljspeech\vits_tts\train_vits.py --proxy http://[ProxyIP]:[PORT]

At this point, training starts, I guess. I get the stft UserWarning on return_complex, but since I'm using Python 3.9 it doesn't stop the training, although running on CPUs is unfeasibly slow. The machine is a Windows Server 2022 (Build 20348.1607), processor: AMD EPYC 7313 16-Core 3 GHz with 50 GB RAM.
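One possible culprit worth checking (my assumption, not something confirmed in the thread): on Windows cmd, `set CUDA_VISIBLE_DEVICES="0"` stores the quotes as part of the value, and CUDA treats the quoted string as an invalid device id, which hides every GPU and forces a CPU fallback. A minimal sketch of the difference:

```python
import os

# What the quoted cmd command `set CUDA_VISIBLE_DEVICES="0"` produces:
# the value includes the literal quote characters.
os.environ["CUDA_VISIBLE_DEVICES"] = '"0"'
print(repr(os.environ["CUDA_VISIBLE_DEVICES"]))  # '"0"' -- not a valid device id

# Correct form, `set CUDA_VISIBLE_DEVICES=0` (no quotes):
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(repr(os.environ["CUDA_VISIBLE_DEVICES"]))  # '0'
```

Dropping the quotes from the `set` command would be a cheap thing to try before anything else.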
-
Following up to save future readers some time: if you're running TTS from code, there's a boolean flag hidden in the preset model loader. The gpu=True flag was needed to enable hardware acceleration (after CUDA is installed, that is).
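For reference, a minimal sketch of what that looks like. It assumes the `TTS.api.TTS` entry point and the `tts_models/en/ljspeech/vits` preset name; check both against your installed version. The `load_tts` helper is mine and degrades gracefully when the package isn't present:

```python
import importlib.util

def load_tts(model_name="tts_models/en/ljspeech/vits", use_gpu=True):
    """Load a preset Coqui model; return None if the TTS package is absent."""
    if importlib.util.find_spec("TTS") is None:
        return None
    from TTS.api import TTS
    # gpu=True is the boolean that switches synthesis onto the CUDA device
    return TTS(model_name=model_name, gpu=use_gpu)

tts = load_tts()
if tts is None:
    print("coqui-tts is not installed in this environment")
else:
    tts.tts_to_file(text="Hello world", file_path="out.wav")
```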
-
If anyone stumbles on this issue while using the
-
Hi, sorry for the noob question.
I installed coqui-tts on my Jetson Xavier and trained the first model with the LJSpeech dataset, just as the Coqui TTS documentation instructs. However, when I run the training script with CUDA_VISIBLE_DEVICES=0 python3 train.py (the script is exactly the same as https://tts.readthedocs.io/en/latest/tutorial_for_nervous_beginners.html), the stdout shows my training environment is automatically set to CPU. Using the --use_cuda true flag didn't help.
Could anyone help me with this?
(coqui-TTS-venv-python3.7) user@pape:/data/TTS$ CUDA_VISIBLE_DEVICES=0 python3 train.py
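In case it helps: on aarch64 boards like the Xavier, the torch wheel that plain pip installs is typically CPU-only, and CUDA-enabled builds come from NVIDIA's Jetson wheels. A small diagnostic sketch (the `diagnose` helper is my own, not part of any library):

```python
import importlib.util
import platform

def diagnose():
    """Collect hints about why training might fall back to CPU."""
    notes = []
    if platform.machine() == "aarch64":
        notes.append("aarch64 host: plain-pip torch wheels are usually CPU-only; "
                     "use NVIDIA's Jetson PyTorch wheel instead")
    if importlib.util.find_spec("torch") is None:
        notes.append("torch is not installed")
    else:
        import torch
        notes.append(f"torch.cuda.is_available() = {torch.cuda.is_available()}")
    return notes

for note in diagnose():
    print(note)
```

If `torch.cuda.is_available()` prints False here, no `--use_cuda` or `CUDA_VISIBLE_DEVICES` setting can help; the torch build itself has to be replaced first.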