Next-generation TTS model using flow-matching and DiT, inspired by Stable Diffusion 3.

As the first open-source TTS model to combine flow-matching and DiT, StableTTS is a fast and lightweight TTS model for Chinese, English, and Japanese speech generation. It has 31M parameters.
✨ Huggingface demo: 🤗
2024/10: A new autoregressive TTS model is coming soon...
2024/9: 🚀 StableTTS V1.1 released ⭐ Audio quality is greatly improved ⭐
⭐ V1.1 Release Highlights:
- Fixed critical issues that caused audio quality to be much lower than expected (mainly in the mel spectrogram and attention mask).
- Introduced U-Net-like long skip connections to the DiT in the flow-matching decoder.
- Adopted the cosine timestep scheduler from CosyVoice.
- Added support for CFG (Classifier-Free Guidance).
- Added support for the FireflyGAN vocoder.
- Switched to torchdiffeq for ODE solvers.
- Improved the Chinese text frontend (partially based on gpt-sovits2).
- Multilingual support (Chinese, English, Japanese) in a single checkpoint.
- Increased parameters: 10M -> 31M.
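Two of the highlights above, CFG and the cosine timestep scheduler, interact at sampling time. Below is a minimal sketch (not StableTTS's actual code) of how classifier-free guidance can be applied inside a flow-matching Euler sampler; the guidance scale, the exact cosine schedule, and the toy model are all assumptions for illustration. The repo itself uses torchdiffeq's ODE solvers instead of the hand-rolled Euler loop shown here.

```python
import torch

def cfg_velocity(model, x, t, cond, guidance_scale=3.0):
    # Classifier-free guidance: blend conditional and unconditional velocities
    v_cond = model(x, t, cond)
    v_uncond = model(x, t, None)
    return v_uncond + guidance_scale * (v_cond - v_uncond)

def euler_sample(model, shape, cond, steps=8, guidance_scale=3.0):
    # Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data) with Euler steps
    x = torch.randn(shape)
    ts = torch.linspace(0, 1, steps + 1)
    # A cosine timestep schedule (assumed form): non-uniform spacing of steps
    ts = 1 - torch.cos(ts * torch.pi / 2)
    for i in range(steps):
        t0, t1 = ts[i], ts[i + 1]
        v = cfg_velocity(model, x, t0.expand(shape[0]), cond, guidance_scale)
        x = x + (t1 - t0) * v
    return x

# Toy "model": velocity pulls x toward a target that depends on the condition
def toy_model(x, t, cond):
    target = 1.0 if cond is not None else 0.0
    return target - x

mel = euler_sample(toy_model, (2, 80, 100), cond="speaker")
print(mel.shape)  # torch.Size([2, 80, 100])
```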
Download the model and place it in the `./checkpoints` directory; it is then ready for inference, finetuning, and the webui.
| Model Name | Task Details | Dataset | Download Link |
|---|---|---|---|
| StableTTS | text to mel | 600 hours | 🤗 |
Choose a vocoder (vocos or firefly-gan) and place it in the `./vocoders/pretrained` directory.
| Model Name | Task Details | Dataset | Download Link |
|---|---|---|---|
| Vocos | mel to wav | 2k hours | 🤗 |
| firefly-gan-base | mel to wav | HiFi-16kh | download from fishaudio |
- Install PyTorch: Follow the official PyTorch guide to install pytorch and torchaudio. We recommend the latest version (tested with PyTorch 2.4 and Python 3.12).
- Install dependencies: Run the following command to install the required Python packages:

```shell
pip install -r requirements.txt
```
For detailed inference instructions, please refer to `inference.ipynb`. We also provide a webui based on Gradio; please refer to `webui.py`.
StableTTS is designed to be trained easily. Only text and audio pairs are needed, without any speaker IDs or extra feature extraction. Here's how to get started:
1. Generate text and audio pairs: Create a text and audio pair filelist like `./filelists/example.txt`. Recipes for some open-source datasets can be found in `./recipes`.
2. Run preprocessing: Adjust the `DataConfig` in `preprocess.py` to set your input and output paths, then run the script. This will process the audio and text according to your filelist, outputting a JSON file with paths to mel features and phonemes. Note: Process multilingual data separately by changing the `language` setting in `DataConfig`.
3. Adjust training configuration: In `config.py`, modify `TrainConfig` to set your filelist path and adjust training parameters (such as `batch_size`) as needed.
4. Start the training process: Launch `train.py` to start training your model. Note: For finetuning, download the pretrained model and place it in the `model_save_path` directory specified in `TrainConfig`. The training script will automatically detect and load the pretrained checkpoint.
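The exact filelist layout is defined by `./filelists/example.txt` in the repo. As a rough sketch only, assuming a common `audio_path|transcript` line format (an assumption, not the repo's confirmed format), a filelist could be built and parsed like this:

```python
# Hypothetical filelist layout: one "audio_path|transcript" pair per line.
# Check ./filelists/example.txt for the actual format used by StableTTS.
lines = [
    "data/wavs/0001.wav|Hello world.",
    "data/wavs/0002.wav|Flow matching makes TTS fast.",
]

# Parse the filelist back into (audio_path, text) pairs, skipping malformed lines
pairs = []
for line in lines:
    parts = line.strip().split("|", 1)
    if len(parts) == 2:
        pairs.append((parts[0], parts[1]))

print(pairs[0])  # ('data/wavs/0001.wav', 'Hello world.')
```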
The `./vocoder/vocos` folder contains the training and finetuning code for the vocos vocoder.

For other types of vocoders, we recommend training with the fishaudio vocoder: a uniform interface for developing various vocoders. We use the same spectrogram transform, so vocoders trained with it are compatible with StableTTS.
- We use the Diffusion Convolution Transformer block from HierSpeech++, a combination of the original DiT and the FFT (Feed-Forward Transformer from FastSpeech) blocks, for better prosody.
- In the flow-matching decoder, we add a FiLM layer before the DiT block to condition the timestep embedding into the model.
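FiLM (Feature-wise Linear Modulation) conditions features by predicting a per-channel scale and shift from a conditioning vector, here the timestep embedding. The following is a minimal sketch of the general technique, with dimensions and placement chosen for illustration, not StableTTS's exact implementation:

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scales and shifts features using
    parameters predicted from a conditioning vector (e.g. a timestep
    embedding). Dimensions here are illustrative assumptions."""

    def __init__(self, feat_dim: int, cond_dim: int):
        super().__init__()
        # One linear layer predicts both the scale and the shift
        self.proj = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim); cond: (batch, cond_dim)
        scale, shift = self.proj(cond).chunk(2, dim=-1)
        # Broadcast over the time axis; 1 + scale keeps identity at init bias
        return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

film = FiLM(feat_dim=256, cond_dim=128)
x = torch.randn(2, 50, 256)       # a batch of feature sequences
t_emb = torch.randn(2, 128)       # timestep embeddings
out = film(x, t_emb)
print(out.shape)  # torch.Size([2, 50, 256])
```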
The development of our models relies heavily on insights and code from various projects. We express our heartfelt thanks to the creators of the following:

- Matcha TTS: essential flow-matching code.
- Grad TTS: diffusion model structure.
- Stable Diffusion 3: the idea of combining flow-matching and DiT.
- Vits: code style, MAS insights, and DistributedBucketSampler.
- plowtts-pytorch: MAS code used in training.
- Bert-VITS2: numba version of MAS and modern PyTorch implementation of Vits.
- fish-speech: dataclass usage, mel-spectrogram transforms using torchaudio, and the Gradio webui.
- gpt-sovits: mel-style encoder for voice cloning.
- coqui xtts: Gradio webui.
- Chinese Dictionary of DiffSinger: Multi-langs_Dictionary and atonyxu's fork.
- Release pretrained models.
- Support Japanese language.
- User-friendly preprocessing and inference scripts.
- Enhance documentation and citations.
- Release multilingual checkpoint.
Any organization or individual is prohibited from using any technology in this repository to generate or edit anyone's speech without their consent, including but not limited to government leaders, political figures, and celebrities. Failure to comply with this item may put you in violation of copyright laws.