JarodMica/audiobook_maker

Audiobook Maker v3

This application utilizes open-source deep-learning text-to-speech and speech-to-speech models to create audiobooks. The main goal of the project is to be able to seamlessly create high-quality audiobooks by using these advancements in machine learning/AI.

It's designed for Windows, but PySide6 should be able to run on Linux as well.

Features

✔️ Multi-speaker generation, allowing you to change which speaker reads each sentence

✔️ Audio playback of individually generated sentences, or play back everything to listen as it generates

✔️ Stopping during generation and resuming later from where you left off

✔️ Bulk sentence regeneration and editing, in case you want to regenerate audio for a sentence or change which speaker is used for it

✔️ Reloading previous audiobooks and exporting audiobooks

✔️ Sentence remapping in case you need to update the original text file that was used for generation

✔️ Integration with popular open-source models like TortoiseTTS, RVC, StyleTTS, F5-TTS, and XTTS (to be added)

What changed from v1 and v2?

As a user?

The biggest changes are the ability to use multiple speakers, regenerate in bulk, and stop during generation. Otherwise, it does pretty much the same things as before.

As a developer?

A lot. Pretty much the entire codebase was rewritten with the sole goal of making it more maintainable and more modular. This can be summarized in two points:

  1. The most important: Completely removed any hardcoded parameters that referenced any TTS or S2S engine (tortoise/rvc)

    This makes it a (relative) breeze to add any new TTS or S2S engine. You simply create a configuration for that engine in the configs folder (all widgets in the GUI are created and handled dynamically), define a loading and generation procedure in the s2s or tts engine's Python file, and it will work with little to no issue. I designed it so that as long as the engine returns an audio_path back to model.py, it will integrate just fine. I'll be writing documentation on how to do this so that I don't forget in the future, but it might also be useful for anyone who wants to fork this repo and build on it.

  2. Moved over to MVC

    Point 1 wouldn't be as smooth without this. The previous implementation was heavily coupled together in one, ginormous class and that was getting too cramped and too messy to keep up with. So I moved over to something closer to an MVC framework and separated out the gui into view.py, the "brain" and logic into the controller.py, and all of the functional code into the model.py. Still messy, but not as messy as it would've been if I didn't switch over.
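To make point 1 concrete, here is a minimal sketch of the kind of engine contract described above. The function names and config keys are hypothetical, not the repo's actual API; the one detail taken from the description is that generation hands an audio_path back to model.py.

```python
import os

# Hypothetical sketch of a new engine's loading and generation procedure.
# Names (load_engine, generate_audio) and config keys are illustrative only;
# the real constraint from the description above is that generation
# returns an audio_path for model.py to consume.

def load_engine(config: dict) -> dict:
    """Load an engine from a configuration (as read from the configs folder)."""
    return {
        "voice": config.get("voice", "default"),
        "model_path": config.get("model_path"),
    }

def generate_audio(engine: dict, sentence: str, output_dir: str) -> str:
    """Synthesize one sentence and return the audio path model.py expects."""
    audio_path = os.path.join(output_dir, f"{abs(hash(sentence))}.wav")
    # ... a real engine would write the synthesized audio to audio_path here ...
    return audio_path
```

Because the GUI widgets are generated from the engine's config, a function pair like this is all the engine-specific code that would be needed.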

A minor change as well was the migration from PyQt5 to PySide6, but that wasn't too big of an issue: small peculiar issues here and there, but nothing groundbreaking.

I have decided NOT to use Gradio for this. The biggest reason is that the previous versions were done in PyQt5. Another is my concern about limitations on customizability: I've done a fair share of work in Gradio, and I don't think the way I want the audiobook maker to look and feel would be easily achievable with it. The last reason is that I don't want a web interface or a local web server to be launched (some users might run into issues with this). However, because I'm not using Gradio, this also cannot be used on a cloud computer, so you will need all the hardware locally on your own machine.

Windows Package Installation

Available for Youtube Channel Members at the Supporter (Package) level: https://www.youtube.com/channel/UCwNdsF7ZXOlrTKhSoGJPnlQ/join or via purchase here: https://buymeacoffee.com/jarodsjourney/extras

Pre-requisites

  1. Download the zip file provided to you on the members community tab.
  2. Unzip the folder
  3. To get StyleTTS, double-click and run finish_styletts_install.bat
  4. Run the start.bat file

And that's it! (maybe)

For F5-TTS, an additional download will occur the first time you use it, because the pretrained base model is licensed CC-BY-NC-4.0.

Manual Installation Windows 10/11

Pre-requisites

GUI Installation

  1. Clone the repository and cd into it.
    git clone https://github.com/JarodMica/audiobook_maker.git
    cd audiobook_maker
    
  2. Create a venv in python 3.11 and then activate it. If you can't activate the python venv due to restricted permissions: https://superuser.com/questions/106360/how-to-enable-execution-of-powershell-scripts
    py -3.11 -m venv venv
    .\venv\Scripts\activate
    
  3. Install basic requirements to get the GUI opening
    pip install -r .\requirements.txt
    
  4. Pull submodules
    git submodule init
    git submodule update
    
  5. Launch the interface
    python .\src\controller.py
    
  6. (Optional) I recommend you create a batch script to launch the GUI instead of doing it manually each time. Open Notepad, throw the code block below into it, and save it as start.bat. Make sure your file extensions are showing so that it's not saved as start.bat.txt
    call venv\Scripts\activate
    python src\controller.py
    

Congrats, the GUI can be launched! You may see errors in the terminal such as Tortoise not installed or RVC not installed

If you use it like this, you will only be able to use pyttsx3. To install additional engines, refer to the sections below; I recommend you install all of them.

Text-to-Speech Engines

TortoiseTTS Installation

  1. Make sure your venv is still activated, if not, activate it, then pull the repo to update if you are updating an older install:
    .\venv\Scripts\activate
    
  2. Change directory to tortoise submodule, then pull its submodules:
    cd .\modules\tortoise_tts_api\
    git submodule init
    git submodule update
    
  3. Install the submodules:
    pip install modules\tortoise_tts
    pip install modules\dlas
    
  4. Install the tortoise tts api repo, then cd back to root:
    pip install .
    cd ..\..
    
  5. Ensure you have PyTorch installed with CUDA enabled (see Check Torch Install)

StyleTTS 2 Installation

  1. Make sure your venv is still activated, if not, activate it, then pull the repo to update if you are updating an older install:

    .\venv\Scripts\activate
    
  2. Change directory to styletts submodule, then pull its submodules:

    cd .\modules\styletts-api\
    git submodule init
    git submodule update
    
  3. Install the submodules:

    pip install modules\StyleTTS2
    
  4. Install the styletts api repo, then cd back to root:

    pip install .
    cd ..\..
    
  5. Install monotonic align with the precompiled wheel that I've built here; put it in the repo root and run the command below. It will NOT work if you're using a different version of Python:

    pip install monotonic_align-1.2-cp311-cp311-win_amd64.whl
    
    • Alternatively, if you are running a different Python version, you will need Microsoft C++ Build Tools to install it yourself: https://visualstudio.microsoft.com/downloads/?q=build+tools
      pip install git+https://github.com/resemble-ai/monotonic_align.git@78b985be210a03d08bc3acc01c4df0442105366f
      
  6. Get eSpeak-NG files and base STTS2 model by running the finish_styletts_install.bat:

    .\finish_styletts_install.bat
    
    • Alternatively, install eSpeak-NG onto your computer. Head over to https://github.com/espeak-ng/espeak-ng/releases and select the espeak-ng-X64.msi from the assets dropdown. Download, run, and follow the prompts to set it up on your device. As of this write-up, it'll be at the bottom of the 1.51 release on the GitHub releases page
      • You will also need to add the following to your environment variables:
      PHONEMIZER_ESPEAK_LIBRARY="C:\Program Files\eSpeak NG\libespeak-ng.dll"
      PHONEMIZER_ESPEAK_PATH="C:\Program Files\eSpeak NG"
      
  7. Ensure you have PyTorch installed with CUDA enabled (see Check Torch Install)

F5-TTS Installation

  1. Make sure your venv is still activated, if not, activate it, then pull the repo to update if you are updating an older install:
    .\venv\Scripts\activate
    
  2. Install the F5-TTS submodule as a package:
    pip install .\modules\F5-TTS
    
  3. Ensure you have PyTorch installed with CUDA enabled (see Check Torch Install)

Speech-to-Speech Engines

RVC Installation

  1. Make sure your venv is still activated, if not, activate it, then pull the repo to update if you are updating an older install:

    .\venv\Scripts\activate
    
  2. Install fairseq as a wheel file. Download it from this link https://huggingface.co/Jmica/rvc/resolve/main/fairseq-0.12.4-cp311-cp311-win_amd64.whl?download=true and place it in the audiobook_maker folder:

    pip install .\fairseq-0.12.4-cp311-cp311-win_amd64.whl
    

    It's done this way due to issues with fairseq on Python 3.11 and above, so I've compiled a wheel file for you to use. You can delete it afterwards if you want.

  3. Install the rvc-python library:

    pip install git+https://github.com/JarodMica/rvc-python
    
  4. Ensure you have PyTorch installed with CUDA enabled (see Check Torch Install)

Check Torch Install

Sometimes, torch may be reinstalled by other dependencies, so we want to be sure we're on the right version.

Check torch version:

pip show torch

As long as the version shows 2.4.0+cu121, you should be fine. If not, follow the steps below.

The F5-TTS update moves torch to 2.4.0 instead of the previously required 2.3.1 (2.4.0 had previously broken whisper, but we don't need that here):

pip uninstall torch -y
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121

Torch is a pretty large download, so it may take a bit of time. Once you have it installed here, it should be fine for the rest of the install. However, newer versions of torch may sometimes uninstall the one we just installed, so you may need to uninstall and reinstall after each engine to make sure you have the correct version. After the first install, it will have been cached, so you won't have to wait each time afterwards.
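If you'd rather sanity-check the version string programmatically than eyeball the pip show torch output, a tiny helper like this works. The function is just an illustration, not part of the repo:

```python
def torch_version_ok(version: str, required: str = "2.4.0", cuda_tag: str = "cu121") -> bool:
    """Check a `pip show torch` Version string (e.g. '2.4.0+cu121')
    against the required base version and CUDA build tag."""
    base, _, local = version.partition("+")
    return base == required and local == cuda_tag

print(torch_version_ok("2.4.0+cu121"))  # True: correct CUDA build
print(torch_version_ok("2.3.1+cu121"))  # False: wrong torch version
print(torch_version_ok("2.4.0"))        # False: CPU-only build, no CUDA tag
```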

Updating the Package

If there are updates to the Audiobook Maker, you may need to pull new files from the source repo in order to gain access to new functionality.

  1. Open up a terminal in the Audiobook Maker folder (if one isn't open already) and run:
    git pull
    git submodule update
    

If you run into issues where you can't pull the updates, you may have made edits to the codebase. In this case, you will need to stash your changes so that you can pull. I won't go over how to reapply custom mods, as that dives into git conflicts, etc.

git stash
git pull
git submodule update

Usage

To be written

Acknowledgements

This has been put together using a variety of open-source models and libraries. Wouldn't have been possible without them.

TTS Engines:

S2S Engines:

Licensing

Each engine used here is MIT or Apache-2.0. However, base pretrained models may have their own licenses or use limitations, so please be aware of that depending on your use case. I am not a lawyer, so I will just state what the licenses are.

StyleTTS 2

The pretrained model states:

Before using these pre-trained models, you agree to inform the listeners that the speech samples are synthesized by the pre-trained models, unless you have the permission to use the voice you synthesize. That is, you agree to only use voices whose speakers grant the permission to have their voice cloned, either directly or by license before making synthesized voices public, or you have to publicly announce that these voices are synthesized if you do not have the permission to use these voices.

F5 TTS

The pretrained base model was trained on the Emilia dataset, so it is licensed CC-BY-NC-4.0 (non-commercial).