* Add CI
* readme
* Update readme
* Fix readme

Co-authored-by: Joshua Meyer <[email protected]>

Commit da7af7a (1 parent: 29c4a06), showing 2 changed files with 89 additions and 36 deletions.

# XTTS streaming server

## 1) Run the server

### Use a pre-built image

CUDA 12.1 (for newer cards):

```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
```

CUDA 11.8 (for older cards):

```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```

CPU (not recommended):

```bash
$ docker run -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu
```
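
If the GPU images fail to start, it is worth checking that Docker can see your GPU at all. A quick sanity check (the CUDA base image tag below is only an example, not something this project ships):

```bash
# Should print your GPU(s); if it fails, fix your NVIDIA Container Toolkit setup first.
$ docker run --rm --gpus=all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```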

Run with a fine-tuned model:

Make sure the model folder `/path/to/model/folder` contains the following files:
- `config.json`
- `model.pth`
- `vocab.json`

```bash
$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
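
For example, a correctly prepared model folder (the path is only illustrative) looks like this before it is mounted into the container:

```bash
$ ls /path/to/model/folder
config.json  model.pth  vocab.json
```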

Setting the `COQUI_TOS_AGREED` environment variable to `1` indicates you have read and agreed to the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models are also under the [CPML license](https://coqui.ai/cpml).)

### Build the image yourself

To build the image yourself (the default `Dockerfile` targets PyTorch 2.1 and CUDA 11.8), run the commands below. `DOCKERFILE` may be `Dockerfile`, `Dockerfile.cpu`, `Dockerfile.cuda121`, or your own custom Dockerfile:

```bash
$ git clone [email protected]:coqui-ai/xtts-streaming-server.git
$ cd xtts-streaming-server/server
$ docker build -t xtts-stream . -f DOCKERFILE
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
```
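
For example, to build and run the CUDA 12.1 variant instead of the default:

```bash
$ docker build -t xtts-stream . -f Dockerfile.cuda121
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
```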

## 2) Test the running server

Once your Docker container is running, you can check that it is working properly. Run the following commands from a fresh terminal.

### Clone `xtts-streaming-server` if you haven't already

```bash
$ git clone [email protected]:coqui-ai/xtts-streaming-server.git
```
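
Before running the clients below, you can optionally confirm that the server is reachable on the mapped port. This is only a rough liveness check (the exact endpoint paths are not documented here); any HTTP status code in the response shows the container is listening:

```bash
# Prints an HTTP status code if the server is listening on localhost:8000.
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000
```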

### Using the Gradio demo

```bash
$ cd xtts-streaming-server
$ python -m pip install -r test/requirements.txt
$ python demo.py
```

### Using the test script

```bash
$ cd xtts-streaming-server/test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
```