```console
sudo apt-get update
sudo apt-get install -y pkg-config build-essential clang cmake
git clone https://github.com/thewh1teagle/sherpa-rs --recursive
cd sherpa-rs
cargo build --release
```
- Download and install CUDA
- Download Visual Studio with the Desktop development with C++ workload and the Clang component enabled (see the clang link below for an installer walkthrough)
- Run `where.exe clang`, then set `LIBCLANG_PATH` to the reported LLVM `bin` directory, e.g. `setx LIBCLANG_PATH "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\Llvm\x64\bin"` (adjust the path to match your installation)
- Restart your shell so the new environment variable takes effect
- Run `cargo build`
Convert audio to 16 kHz mono 16-bit PCM WAV:

```console
ffmpeg -i <file> -ar 16000 -ac 1 -c:a pcm_s16le <out>
```
To update the bundled sherpa-onnx sources:

```console
cd sys/sherpa-onnx
git pull origin master
```
When running an example that uses dynamic libraries (e.g. with the `directml` or `cuda` features), the DLLs from the `target` folder must be in your `PATH`. Example:

```console
cargo build --features "directml" --example transcribe
copy target\debug\examples\transcribe.exe target\debug
target\debug\transcribe.exe motivation.wav
```
Currently, Whisper can transcribe only chunks of at most 30 seconds.
When building with CUDA you should use CUDA 11.x. In addition, install cuDNN:

```console
sudo apt install nvidia-cudnn
```
To debug the build process of sherpa-onnx, set the `BUILD_DEBUG=1` environment variable before building.