# Building

## Prerequisites

Cargo | Clang | CMake

### Linux

```shell
sudo apt-get update
sudo apt-get install -y pkg-config build-essential clang cmake
```
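To confirm the prerequisites are available, a quick check can be run first (a sketch assuming a POSIX shell; not part of the repository):

```shell
# Report whether each required tool is on PATH (illustrative check only).
for tool in cargo clang cmake pkg-config; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing"
  fi
done
```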

## Prepare repository

```shell
git clone https://github.com/thewh1teagle/sherpa-rs --recursive
cd sherpa-rs
```

## Build

```shell
cargo build --release
```

## Instructions (for builds with CUDA enabled)

1. Install CUDA.
2. Install Visual Studio with the Desktop C++ workload and the Clang component enabled (see the Clang link below for an installer walkthrough).
3. Run `where.exe clang`, then point `LIBCLANG_PATH` at the reported `bin` directory, e.g. `setx LIBCLANG_PATH "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\Llvm\x64\bin"` (adjust the path to match your installation).
4. Restart your shell so the new environment variable takes effect.
5. Run `cargo build`.

## Resample WAV file to 16 kHz

```shell
ffmpeg -i <file> -ar 16000 -ac 1 -c:a pcm_s16le <out>
```
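ffmpeg is the recommended route. For a rough idea of what the sample-rate conversion does, here is a naive linear-interpolation resampler in Rust (illustrative only, not part of sherpa-rs; real resamplers such as ffmpeg's apply proper filtering to avoid aliasing):

```rust
// Naive linear-interpolation resampler for mono f32 samples (sketch).
fn resample_linear(input: &[f32], from_hz: u32, to_hz: u32) -> Vec<f32> {
    if input.is_empty() || from_hz == to_hz {
        return input.to_vec();
    }
    // Step through the input at `ratio` samples per output sample.
    let ratio = from_hz as f64 / to_hz as f64;
    let out_len = (input.len() as f64 / ratio).floor() as usize;
    let mut out = Vec::with_capacity(out_len);
    for i in 0..out_len {
        let pos = i as f64 * ratio;
        let idx = pos as usize;
        let frac = pos - idx as f64;
        let a = input[idx];
        let b = input[(idx + 1).min(input.len() - 1)];
        // Interpolate between the two nearest input samples.
        out.push(a + (b - a) * frac as f32);
    }
    out
}

fn main() {
    // One second of a 48 kHz ramp downsampled to 16 kHz.
    let input: Vec<f32> = (0..48_000).map(|i| i as f32 / 48_000.0).collect();
    let out = resample_linear(&input, 48_000, 16_000);
    println!("{} samples", out.len()); // 16000 samples
}
```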

## Update sherpa-onnx

```shell
cd sys/sherpa-onnx
git pull origin master
```

## Gotchas

When running an example (`--example`) that uses dynamic libraries, e.g. with the `directml` or `cuda` features, you need the DLLs from the target folder in `PATH`. Example:

```shell
cargo build --features "directml" --example transcribe
copy target\debug\examples\transcribe.exe target\debug
target\debug\transcribe.exe motivation.wav
```

Currently Whisper can transcribe only chunks of at most 30 seconds.
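One workaround for longer audio is to split the sample buffer into chunks of at most 30 seconds and transcribe each chunk separately. A minimal sketch (`chunk_samples` is a hypothetical helper for illustration, not part of the sherpa-rs API):

```rust
// Split a sample buffer into chunks of at most `max_secs` seconds each
// (sketch for working around the 30-second Whisper limit).
fn chunk_samples(samples: &[f32], sample_rate: usize, max_secs: usize) -> Vec<&[f32]> {
    let chunk_len = sample_rate * max_secs;
    samples.chunks(chunk_len).collect()
}

fn main() {
    // 75 seconds of silence at 16 kHz splits into 30 s + 30 s + 15 s.
    let samples = vec![0.0f32; 16_000 * 75];
    let chunks = chunk_samples(&samples, 16_000, 30);
    println!("{} chunks", chunks.len()); // 3 chunks
}
```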


When building with CUDA you should use CUDA 11.x. In addition, install cuDNN with `sudo apt install nvidia-cudnn`.

## Debug build

To debug the build process of sherpa-onnx, set the `BUILD_DEBUG=1` environment variable before building (e.g. `BUILD_DEBUG=1 cargo build`).