WolframRhodium edited this page Jan 20, 2022 · 41 revisions

Welcome to the vs-mlrt wiki!

The goal of the project is to provide highly optimized AI inference runtimes for VapourSynth.

Runtimes

  • vs-ov: OpenVINO-based pure CPU AI inference runtime
  • vs-ort: ONNX Runtime-based CPU/CUDA AI inference runtime
  • vs-trt: TensorRT-based CUDA AI inference runtime
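Since the three runtimes cover different hardware (pure CPU, CPU/CUDA, and CUDA only), a script typically picks the fastest one that is actually available. The sketch below is a hypothetical helper illustrating that fallback order; the `pick_backend` function and the commented plugin calls are assumptions for illustration, not part of the vs-mlrt API — see the individual wiki pages for the real interfaces.

```python
# Hypothetical helper: choose the fastest available vs-mlrt runtime,
# preferring vs-trt (CUDA, TensorRT engines), then vs-ort (CPU/CUDA),
# then vs-ov (pure CPU). Not part of vs-mlrt itself.

PREFERENCE = ("trt", "ort", "ov")  # fastest first


def pick_backend(available):
    """Return the preferred runtime namespace among those available."""
    for name in PREFERENCE:
        if name in available:
            return name
    raise RuntimeError("no vs-mlrt runtime plugin is loaded")


# In a VapourSynth script, one would then invoke the matching plugin,
# roughly like this (plugin namespaces assumed from the runtime names;
# consult each runtime's wiki page for the actual signatures):
#   core.ov.Model(clip, network_path="model.onnx")    # vs-ov
#   core.ort.Model(clip, network_path="model.onnx")   # vs-ort
#   core.trt.Model(clip, engine_path="model.engine")  # vs-trt
```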

Models

The following models are available: