
v15: latest TensorRT library


General

plugins

  • Added parameter flexible_output_prop for flexible output:

    Previously, all plugins could only support ONNX models with one or three output channels, due to a limitation of VapourSynth.

    With the new flexible output feature, plugins can support ONNX models with an arbitrary number of output planes.

    import vapoursynth as vs
    from typing import TypedDict

    core = vs.core

    class Output(TypedDict):
        clip: vs.VideoNode
        num_planes: int

    prop = "planes" # arbitrary non-empty string

    # with flexible_output_prop set, the plugin returns a dict instead of a clip
    # (src: input clip, network_path: path to the ONNX model)
    output = core.ov.Model(src, network_path, flexible_output_prop=prop) # type: Output

    clip = output["clip"]
    num_planes = output["num_planes"]

    # each output plane is stored in the frame property f"{prop}{i}" and is
    # extracted here into its own single-plane clip
    output_planes = [
        clip.std.PropToClip(prop=f"{prop}{i}")
        for i in range(num_planes)
    ] # type: list[vs.VideoNode]

    This feature is supported by all plugins starting with vs-mlrt v15.
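
    The plane clips can then be recombined as needed. As a minimal sketch, assuming the model emits (at least) three planes that together form an RGB image:

    # sketch only: assumes num_planes >= 3 and that the planes form an RGB image
    rgb = core.std.ShufflePlanes(
        clips=output_planes[:3],
        planes=[0, 0, 0],
        colorfamily=vs.RGB,
    )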

vsmlrt.py

  • Added support for RIFE v4.17 models; a usage sketch follows this list.

  • Added support for ArtCNN models optimised for anime content, also covered in the sketch after this list. The chroma variants require the flexible output feature and are therefore not supported by earlier versions of vs-mlrt.

  • Added function flexible_inference for flexible output:

    With it, the plugin sample above simplifies to

    output_planes = flexible_inference(src, network_path) # type: list[vs.VideoNode]
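
    A fuller sketch, assuming flexible_inference accepts a backend argument in the same way as vsmlrt.inference does (check vsmlrt.py for the exact signature):

    from vsmlrt import Backend, flexible_inference

    # the backend parameter is assumed to mirror inference(); TRT with fp16 is shown as an example
    output_planes = flexible_inference(
        src, network_path,
        backend=Backend.TRT(fp16=True)
    ) # type: list[vs.VideoNode]

The new RIFE v4.17 and ArtCNN models are reachable through the usual vsmlrt wrappers. The sketch below is illustrative only: the enum members RIFEModel.v4_17 and ArtCNNModel.ArtCNN_C16F64 are assumptions, so consult vsmlrt.py for the exact names.

    from vsmlrt import ArtCNN, ArtCNNModel, RIFE, RIFEModel, Backend

    # rgbs: an RGBS placeholder clip; the RIFEModel member name for v4.17 is an assumption
    interpolated = RIFE(
        rgbs, multi=2,
        model=RIFEModel.v4_17,
        backend=Backend.TRT(fp16=True)
    )

    # gray: a single-plane placeholder clip for a luma ArtCNN variant;
    # the ArtCNNModel member name is an assumption
    restored = ArtCNN(
        gray,
        model=ArtCNNModel.ArtCNN_C16F64,
        backend=Backend.TRT(fp16=True)
    )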

TRT

  • Upgraded to TensorRT 10.1.0.

known issue

  • According to the documentation,

    There is an up to 4x performance regression for networks containing "GridSample" ops compared to TensorRT 9.2.

    This affects RIFE and SAFA models.

    vs-mlrt v14.test3 is the latest release that is not affected.

Community contributions

Full Changelog: v14...v15