vstrt
The vs-tensorrt plugin provides an optimized CUDA runtime for some popular AI filters.
Prototype: `core.trt.Model(clip[] clips, string engine_path[, int[] overlap, int[] tilesize, int device_id=0, bint use_cuda_graph=False, int num_streams=1, int verbosity=2])`
Arguments:

- `clip[] clips`: the input clips; only 32-bit floating point RGB or GRAY clips are supported. For model-specific input requirements, please consult our wiki.
- `string engine_path`: the path to the prebuilt engine (see below).
- `int[] overlap`: some networks (e.g. CNNs) support arbitrary input shapes, while others only support a fixed input shape, so the input clip must be processed in tiles. The `overlap` argument specifies the overlap (horizontal and vertical, or both, in pixels) between adjacent tiles to minimize boundary issues. Please refer to network-specific docs for the recommended overlap size.
- `int[] tilesize`: even for CNNs where arbitrary input sizes are supported, sometimes the network does not work well over the entire range of input dimensions, and you have to limit the size of each tile. This parameter specifies the tile size (horizontal and vertical, or both, including the overlap). Please refer to network-specific docs for the recommended tile size.
- `int device_id`: the GPU device id to use, default 0. Requires an Nvidia GPU of second-generation Kepler architecture or newer.
- `bint use_cuda_graph`: whether to use a CUDA graph to capture engine execution before launching inference, default False.
- `int num_streams`: the number of concurrent CUDA streams to use, default 1. Increase this if the GPU is not saturated.
- `int verbosity`: the verbosity level of the TensorRT runtime, default 2. Messages are written to `stderr`.
  - 0: internal errors
  - 1: application errors
  - 2: warnings
  - 3: informational messages
  - 4: verbose messages with debugging information
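As a convenience for scripts that select the level by name, the levels above can be captured in a small mapping (an illustrative sketch, not part of the plugin API):

```python
# TensorRT runtime verbosity levels, as documented above (lower = more severe).
VERBOSITY = {
    "internal_error": 0,
    "application_error": 1,
    "warning": 2,
    "info": 3,
    "verbose": 4,
}

def verbosity_level(name: str) -> int:
    """Look up a verbosity level by name, e.g. when reading it from a config."""
    return VERBOSITY[name]

print(verbosity_level("warning"))  # -> 2
```

For example, `core.trt.Model(..., verbosity=verbosity_level("warning"))` would show warnings and more severe messages only.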
When `overlap` and `tilesize` are not specified, the filter will internally try to resize the network to fit the input clips. This might not always work (for example, the network might require the width to be divisible by 8), and the filter will error out in this case.

The general rule is to either:

- leave out `overlap` and `tilesize` entirely and just process the input frame in one tile, or
- set both so that the frame is processed in `tilesize[0]` x `tilesize[1]` tiles, where adjacent tiles have an overlap of `overlap[0]` x `overlap[1]` pixels in each direction. The overlapped region is thrown away so that only interior output pixels are used.

Tiled processing requires models with built-in dynamic shape support, e.g. `waifu2x_v3.7z` and `dpir_v3.7z`.
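To get a feel for how `tilesize` and `overlap` interact, the following sketch (illustrative only, not part of the plugin) estimates the resulting tile grid under the assumption that adjacent tiles advance by `tilesize - overlap` pixels:

```python
import math

def tile_grid(width: int, height: int,
              tilesize: tuple[int, int],
              overlap: tuple[int, int]) -> tuple[int, int]:
    """Number of (columns, rows) of tiles needed to cover a frame.

    Assumes adjacent tiles advance by (tilesize - overlap) pixels, so each
    pair of neighbours shares an overlap-pixel border that is discarded.
    """
    tw, th = tilesize
    ow, oh = overlap
    if tw <= ow or th <= oh:
        raise ValueError("tilesize must be larger than overlap")
    cols = math.ceil((width - ow) / (tw - ow))
    rows = math.ceil((height - oh) / (th - oh))
    return cols, rows

# A 1920x1080 frame with 512x512 tiles and a 16-pixel overlap:
print(tile_grid(1920, 1080, (512, 512), (16, 16)))  # -> (4, 3)
```

Note that a larger overlap reduces boundary artifacts but increases the number of tiles, and hence the amount of redundant computation.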
Build engine

    trtexec --onnx=drunet_gray.onnx --minShapes=input:1x1x0x0 --optShapes=input:1x1x64x64 --maxShapes=input:1x1x1080x1920 --saveEngine=dpir_gray_1080p_dynamic.engine --tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT
The engine will be optimized for 64x64 input and can be applied to eligible inputs with shapes from 0x0 up to 1920x1080 by specifying the parameters `block_w` and `block_h` in the `trt` plugin. Also check the useful `trtexec` arguments listed below.
In vpy script:

    # DPIR
    import vapoursynth as vs
    core = vs.core

    # 32-bit float GRAY input clip (replace with your real source)
    src = core.std.BlankClip(width=640, height=360, format=vs.GRAYS)
    # DPIR takes a second input clip holding the noise level
    sigma = 10.0
    flt = core.trt.Model([src, core.std.BlankClip(src, color=sigma / 255.0)], engine_path="dpir_gray_640_360.engine", block_w=640, block_h=360)
Useful `trtexec` arguments:

- `--workspace=N`: Set workspace size in megabytes (default = 16)
- `--fp16`: Enable fp16 precision, in addition to fp32 (default = disabled)
- `--noTF32`: Disable tf32 precision (default is to enable tf32, in addition to fp32, Ampere only)
- `--device=N`: Select cuda device N (default = 0)
- `--timingCacheFile=<file>`: Save/load the serialized global timing cache
- `--buildOnly`: Skip inference perf measurement (default = disabled)
- `--verbose`: Use verbose logging (default = false)
- `--profilingVerbosity=mode`: Specify profiling verbosity. `mode ::= layer_names_only|detailed|none` (default = layer_names_only)
- `--tacticSources=tactics`: Specify the tactics to be used by adding (+) or removing (-) tactics from the default tactic sources (default = all available tactics). Note: currently only cuDNN, cuBLAS and cuBLAS-LT are listed as optional tactics.

  Tactic sources:

      tactics ::= [","tactic]
      tactic  ::= (+|-)lib
      lib     ::= "CUBLAS"|"CUBLAS_LT"|"CUDNN"

  For example, to disable cudnn and enable cublas: `--tacticSources=-CUDNN,+CUBLAS`
- `--useCudaGraph`: Use CUDA graph to capture engine execution and then launch inference (default = disabled). This flag may be ignored if the graph capture fails.
- `--noDataTransfers`: Disable DMA transfers to and from device (default = enabled)
- `--saveEngine=<file>`: Save the serialized engine
- `--loadEngine=<file>`: Load a serialized engine