
YOLO-NAS: Running onnx/tensorrt inference with dynamic image size #2055

Open

janmarczak opened this issue Oct 1, 2024 · 1 comment

@janmarczak

💡 Your Question

Hello,

I was wondering if there is any way to export the YOLO-NAS model to ONNX with dynamic image-size axes and then convert it to TensorRT with dynamic shapes (if needed, I can exclude pre- and post-processing from the ONNX).

Currently I am using model.predict(img, skip_image_resizing=True), and I would like to have the equivalent behaviour for ONNX/TensorRT.
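
On the TensorRT side, what I have in mind is the standard dynamic-shape setup with an optimization profile, roughly like this (assuming an ONNX file with dynamic spatial axes already exists; the file names, tensor name and shape ranges below are just placeholders):

```python
import tensorrt as trt

# Build an engine that accepts a range of input resolutions via an
# optimization profile (TensorRT 8.x Python API).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("yolo_nas_s_dynamic.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# min / opt / max shapes bound the dynamic batch, height and width axes.
profile.set_shape("input", (1, 3, 320, 320), (1, 3, 640, 640), (1, 3, 1280, 1280))
config.add_optimization_profile(profile)

with open("yolo_nas_s_dynamic.engine", "wb") as f:
    f.write(builder.build_serialized_network(network, config))
```

The part I am missing is how to get an ONNX with such dynamic axes out of the export in the first place.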

Versions

No response

@BloodAxe
Contributor

BloodAxe commented Oct 1, 2024

The way export is currently implemented, the exported ONNX graph has static shapes and hence cannot take dynamic input. However, it should in theory be possible to export it as dynamic by manually editing the export logic and specifying dynamic axis attributes where the model is exported.
One should be really careful with the NMS sub-graph that is attached to the main model.
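
For reference, a rough sketch of what such a manual export could look like, bypassing the built-in export helper entirely and leaving out the NMS/post-processing sub-graph (the model variant, tensor names and axis labels are illustrative, not the official export API):

```python
import torch
from super_gradients.training import models

# Load the bare network, without the preprocessing/NMS wrapper that the
# built-in export attaches. Model variant and weights are illustrative.
model = models.get("yolo_nas_s", pretrained_weights="coco").eval()

dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(
    model,
    dummy,
    "yolo_nas_s_dynamic.onnx",
    input_names=["input"],
    # The real graph may have several outputs; naming here is illustrative.
    output_names=["output"],
    # Mark batch, height and width as dynamic; the channel dim stays fixed.
    dynamic_axes={
        "input": {0: "batch", 2: "height", 3: "width"},
        "output": {0: "batch"},
    },
    opset_version=13,
)
```

Whether the traced graph is actually shape-agnostic still has to be verified (anchor grids and similar constants can get baked in at trace time), which is the same kind of caveat as with the NMS sub-graph.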
