
🐛 [Bug] quantized_resnet_test.py failed on no attribute 'EXPLICIT_PRECISION' #3362

Open
korkland opened this issue Jan 22, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@korkland

The example script fx/quantized_resnet_test.py in the Torch-TensorRT repository fails to execute because it uses the EXPLICIT_PRECISION attribute of the TensorRT Python API, which was deprecated and has since been removed in recent versions of TensorRT (e.g., TensorRT 10.1).

The error traceback is as follows:

```
Traceback (most recent call last):
  File "/home/yz9qvs/projects/Torch-TensorRT/examples/fx/quantized_resnet_test.py", line 142, in <module>
    int8_trt = build_int8_trt(rn18)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/yz9qvs/projects/Torch-TensorRT/examples/fx/quantized_resnet_test.py", line 60, in build_int8_trt
    interp = TRTInterpreter(
  File "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/fx/fx2trt.py", line 59, in __init__
    trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION
AttributeError: type object 'tensorrt.tensorrt.NetworkDefinitionCreationFlag' has no attribute 'EXPLICIT_PRECISION'
```

To Reproduce

Steps to reproduce the behavior:

  1. Clone the Torch-TensorRT repository.
  2. Navigate to the examples/fx directory.
  3. Run the script: `python quantized_resnet_test.py`

Expected behavior

The script should run successfully, converting the quantized ResNet model to TensorRT without encountering an error.

Environment

Torch-TensorRT Version: 2.4.0
PyTorch Version: 2.4.0
CPU Architecture: amd64
OS: Ubuntu 22.04
How you installed PyTorch: pip
Build command you used (if compiling from source): N/A
Are you using local sources or building from archives: Building from local sources
Python version: 3.10
CUDA version: 11.8
GPU models and configuration: NVIDIA A40
Any other relevant information: Running TensorRT 10.1.0

Additional context

The issue seems to stem from the use of the deprecated EXPLICIT_PRECISION flag in the TRTInterpreter class within torch_tensorrt/fx/fx2trt.py. TensorRT 10.1 does not support this attribute, and its usage needs to be updated to align with the latest TensorRT API.
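If it helps, here is a rough sketch of the kind of version guard that could replace the flag lookup (illustrative only, not a proposed patch; the logger/builder setup below stands in for whatever TRTInterpreter already constructs):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)  # stand-in for the interpreter's logger
builder = trt.Builder(logger)

# EXPLICIT_PRECISION was removed from recent TensorRT releases (explicit
# Q/DQ precision is now the default behavior), so only request the flag
# when the installed TensorRT version still exposes it.
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
if hasattr(trt.NetworkDefinitionCreationFlag, "EXPLICIT_PRECISION"):
    flags |= 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_PRECISION)

network = builder.create_network(flags)
```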

This script is one of the very few examples that demonstrates how to quantize a model using FX and lower it to TensorRT. It is a valuable resource for users looking to implement this workflow.

If addressing this issue immediately is not feasible, it would be extremely helpful if an alternative example could be provided to demonstrate how to achieve model quantization and conversion to TensorRT using FX. This would ensure users can still proceed with their workflows while awaiting a permanent fix.
THANKS!

@korkland korkland added the bug Something isn't working label Jan 22, 2025
@korkland korkland changed the title 🐛 [Bug] Encountered bug when using Torch-TensorRT 🐛 [Bug] quantized_resnet_test.py failed on no attribute 'EXPLICIT_PRECISION' Jan 22, 2025
@narendasan
Collaborator

Does this example not work for you? https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/vgg16_ptq.html.

@korkland
Author

> Does this example not work for you? https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/vgg16_ptq.html.

Thank you for pointing out the vgg16_ptq example! However, this example uses modelopt for post-training quantization, while our workflow specifically relies on torch.fx for quantization and lowering to TensorRT. The quantized_resnet_test.py script appears to be one of the few examples in the repository demonstrating this approach.
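For clarity, the FX side of our workflow looks roughly like this (a minimal illustrative sketch; the model, qconfig, and calibration data below are placeholders, not our actual setup):

```python
import torch
import torchvision.models as models
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_to_reference_fx

# Placeholder model and inputs, purely for illustration.
model = models.resnet18(weights=None).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)

# Calibrate on representative inputs (random tensors here).
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 224, 224))

# The "reference" form keeps explicit quantize/dequantize ops, which is
# the representation the fx example then lowered to TensorRT.
quantized = convert_to_reference_fx(prepared)
```

It is the subsequent lowering of such a module to TensorRT that quantized_resnet_test.py demonstrated and that now fails.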

Unfortunately, the script does not work as expected due to the issue with the deprecated EXPLICIT_PRECISION flag in TensorRT. This raises a couple of questions:
1. Is the fx-based quantization and TensorRT lowering workflow (as shown in quantized_resnet_test.py) still supported?
2. Has this test been disabled in the CI, or is it no longer being actively maintained?

If this workflow is no longer supported, are there plans to update it, or could you provide an alternative example demonstrating fx-based quantization and integration with TensorRT? This would be immensely helpful for users exploring this specific workflow.

Thank you for your support!
