❓ [Question] How to decide if an Op should support dynamic shape or not #3224
Comments
If you are finding Core ATen ops that we convert but that don't support dynamic shape, please file issues; my impression is that we should cover nearly all of them at this point. cc @apbose @chohk88
@narendasan thank you for your explanation; your suggestion makes sense to me. BTW, I originally asked this question because of `_embedding_bag_forward_only`: I haven't tried it yet, but this is the op I plan to convert.
Seems like `_embedding_bag_forward_only` is a new op in Core ATen.
To my knowledge, `_embedding_bag_forward_only` is already registered with dynamic shape support:

```python
@dynamo_tensorrt_converter(
    torch.ops.aten._embedding_bag_forward_only.default,
    capability_validator=embedding_bag_validator,
    supports_dynamic_shapes=True,
)
```

(See `TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py`, lines 290 to 324 at commit `1820713`.)
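For reference, a minimal sketch of compiling an `nn.EmbeddingBag` with a dynamic indices length through the dynamo frontend; the module, shape ranges, and dtypes below are illustrative assumptions, not taken from this thread:

```python
import torch
import torch_tensorrt

class Bag(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.EmbeddingBag(num_embeddings=100, embedding_dim=16, mode="sum")

    def forward(self, indices, offsets):
        return self.emb(indices, offsets)

model = Bag().eval().cuda()

# Dynamic ranges (illustrative): indices length varies 4..64, number of bags 2..8.
inputs = [
    torch_tensorrt.Input(min_shape=(4,), opt_shape=(16,), max_shape=(64,), dtype=torch.int32),
    torch_tensorrt.Input(min_shape=(2,), opt_shape=(4,), max_shape=(8,), dtype=torch.int32),
]
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
```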
Thanks for your suggestion @zewenli98. I remember I tried this, and the code failed on some strange shape assertion; I had to comment out the assertion to let it compile. However, compilation then failed for other reasons, so I am switching to the traditional ONNX way for now.
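For completeness, a rough sketch of the "traditional ONNX way" mentioned above, declaring the dynamic dimensions at export time; the file name and axis labels are illustrative:

```python
import torch

model = torch.nn.EmbeddingBag(num_embeddings=100, embedding_dim=16, mode="sum").eval()
indices = torch.tensor([1, 5, 7, 42], dtype=torch.int64)
offsets = torch.tensor([0, 2], dtype=torch.int64)

torch.onnx.export(
    model,
    (indices, offsets),
    "embedding_bag.onnx",
    input_names=["indices", "offsets"],
    output_names=["out"],
    # Mark the lengths as dynamic so a TensorRT engine built from this file
    # can define optimization profiles over them.
    dynamic_axes={"indices": {0: "num_indices"}, "offsets": {0: "num_bags"}},
    opset_version=17,
)
# Then build the engine, e.g. with trtexec:
#   trtexec --onnx=embedding_bag.onnx \
#       --minShapes=indices:4,offsets:2 --optShapes=indices:16,offsets:4 \
#       --maxShapes=indices:64,offsets:8
```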
@sean-xiang-applovin Thanks for letting us know. It looks like the assertion you pointed out only checks the shapes, not the types. If you have runnable code at hand, could you try passing in None for that argument? Besides, I'm wondering if you passed in 1-dim inputs.
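The exact arguments referred to above were lost in extraction, so as an illustration only: this is what 1-dim `indices`/`offsets` with the optional `per_sample_weights` left as `None` look like in the functional form:

```python
import torch
import torch.nn.functional as F

weight = torch.randn(100, 16)
indices = torch.tensor([1, 5, 7, 42], dtype=torch.int64)  # 1-dim indices
offsets = torch.tensor([0, 2], dtype=torch.int64)         # 1-dim offsets -> 2 bags

# per_sample_weights defaults to None; with mode="sum" it may instead be a
# 1-dim float tensor of the same length as `indices`.
out = F.embedding_bag(indices, weight, offsets, mode="sum", per_sample_weights=None)
print(out.shape)  # torch.Size([2, 16])
```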
❓ Question
Since only some of the ops support dynamic shapes, what are the criteria for deciding whether an op should support dynamic shapes?

For existing ops that are not marked with `supports_dynamic_shapes=True`, can I write a converter that wraps the existing converter and register my own converter with high priority? Is this the recommended way (see the sketch below)? Or should I just turn on `assume_dynamic_shape_support`, which seems to be a global flag for all converters?
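A minimal sketch of the wrapper approach asked about above, assuming the converter registry API documented for recent Torch-TensorRT versions; `aten.relu` and `aten_ops_relu` stand in for the op and stock converter of interest:

```python
import torch
from torch_tensorrt.dynamo.conversion import aten_ops_converters
from torch_tensorrt.dynamo.conversion._ConverterRegistry import (
    ConverterPriority,
    dynamo_tensorrt_converter,
)

@dynamo_tensorrt_converter(
    torch.ops.aten.relu.default,        # stand-in for the op you care about
    supports_dynamic_shapes=True,       # you assert dynamic-shape support yourself
    priority=ConverterPriority.HIGH,    # consulted before the stock converter
)
def relu_dynamic(ctx, target, args, kwargs, name):
    # Delegate to the existing converter implementation unchanged.
    return aten_ops_converters.aten_ops_relu(ctx, target, args, kwargs, name)
```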
What you have already tried

Environment

Installation method (conda, pip, libtorch, source): pip

Additional context
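Regarding the global flag from the question, a hedged sketch of turning it on at compile time; this assumes `assume_dynamic_shape_support` is accepted as a dynamo compilation setting kwarg, which may vary by version:

```python
import torch
import torch_tensorrt

# Toy model for illustration; any module would do.
model = torch.nn.Sequential(torch.nn.Linear(16, 4)).eval().cuda()

trt_model = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=[
        torch_tensorrt.Input(
            min_shape=(1, 16), opt_shape=(8, 16), max_shape=(32, 16),
            dtype=torch.float32,
        )
    ],
    # Global switch: treat all converters as if they support dynamic shapes.
    assume_dynamic_shape_support=True,
)
```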