The input size of swin_transformer #2411
shensongli asked this question in Q&A (unanswered, 1 reply)

Hello, I used swin_transformer as the feature-extraction backbone in PatchCore. It works when the input size is 224, but it raises an error when the input is 512. Is there any way, other than resizing the images to 224, to make the model accept 512 inputs?

Additional context:

  File "/workspace/dxy_1966/anomalib/src/anomalib/models/pytorchimage/timm/models/swin_transformer_v2_cr.py", line 491, in forward
    _assert(H == self.img_size[0], f"Input image height ({H}) doesn't match model ({self.img_size[0]}).")
  File "/opt/conda/lib/python3.10/site-packages/torch/__init__.py", line 1404, in _assert
    assert condition, message
AssertionError: Input image height (512) doesn't match model (224).
Reply:

@shensongli There are a few ways to do it. You can set the size at model creation time, and timm will resize the pretrained weights as it creates the model. Or you can do it after creation via .set_input_size(), which adapts the current model state. However, at any one time the model supports only a single image size, because the position embedding sizes change with the image and window sizes.
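For concreteness, here is a minimal sketch of both options. The checkpoint name `swinv2_cr_tiny_ns_224` is an illustrative choice, not taken from the thread, and the exact keyword arguments accepted by `set_input_size()` may differ between timm versions; only the existence of the two routes is confirmed by the reply above.

```python
import timm
import torch

# Option 1: request the target size at creation time; timm resizes the
# pretrained weights to match as it builds the model.
# ('swinv2_cr_tiny_ns_224' is an example checkpoint name, not from the thread.)
model = timm.create_model('swinv2_cr_tiny_ns_224', pretrained=True, img_size=512)

# Option 2: create at the default 224 first, then adapt the model state
# in place. The exact keyword arguments of set_input_size() may differ
# between timm versions.
model = timm.create_model('swinv2_cr_tiny_ns_224', pretrained=True)
model.set_input_size(img_size=(512, 512))

# Either way, the model now expects 512x512 inputs, and only that size.
x = torch.randn(1, 3, 512, 512)
features = model.forward_features(x)
print(features.shape)
```

For the PatchCore/anomalib case, this means the backbone has to be created (or resized) for 512 before training starts; feeding the same model instance a mix of 224 and 512 batches will trip the same assertion shown in the traceback.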