Hi! Thanks for your amazing work!
I am trying to obtain image-text embeddings for "open_clip:xlm-roberta-large-ViT-H-14", but I run into the following error. The code works for "open_clip:ViT-H-14", and I haven't changed anything but the model. Do you have any insights? Thanks!
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/distributor.py", line 17, in __call__
worker(
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/worker.py", line 122, in worker
runner(task)
File "/home/<user>/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/runner.py", line 29, in __call__
reader = self.reader_builder(sampler)
File "/home/<user>miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/clip_inference/worker.py", line 52, in reader_builder
_, preprocess = load_clip(
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/load_clip.py", line 85, in load_clip
model, preprocess = load_clip_without_warmup(clip_model, use_jit, device, clip_cache_path)
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/load_clip.py", line 74, in load_clip_without_warmup
model, preprocess = load_open_clip(clip_model, use_jit, device, clip_cache_path)
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/clip_retrieval/load_clip.py", line 49, in load_open_clip
model, _, preprocess = open_clip.create_model_and_transforms(
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/open_clip/factory.py", line 308, in create_model_and_transforms
model = create_model(
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/open_clip/factory.py", line 228, in create_model
load_checkpoint(model, checkpoint_path)
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/open_clip/factory.py", line 104, in load_checkpoint
incompatible_keys = model.load_state_dict(state_dict, strict=strict)
File "/home/<user>/miniconda3/envs/demo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CustomTextCLIP:
Unexpected key(s) in state_dict: "text.transformer.embeddings.position_ids".
simran-khanuja changed the title to 'Runtime error when running clip-inference using "open_clip:xlm-roberta-large-ViT-H-14"' on Sep 16, 2023.
This looks related to mlfoundations/open_clip#594. One solution that worked for me is downgrading transformers to an earlier version where the model's state_dict keys still match the checkpoint.
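Not part of the original thread, but for anyone who cannot downgrade, here is a minimal sketch of a manual workaround. It assumes the checkpoint has already been downloaded locally (the path below is hypothetical): build the model architecture with open_clip, then drop the stale buffer key before loading the weights.

```python
import torch
import open_clip

# Hypothetical local path to the downloaded xlm-roberta-large-ViT-H-14 weights;
# point this at wherever the checkpoint was cached on your machine.
checkpoint_path = "open_clip_pytorch_model.bin"

# Build the architecture only (no pretrained weights yet).
model, _, preprocess = open_clip.create_model_and_transforms("xlm-roberta-large-ViT-H-14")

# Load the raw state dict and remove the buffer that newer transformers
# releases no longer register, which is what triggers the "Unexpected key(s)" error.
checkpoint = torch.load(checkpoint_path, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
state_dict.pop("text.transformer.embeddings.position_ids", None)

model.load_state_dict(state_dict)
model.eval()
```

This only illustrates the idea behind the fix; pinning transformers to an older release, as suggested above, achieves the same result without touching any loading code, and the linked open_clip issue discusses which versions still register the position_ids buffer.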