failed:Protobuf parsing failed #19
Hello, I have tried the command as listed and it works correctly on my end. Could you provide some details about which version of ONNX you are using and what operating system you are on?
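If it helps, here is one way to gather the details being asked for (an illustrative sketch; it assumes onnx and onnxruntime are installed in the active Python environment):

```bash
# Print the onnx and onnxruntime versions seen by the current interpreter.
python -c "import onnx, onnxruntime; print(onnx.__version__, onnxruntime.__version__)"

# Kernel and distribution details.
uname -a
```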
I get the same error.
Environment: Linux 23fe32d5f4bf 5.4.0-72-generic #80-Ubuntu, CUDA 12, onnx version 1.13.0.
Hello,
I have cloned the repo and also got access to the submodules. Here is the command I ran:
Here is the result I got:
@Anindyadeep can you check the file size of the submodule you chose? (It may be surprisingly small for model weights.) Try running
You might not have downloaded the actual model weights but rather pointers to the files stored on LFS.
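A quick way to check this (a sketch; the paths below follow the 7B_FT_float16 example used later in this thread):

```bash
# The real float16 7B weights are several GB; a git-lfs pointer is only ~130 bytes.
ls -lh 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx

# Total size of the submodule checkout; a pointers-only checkout is tiny.
du -sh 7B_FT_float16
```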
@adarshxs Thanks for the quick reply. So here's the thing: I have downloaded (updated the git submodule for) the folders, and after that I saw that inside
(showing two types of files inside the ONNX folder) I first thought those were binary files, but I could open them and saw this:
Although I got the access, it now feels like it hasn't downloaded the files properly when I updated all the submodules.
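For context, when the LFS objects have not been fetched, opening one of those files shows a small plain-text pointer rather than binary weights. A sketch of what that typically looks like (the oid and size below are placeholders, not the real values):

```bash
cat 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx
# version https://git-lfs.github.com/spec/v1
# oid sha256:<64-hex-digit object id>
# size <byte count>
```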
@Anindyadeep Yes, I had the same issue. Make sure you have git lfs installed and run the command
Running
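A hedged sketch of the usual recovery steps (the submodule name 7B_FT_float16 is taken from the command quoted later in this thread; substitute whichever variant you requested access to):

```bash
# One-time setup: enable the git-lfs smudge/clean filters for your user.
git lfs install

# Re-fetch the submodule so the pointer files are replaced by the real weights.
git submodule update --init 7B_FT_float16

# If the files are still pointers, pull the LFS objects inside the submodule.
cd 7B_FT_float16 && git lfs pull
```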
Yes, funny part: while I was writing up the issue I also found the root cause. So here are my learnings
And yes, that will install everything we need. Thanks @adarshxs for the head start.
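One way to confirm the weights actually came down after the git lfs steps (a sketch; the file names follow the paths used in the command quoted in the next comment):

```bash
# List the files in the submodule that are tracked by LFS and their object ids.
cd 7B_FT_float16 && git lfs ls-files

# These should now be multi-gigabyte binaries, not ~130-byte text pointers.
ls -lh ONNX/LlamaV2_7B_FT_float16.onnx embeddings.pth
```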
When I try to run
python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx --embedding_file 7B_FT_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"
it returns: onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx failed:Protobuf parsing failed.
And when I try onnx.checker.check_model(), it returns: onnx.onnx_cpp2py_export.checker.ValidationError: Unable to parse proto from file: /data/renzhen/Llama-2-Onnx/7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx. Please check if it is a valid protobuf file of proto.
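This error usually just means the bytes on disk are not a real ONNX protobuf. A quick check that it is the same LFS-pointer problem discussed above (a sketch, using the same path as in the traceback):

```bash
# If this shows a ~130-byte file whose first line is
# "version https://git-lfs.github.com/spec/v1", the LFS objects were never
# downloaded; run the git lfs steps from earlier in the thread and retry.
ls -lh /data/renzhen/Llama-2-Onnx/7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx
head -c 200 /data/renzhen/Llama-2-Onnx/7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx
```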