ppocr4: after converting the Paddle model to ONNX, CPU inference is much faster than GPU inference #1400
Comments
Number18-tong changed the title from "ppocr2: after converting the Paddle model to ONNX, CPU inference is much faster than GPU inference" to "ppocr4: after converting the Paddle model to ONNX, CPU inference is much faster than GPU inference" on Sep 27, 2024
I ran into the same problem: CPU and GPU speeds are about the same.
In actual testing, I found that inference on certain images slows down by a factor of ten or even several dozen.
I hit this too. Using Rust's ort, inference takes 2 seconds on CPU but 10 seconds on GPU.
If you are on Windows, consider trying DirectML for inference; in my tests the speed with DML was normal.
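The DirectML workaround above amounts to choosing a different execution-provider list when creating the ONNX Runtime session. A minimal sketch, assuming the `onnxruntime-directml` package is installed on Windows; the helper name `pick_providers` is mine, not from this thread:

```python
# Sketch: build an ONNX Runtime execution-provider preference list,
# preferring DirectML on Windows (as the commenter suggests) and always
# keeping CPU as a fallback. Pass the result to
# onnxruntime.InferenceSession(model_path, providers=...).
def pick_providers(platform_name: str, have_dml: bool) -> list:
    providers = []
    if platform_name == "win32" and have_dml:  # DirectML is Windows-only
        providers.append("DmlExecutionProvider")
    providers.append("CPUExecutionProvider")
    return providers

# Example usage (hypothetical model path from this issue's commands):
# import sys, onnxruntime as ort
# sess = ort.InferenceSession("ppocr4_det_0926.onnx",
#                             providers=pick_providers(sys.platform, True))
```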
After generating the ppocr4 ONNX models with the three commands below, inference with the CUDA provider is very slow, while inference with the CPU provider is actually faster. What could be causing this?
paddle2onnx commands:
paddle2onnx --model_dir ch_PP-OCRv4_det_infer --model_filename inference.pdmodel --params_filename inference.pdiparams --save_file ppocr4_det_0926.onnx --opset_version 16 --enable_onnx_checker True
paddle2onnx --model_dir ch_PP-OCRv4_rec_infer --model_filename inference.pdmodel --params_filename inference.pdiparams --save_file ppocr4_rec_0926.onnx --opset_version 16 --enable_onnx_checker True
paddle2onnx --model_dir ch_ppocr_mobile_v2.0_cls_infer --model_filename inference.pdmodel --params_filename inference.pdiparams --save_file ppocr4_cls_0926.onnx --opset_version 16 --enable_onnx_checker True
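One plausible explanation, consistent with the per-image slowdowns reported above, is that PP-OCR models have dynamic input shapes: each new image size can force the CUDA provider to re-run cuDNN convolution algorithm search and re-allocate GPU memory, so naive single-shot timings make the GPU look far slower than its steady state. A minimal benchmarking sketch with warm-up; the `bench` helper is an assumption of mine, not code from the issue:

```python
import time

def bench(run, warmup=3, iters=20):
    """Average latency of run() after warm-up runs.

    Warm-up matters for the CUDA provider: the first calls can include
    cuDNN algorithm search and GPU memory allocation, which would
    dominate a single-shot measurement.
    """
    for _ in range(warmup):
        run()
    t0 = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - t0) / iters

# Example usage (assumes onnxruntime-gpu is installed; the input name "x"
# and the model path are assumptions based on this issue's commands):
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "ppocr4_det_0926.onnx",
#     providers=[("CUDAExecutionProvider",
#                 {"cudnn_conv_algo_search": "HEURISTIC"}),
#                "CPUExecutionProvider"])
# avg = bench(lambda: sess.run(None, {"x": input_tensor}))
```

Fixing the input resolution (padding images to a small set of shapes) or setting `cudnn_conv_algo_search` to `HEURISTIC` are common ways to reduce the per-shape overhead, though whether they resolve this particular case would need testing.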