Deepspeech pytorch timing gap and torch compile issues #488
Labels: ⏰ Timing gap (Significant difference (>= 10%) between pytorch and jax workloads)
Comments
priyakasimbeg changed the title from "Deepspeech torch compile" to "Deepspeech pytorch timing gap and torch compile" (Aug 15, 2023)
priyakasimbeg changed the title from "Deepspeech pytorch timing gap and torch compile" to "Deepspeech pytorch timing gap and torch compile issues" (Aug 15, 2023)
priyakasimbeg added the 🚀 Launch Blocker label (Issues that are blocking launch of benchmark) (Aug 17, 2023)
It seems that torch compile breaks in the Dynamo tracing step on 2.1.0.dev20230820+cu118. Traceback
Full error logs are in the regression test. Filed a separate bug to track the specific torch compile issue: #498
priyakasimbeg added the ⏰ Timing gap label (Significant difference (>= 10%) between pytorch and jax workloads) (Aug 22, 2023)
Related: #483
priyakasimbeg removed the 🚀 Launch Blocker label (Issues that are blocking launch of benchmark) (Aug 31, 2023)
Current status: after enabling the eager backend for deepspeech, pytorch is 12% slower than jax on this workload.
Also resolved in #597
Backward-pass compilation of the Deepspeech pytorch workload with torch compile is unsupported/broken.
The Deepspeech workload is currently 13% slower in pytorch than in jax.
Description
Currently, deepspeech works with the torch.compile backend option 'eager' but breaks with 'aot_eager'.
The goal of this bug is to:
Steps to reproduce
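A minimal sketch of how the backend difference described above can be exercised. This uses a toy function rather than the actual Deepspeech model (which lives in the benchmark repo), so it illustrates only the 'eager' vs 'aot_eager' compile paths, not the specific failure:

```python
import torch

# Toy stand-in for the workload's forward pass; the real issue involves the
# Deepspeech model, which is not reproduced here.
def model(x):
    return torch.relu(x * 2.0).sum()

# backend="eager": Dynamo traces the graph but executes it eagerly.
# This is the path that works for deepspeech.
x_eager = torch.randn(8, requires_grad=True)
torch.compile(model, backend="eager")(x_eager).backward()

# backend="aot_eager": additionally traces the backward pass ahead of time.
# This is the path reported broken for deepspeech.
x_aot = torch.randn(8, requires_grad=True)
torch.compile(model, backend="aot_eager")(x_aot).backward()
```

Both backends run the model without codegen, which makes them useful for isolating whether a failure comes from Dynamo tracing or from AOT backward tracing rather than from an inductor-compiled kernel.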