Running inference on multiple GPUs #226

Open
baksh97 opened this issue May 16, 2024 · 1 comment

baksh97 commented May 16, 2024

I have a multi-GPU machine and want to run DiffDock's inference on all of the GPUs. Is it currently possible?

jsilter (Collaborator) commented May 16, 2024

At this moment we have nothing built-in to make that easy. The thing to do would be to split your input table up into a few pieces and run each one in a separate process with CUDA_VISIBLE_DEVICES=<gpu to run on>.
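
For example, a minimal driver script along those lines is sketched below. It round-robins the rows of the input table across GPUs and launches one DiffDock process per chunk, each pinned to its own device via CUDA_VISIBLE_DEVICES. The entry point (`python -m inference`) and the `--protein_ligand_csv`/`--out_dir` flags follow DiffDock's README, but check them against your version's CLI; the GPU count and input filename are placeholders.

```python
import os
import subprocess

import pandas as pd

N_GPUS = 4                              # placeholder: number of GPUs to use
INPUT_CSV = "protein_ligand_pairs.csv"  # placeholder: your full input table

# Split the input table into one round-robin chunk per GPU.
df = pd.read_csv(INPUT_CSV)
chunks = [df.iloc[i::N_GPUS] for i in range(N_GPUS)]

procs = []
for gpu_id, chunk in enumerate(chunks):
    chunk_path = f"chunk_gpu{gpu_id}.csv"
    chunk.to_csv(chunk_path, index=False)

    # Pin this worker to a single GPU; the process sees only that device.
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)

    # Entry point and flags assumed from the README; adjust to your version.
    procs.append(subprocess.Popen(
        ["python", "-m", "inference",
         "--protein_ligand_csv", chunk_path,
         "--out_dir", f"results/gpu_{gpu_id}"],
        env=env,
    ))

# Wait for all workers; results land in per-GPU output directories.
for p in procs:
    p.wait()
```

Each worker behaves exactly as it would on a single-GPU machine, so no changes to DiffDock itself are needed; you just merge the per-GPU result directories afterwards.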

purnawanpp commented

I have the same problem. Can you explain why this happens?
