I have a multi-GPU machine and want to run DiffDock's inference on all of the GPUs. Is it currently possible?

At the moment we have nothing built in to make that easy. The thing to do would be to split your input table into a few pieces and run each piece in a separate process with CUDA_VISIBLE_DEVICES=<gpu to run on>.
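The suggested workaround can be sketched as follows: shard the input CSV into one piece per GPU, then launch one inference process per shard with `CUDA_VISIBLE_DEVICES` pinned to a different device. This is a hedged sketch, not DiffDock's own tooling; the `python -m inference` command, its flags, and the output directories are assumptions you should match to your local DiffDock checkout.

```python
import csv
import os
import subprocess

def shard_csv(path, n_shards, out_prefix="shard"):
    """Split a CSV (with header) into n_shards roughly equal pieces.

    Rows are dealt out round-robin so the shards stay balanced even if
    the table is sorted by complex size.
    """
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, body = rows[0], rows[1:]
    shard_paths = []
    for i in range(n_shards):
        out = f"{out_prefix}_{i}.csv"
        with open(out, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            writer.writerows(body[i::n_shards])  # every n_shards-th row
        shard_paths.append(out)
    return shard_paths

def launch_per_gpu(shards):
    """Start one inference process per shard, each pinned to its own GPU."""
    procs = []
    for gpu, shard in enumerate(shards):
        # Pin this process to a single GPU via CUDA_VISIBLE_DEVICES.
        env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)}
        # Hypothetical DiffDock invocation -- adjust flags to your setup.
        procs.append(subprocess.Popen(
            ["python", "-m", "inference",
             "--protein_ligand_csv", shard,
             "--out_dir", f"results/gpu{gpu}"],
            env=env))
    for p in procs:
        p.wait()

if __name__ == "__main__":
    shards = shard_csv("protein_ligand.csv", n_shards=4)
    launch_per_gpu(shards)
```

Round-robin sharding keeps per-GPU workloads roughly even; since the processes are fully independent, no inter-process communication is needed and results can simply be collected from the separate output directories afterwards.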