GPU support #11
How are the GPUs multiplexed to various applications? For example, if there are two applications/workflows that want to use a GPU and there is only one GPU on the host, can they share the same GPU? Or do you keep track of available GPUs and exclusively assign them to applications?
Hi Ekin,
we specifically want to support CUDA and nvidia-docker.
The NVIDIA engineers found a way to share GPU drivers from the host with containers, without having to install them in each container individually.
https://github.com/NVIDIA/nvidia-docker
So one GPU can be shared between different sandboxes using this framework. The same would apply to, e.g., two workflows running in different containers.
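For illustration, sharing a single GPU between two sandbox containers looks roughly like this; a minimal sketch, assuming Docker 19.03+ with the NVIDIA Container Toolkit installed (the image tag and device index are illustrative):

```sh
# Both containers are granted access to the same physical GPU (device 0).
# The host's driver is mounted into each container by the NVIDIA runtime,
# so neither image needs to bundle the driver itself.
docker run -d --gpus '"device=0"' nvidia/cuda:11.0-base sleep infinity
docker run -d --gpus '"device=0"' nvidia/cuda:11.0-base sleep infinity
```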
BR Klaus
I see, thanks! For more information, the following is from the wiki of the nvidia-docker repo.
Yes, but it seems that GPU support for Knative is still an open issue: knative/client#490
I'd think that GPU support would first be available only in the bare-metal environment until that issue is resolved for Knative.
Implementation started on branch feature/GPU_support.
This issue is subsumed by #87.
[Environment]: first on bare metal with NVIDIA GPUs
[Known affected releases]: master (includes all releases)
Allow KNIX functions to use available GPU resources.
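As a first sanity check for this, one would confirm that the GPUs visible on the bare-metal host are also visible from inside a container; a minimal sketch, assuming nvidia-docker2 is installed (the device index and image tag are illustrative):

```sh
# GPUs visible on the bare-metal host.
nvidia-smi --list-gpus

# Exclusively expose device 0 to a container via the NVIDIA runtime's
# documented NVIDIA_VISIBLE_DEVICES variable, then list GPUs inside it.
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 \
    nvidia/cuda:11.0-base nvidia-smi --list-gpus
```

This exclusive-assignment style contrasts with the shared-access example earlier in the thread.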