
Volcano vgpu device plugin for Kubernetes

Note:

The Volcano vgpu device plugin provides a device-sharing mechanism for NVIDIA devices managed by Volcano.

It is based on the NVIDIA Device Plugin and uses HAMi-core to support hard isolation of GPU cards.

In collaboration with the Volcano scheduler, it makes GPU sharing possible.


About

The Volcano device plugin for Kubernetes is a DaemonSet that allows you to automatically:

  • Expose the number of GPUs on each node of your cluster
  • Keep track of the health of your GPUs
  • Run GPU-enabled containers in your Kubernetes cluster
  • Provide a device-sharing mechanism for GPU tasks
  • Enforce hard resource limits in containers

Prerequisites

The list of prerequisites for running the Volcano device plugin is described below; a quick way to verify them follows the list:

  • NVIDIA drivers > 440
  • nvidia-docker version > 2.0 (see how to install and its prerequisites)
  • Docker configured with nvidia as the default runtime
  • Kubernetes version >= 1.16
  • Volcano version >= 1.9
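
As a quick sanity check before continuing (a minimal sketch; it assumes nvidia-smi and nvidia-docker are already on the PATH):

# Driver version should be above 440
$ nvidia-smi --query-gpu=driver_version --format=csv,noheader

# nvidia-docker2 should report a 2.x version
$ nvidia-docker version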

Quick Start

Preparing your GPU Nodes

The following steps need to be executed on all your GPU nodes. This README assumes that the NVIDIA drivers and nvidia-docker have been installed.

Note that you need to install the nvidia-docker2 package and not nvidia-container-toolkit. This is because the new --gpus option hasn't reached Kubernetes yet. Example:

# Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

$ sudo apt-get update && sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker

You will need to enable the nvidia runtime as your default runtime on your node. To do this, edit the docker daemon config file, which is usually present at /etc/docker/daemon.json:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

If the runtimes entry is not already present, head to the install page of nvidia-docker.
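
After editing daemon.json, restart docker and confirm that the nvidia runtime is used by default (a hedged check; the CUDA image tag here is just an illustration):

$ sudo systemctl restart docker
# With nvidia as the default runtime, nvidia-smi works without any --gpus/--runtime flags
$ docker run --rm nvidia/cuda:9.0-base nvidia-smi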

Configure scheduler

Update the scheduler configuration:

kubectl edit cm -n volcano-system volcano-scheduler-configmap

For Volcano v1.9+, use the following ConfigMap:

kind: ConfigMap
apiVersion: v1
metadata:
  name: volcano-scheduler-configmap
  namespace: volcano-system
data:
  volcano-scheduler.conf: |
    actions: "enqueue, allocate, backfill"
    tiers:
    - plugins:
      - name: priority
      - name: gang
      - name: conformance
    - plugins:
      - name: drf
      - name: deviceshare
        arguments:
          deviceshare.VGPUEnable: true # enable vgpu
      - name: predicates
      - name: proportion
      - name: nodeorder
      - name: binpack
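
To check that the scheduler picked up the new configuration (a sketch; it assumes the default volcano-scheduler deployment name, and the exact log wording may vary between versions), inspect its logs for the deviceshare plugin:

$ kubectl -n volcano-system logs deploy/volcano-scheduler | grep -i deviceshare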

Enabling GPU Support in Kubernetes

Once you have enabled this option on all the GPU nodes you wish to use, you can enable GPU support in your cluster by deploying the following DaemonSet:

$ kubectl create -f volcano-vgpu-device-plugin.yml
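
You can then verify that a plugin pod is running on each GPU node (a minimal check; the namespace and pod name prefix are assumptions based on the default manifest):

$ kubectl get pods -n kube-system -o wide | grep volcano-device-plugin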

Verify environment is ready

Check the node status; the environment is ready if volcano.sh/vgpu-number is included in the allocatable resources.

$ kubectl get node {node name} -oyaml
...
status:
  addresses:
  - address: 172.17.0.3
    type: InternalIP
  - address: volcano-control-plane
    type: Hostname
  allocatable:
    cpu: "4"
    ephemeral-storage: 123722704Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 8174332Ki
    pods: "110"
    volcano.sh/vgpu-memory: "89424"
    volcano.sh/vgpu-number: "10"    # vGPU resource
  capacity:
    cpu: "4"
    ephemeral-storage: 123722704Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 8174332Ki
    pods: "110"
    volcano.sh/vgpu-memory: "89424"
    volcano.sh/vgpu-number: "10"   # vGPU resource

Running VGPU Jobs

vGPUs can be requested by setting "volcano.sh/vgpu-number", "volcano.sh/vgpu-cores", and "volcano.sh/vgpu-memory" in resources.limits:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod1
spec:
  schedulerName: volcano
  containers:
    - name: cuda-container
      image: nvidia/cuda:9.0-devel
      command: ["sleep"]
      args: ["100000"]
      resources:
        limits:
          volcano.sh/vgpu-number: 2 # requesting 2 gpu cards
          volcano.sh/vgpu-memory: 3000 # (optional) each vGPU uses 3G device memory
          volcano.sh/vgpu-cores: 50 # (optional) each vGPU uses 50% of a core
EOF

You can validate the device memory limit using nvidia-smi inside the container.

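With the limits above, nvidia-smi should report a total of roughly 3000MiB per vGPU rather than the card's full capacity (an illustrative sketch; the exact table layout depends on the driver version):

$ kubectl exec -it gpu-pod1 -- nvidia-smi
# Expect the memory column to read something like: 0MiB / 3000MiB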

WARNING: if you don't request GPUs when using the device plugin with NVIDIA images, all the GPUs on the machine will be exposed inside your container. Also note that the number of vGPUs requested by a container cannot exceed the number of GPUs on that node.

Monitor

volcano-scheduler-metrics records every GPU's usage and limits. Visit the following address to retrieve these metrics:

curl {volcano scheduler cluster ip}:8080/metrics

You can also collect the GPU utilization, GPU memory usage, pods' GPU memory limits, and pods' GPU memory usage metrics on nodes by visiting the following address:

curl {volcano device plugin pod ip}:9394/metrics
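
For example, to pull only the GPU-related series out of the plugin's endpoint (a sketch; exact metric names may differ between versions):

$ curl -s {volcano device plugin pod ip}:9394/metrics | grep -i gpu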


Issues and Contributing

Check out the Contributing document!

Upgrading Kubernetes with the device plugin

Upgrading Kubernetes when you have a device plugin deployed doesn't require any particular changes to your workflow. The API is versioned and pretty stable (though it is not guaranteed to be non-breaking); upgrading Kubernetes won't require you to deploy a different version of the device plugin, and you will see GPUs re-register themselves after your node comes back online.

Upgrading the device plugin is a more complex task. It is recommended to drain GPU tasks, as we cannot guarantee that GPU tasks will survive a rolling upgrade. However, we make our best effort to preserve GPU tasks during an upgrade.

License

This project's license scan status is tracked by FOSSA.