# Fast Deploy Direct & Linked (Experimental)
This patch adds support for the Fast Deploy Direct and Linked features, i.e. the ability to cache images per-datastore and quickly provision a VM from these caches, either directly or as a linked clone. This is an experimental feature that must be enabled manually, and many things about it may change prior to it being ready for production.

The patch notes below are broken down into several sections:

* **Goals** -- What is currently supported
* **Non-goals** -- What is not on the table right now
* **Architecture**
    * **Activation** -- How to enable this experimental feature
    * **Placement** -- Request datastore recommendations
    * **Image cache** -- A general-purpose VM image cache
    * **Create VM** -- Create directly from cached disk

## Goals

The following goals are considered in-scope for this experimental feature at this time. Just because something is not listed does not mean it will not be added before the feature is made generally available:

* Support all VM images that are OVFs
* Support multiple zones
* Support workload-domain isolation
* Support all datastore types, including host-local and vSAN
* Support configuring a default fast-deploy mode
* Support picking the fast-deploy mode per VM (direct, linked)
* Support disabling fast-deploy per VM
* Support VM encryption for VMs deployed with fast deploy direct
* Support backup/restore for VMs deployed with fast deploy direct
* Support site replication for VMs deployed with fast deploy direct
* Support datastore maintenance/migration for VMs deployed with fast deploy direct

## Non-goals

The following is a list of non-goals that are not in scope at this time, although most of them should be revisited prior to this feature graduating to production:

* Support VM images that are VM templates (VMTX)

  The architecture behind Fast Deploy makes it trivial to support deploying VM images that point to VM templates. While not in scope at this time, it is likely this becomes part of the feature prior to it graduating to production-ready.

## Architecture

The architecture is broken down into the following sections:

* **Activation** -- How to enable this experimental feature
* **Placement** -- Request datastore recommendations
* **Image cache** -- A general-purpose VM image cache
* **Create VM** -- Create directly from cached disk

### Activation

Enabling the experimental Fast Deploy feature requires setting the environment variable `FSS_WCP_VMSERVICE_FAST_DEPLOY` to `true` in the VM Operator deployment.

The environment variable `FAST_DEPLOY_MODE` may be set to one of the following values to configure the default mode for the fast-deploy feature:

* `direct` -- VMs are deployed using cached disks
* `linked` -- VMs are deployed as a linked clone
* the value is empty -- `direct` mode is used
* the value is anything else -- fast deploy is disabled

It is possible to override the default mode per-VM by setting the annotation `vmoperator.vmware.com/fast-deploy`. The values of this annotation follow the same rules described above. A sketch of this mode-resolution logic appears after the Placement section below.

Please note, setting the environment variable `FAST_DEPLOY_MODE` or the annotation `vmoperator.vmware.com/fast-deploy` has no effect if the feature is not enabled.

### Placement

Please refer to PR #823 for information on placement; the logic from that change has stayed the same in this one.
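To make the activation rules above concrete, here is a minimal sketch of how the effective mode for a VM might be resolved. This is illustrative only, not the actual implementation in this patch; the function and constant names are hypothetical, while the environment-variable and annotation names come from the description above.

```go
package fastdeploy

import "os"

// The env var and annotation names below are from the patch notes;
// everything else in this sketch is hypothetical.
const (
	fssEnvVar      = "FSS_WCP_VMSERVICE_FAST_DEPLOY"
	modeEnvVar     = "FAST_DEPLOY_MODE"
	modeAnnotation = "vmoperator.vmware.com/fast-deploy"
)

// ResolveMode returns the effective fast-deploy mode for a VM:
// "direct", "linked", or "" (fast deploy disabled).
func ResolveMode(vmAnnotations map[string]string) string {
	// The feature switch gates everything else.
	if os.Getenv(fssEnvVar) != "true" {
		return ""
	}

	// A per-VM annotation overrides the deployment-wide default.
	mode, ok := vmAnnotations[modeAnnotation]
	if !ok {
		mode = os.Getenv(modeEnvVar)
	}

	switch mode {
	case "direct", "linked":
		return mode // explicit, supported mode
	case "":
		return "direct" // empty value defaults to direct
	default:
		return "" // any other value disables fast deploy
	}
}
```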
### Image cache

The way the images/disks are cached has completely changed since the earlier fast-deploy PR. The cache is now managed by a `VirtualMachineImageCache` resource, which is:

* not visible to DevOps users
* a namespace-scoped resource that only exists in the same namespace as the VM Operator controller pod
* used to cache the OVF and an image's disks

A `VirtualMachineImageCache` resource is created per unique library item resource. That means even if there are 20,000 VMI resources spread across a multitude of namespaces or at the cluster scope, if they all point to the same underlying library item, then for all those VMI resources there will be a single `VirtualMachineImageCache` resource in the VM Operator namespace. A small sketch of this deduplication appears after the Create VM section below.

The `VirtualMachineImageCache` controller caches the OVF for the image in a `ConfigMap` resource in the VM Operator namespace. This completely obviates the need to maintain a bespoke, in-memory OVF cache.

The `VirtualMachineImageCache` resource caches the image's disks on specified datastores by setting `spec.locations` with entries that map to unique datacenter/datastore IDs. The resource's status reveals the location(s) of the cached disk(s). For a more in-depth look at how the disks are actually cached, please refer to PR #823.

### Create VM

If the `VirtualMachineImageCache` object is not yet ready with the cached OVF or disks, the VM will be re-enqueued once the `VirtualMachineImageCache` _is_ ready. Please note, while placement is required to know where to cache the disks, additional placement calls are not issued while a VM is waiting on a `VirtualMachineImageCache` resource.

Beyond that, the create VM workflow depends on the fast-deploy mode (sketches of both flows follow below):

#### Direct

1. The cached disks are copied into the VM's folder.
2. The ConfigSpec is updated to reference the disks.

   Please note, if the VM is encrypted, the disks are not encrypted as part of the create call. This is because it is not possible to change the encryption state of disks when adding them to a VM. Thus the disks are encrypted after the VM is created, before it is powered on.
3. The `CreateVM_Task` VMODL1 API is used to create the VM.

#### Linked

1. The `VirtualDisk` devices in the ConfigSpec used to create the VM are updated with `VirtualDiskFlatVer2BackingInfo` backings that specify a parent backing which refers to the cached, base disk from above. The path to each of the VM's disks is constructed from the index of the disk, ex.: `[<DATASTORE>] <KUBE_VM_OBJ_UUID>/<KUBE_VM_NAME>-<DISK_INDEX>.vmdk`.
2. The `CreateVM_Task` VMODL1 API is used to create the VM. Because the VM's disks have parent backings, this new VM is effectively a linked clone.
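The per-library-item deduplication described in the Image cache section can be illustrated with a short sketch. The helper below is hypothetical (the real naming scheme in VM Operator may differ); it only demonstrates the idea that every VMI pointing at the same library item resolves to the same cache object name, so at most one `VirtualMachineImageCache` exists per item.

```go
package fastdeploy

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// CacheObjectName derives a deterministic, DNS-safe object name for the
// VirtualMachineImageCache resource backing a content library item.
// Because the name depends only on the library item's UUID, any number of
// VMI resources that reference the same item map to the same cache object
// in the VM Operator namespace.
//
// Illustrative scheme only, not VM Operator's actual one.
func CacheObjectName(libraryItemUUID string) string {
	h := sha1.Sum([]byte(libraryItemUUID))
	return fmt.Sprintf("vmi-cache-%s", hex.EncodeToString(h[:])[:12])
}
```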
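For direct mode, a rough govmomi-based sketch of step 1 (copying a cached base disk into the VM's folder) might look like the following. The paths and function name are placeholders and error handling is condensed; this outlines the vSphere API call involved rather than the code in this patch.

```go
package fastdeploy

import (
	"context"
	"fmt"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25"
)

// copyCachedDisk copies one cached base disk into the VM's directory so the
// new VM owns a full (non-linked) copy. Paths are illustrative, e.g.:
//
//	src: "[datastore1] .image-cache/<item-uuid>/disk-0.vmdk"
//	dst: "[datastore1] <vm-uuid>/<vm-name>-0.vmdk"
func copyCachedDisk(
	ctx context.Context,
	c *vim25.Client,
	dc *object.Datacenter,
	src, dst string) error {

	dm := object.NewVirtualDiskManager(c)

	// A nil destSpec keeps the source disk's format; force=false fails if
	// the destination already exists.
	task, err := dm.CopyVirtualDisk(ctx, src, dc, dst, dc, nil, false)
	if err != nil {
		return fmt.Errorf("copy disk: %w", err)
	}
	return task.Wait(ctx)
}
```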
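For linked mode, the sketch below shows what attaching a delta disk with a parent backing looks like using govmomi types, followed by `CreateVM_Task`. The controller key, unit number, and paths are placeholder values; the essential part is the `Parent` field on `VirtualDiskFlatVer2BackingInfo`, which is what makes the created VM an effective linked clone.

```go
package fastdeploy

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25/types"
)

// linkedDisk returns a VirtualDisk device whose backing points at a cached
// base disk via Parent: the VM writes deltas to childPath while reading
// unchanged blocks from basePath.
func linkedDisk(controllerKey, unitNumber int32, childPath, basePath string) *types.VirtualDisk {
	return &types.VirtualDisk{
		VirtualDevice: types.VirtualDevice{
			Key:           -1, // negative key: assigned by vSphere on create
			ControllerKey: controllerKey,
			UnitNumber:    &unitNumber,
			Backing: &types.VirtualDiskFlatVer2BackingInfo{
				VirtualDeviceFileBackingInfo: types.VirtualDeviceFileBackingInfo{
					// e.g. "[datastore1] <vm-uuid>/<vm-name>-0.vmdk"
					FileName: childPath,
				},
				DiskMode:        string(types.VirtualDiskModePersistent),
				ThinProvisioned: types.NewBool(true),
				// Parent refers to the cached, read-only base disk; this is
				// what makes the new VM a linked clone.
				Parent: &types.VirtualDiskFlatVer2BackingInfo{
					VirtualDeviceFileBackingInfo: types.VirtualDeviceFileBackingInfo{
						FileName: basePath,
					},
					DiskMode: string(types.VirtualDiskModePersistent),
				},
			},
		},
	}
}

// createLinkedVM adds the disk to the ConfigSpec and issues CreateVM_Task.
func createLinkedVM(
	ctx context.Context,
	folder *object.Folder,
	pool *object.ResourcePool,
	spec types.VirtualMachineConfigSpec,
	disk *types.VirtualDisk) (*object.Task, error) {

	spec.DeviceChange = append(spec.DeviceChange, &types.VirtualDeviceConfigSpec{
		Operation:     types.VirtualDeviceConfigSpecOperationAdd,
		FileOperation: types.VirtualDeviceConfigSpecFileOperationCreate,
		Device:        disk,
	})
	return folder.CreateVM(ctx, spec, pool, nil)
}
```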