SC02-TC01: Provision a Tanzu Kubernetes Cluster Using the Tanzu Kubernetes Grid Service

A Tanzu Kubernetes cluster is a full distribution of the open-source Kubernetes software that is packaged, signed, and supported by VMware. In the context of vSphere with Kubernetes, DevOps Engineers can use the Tanzu Kubernetes Grid Service to provision Tanzu Kubernetes clusters on the Supervisor Cluster, invoking the Tanzu Kubernetes Grid Service API declaratively by using kubectl and a YAML definition. A Tanzu Kubernetes cluster resides in a Supervisor Namespace, and DevOps Engineers and Developers can deploy workloads and services to Tanzu Kubernetes clusters using the same tools they would use with standard Kubernetes clusters.


Test Case Summary

This test case procedure demonstrates a DevOps Engineer creating a Tanzu Kubernetes Cluster (TKC) in a Supervisor Cluster (SC) namespace. Next, it demonstrates the DevOps Engineer accessing the TKC API, authenticating, and browsing API resources and default settings. Lastly, it demonstrates visibility into the TKC from the perspective of the vSphere Administrator.


Prerequisites

  • Completion of SC01-TC01, SC01-TC02
  • vSphere Administrator console and user credentials
  • DevOps Engineer console and user credentials
  • vCenter (VC) content library for TKC node images subscribed or imported, and synchronized

Test Procedure

  1. Using the vSphere Administrator console and credentials, log in to the VC Web UI. Navigate to Menu > Workload Management. Select the Clusters tab, then select the vSphere_Cluster_Name hyperlink.

  2. Select the Configure tab and, under Namespaces, select General. In the Content Library section, select Edit, then select the radio button next to the content library hosting the TKC node images. Select OK to save and close the window.

  3. Using the DevOps Engineer console and credentials, log in to the SC API.

    kubectl vsphere login --vsphere-username ${DEVOPS_USER_NAME} --server=https://${SC_API_VIP} --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace ${SC_NAMESPACE_01}
    

    Expected:

    Logged in successfully. 
    You have access to the following contexts:
    ${SC_API_VIP}
    ${SC_NAMESPACE_01}
  4. Confirm you are logged into the ${SC_NAMESPACE_01} namespace

    kubectl config current-context
    

    Expected:

    ${SC_NAMESPACE_01}
  5. Identify the vSphere storage policy name, which is also the Kubernetes storage class name, by describing the namespace storage quota. The resource name takes the form Storage_Resource_Name.storageclass.storage.k8s.io/requests.storage; record only the portion before the first ".", which is Storage_Resource_Name.

    kubectl describe resourcequota ${SC_NAMESPACE_01}-storagequota
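
    Expected (illustrative only, assuming a storage policy named wrkldmgmt-storage-policy, the example name reused in Step 9; actual names and quota values vary):

    Name:       ${SC_NAMESPACE_01}-storagequota
    Namespace:  ${SC_NAMESPACE_01}
    Resource                                                                Used  Hard
    --------                                                                ----  ----
    wrkldmgmt-storage-policy.storageclass.storage.k8s.io/requests.storage   0     ...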
    
  6. List the virtual machine class options with the command

    kubectl get virtualmachineclasses
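
    Expected (illustrative; these are among the default virtual machine classes shipped with the Tanzu Kubernetes Grid Service, and the list in your environment may differ):

    NAME                 AGE
    best-effort-small    ...
    best-effort-medium   ...
    best-effort-large    ...
    guaranteed-small     ...
    guaranteed-medium    ...
    guaranteed-large     ...
    ...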
    
  7. List the virtual machine images available in the content library with the command

    kubectl get virtualmachineimages
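
    Expected (illustrative; the image name below is an assumption built from the distribution version used later in this test case, with "ob-xxxxxxxx" as a placeholder build number; actual names depend on the synchronized content library):

    NAME                                                          AGE
    ob-xxxxxxxx-photon-3-k8s-v1.18.19---vmware.1-tkg.1.17af790    ...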
    
  8. Describe the virtualmachineimage(s) to identify details such as the Kubernetes version and vSphere compatibility

    kubectl describe virtualmachineimage VirtualMachine_Image_Name
  9. Using a text editor, open the tkc01-small.yaml file. Update spec.distribution.version with the VirtualMachine_Image_Name identified in Step 7. Update spec.topology.controlPlane.storageClass, spec.topology.workers.storageClass, and spec.settings.storage.defaultClass with the Storage_Resource_Name identified in Step 5. Note: Enter the storage policy name in all lower-case. For example, "wrkldmgmt-storage-policy". Save and close the file. A sketch of the completed file appears after the note below.

    Note: Only enter the latter portion of the VirtualMachine_Image_Name starting with "v" and everything to the right. For example, "v1.18"
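    For reference, the following is a minimal sketch of what the completed tkc01-small.yaml might contain; the virtual machine class names and storage policy name are illustrative assumptions, so substitute the values recorded in Steps 5 through 7:

    apiVersion: run.tanzu.vmware.com/v1alpha1
    kind: TanzuKubernetesCluster
    metadata:
      name: tkc01-small
    spec:
      distribution:
        version: v1.18                            # portion of the image name starting with "v" (Step 7)
      topology:
        controlPlane:
          count: 1
          class: best-effort-small                # illustrative VM class from Step 6
          storageClass: wrkldmgmt-storage-policy  # Storage_Resource_Name from Step 5, all lower-case
        workers:
          count: 1
          class: best-effort-small                # illustrative VM class from Step 6
          storageClass: wrkldmgmt-storage-policy  # Storage_Resource_Name from Step 5, all lower-case
      settings:
        storage:
          defaultClass: wrkldmgmt-storage-policy  # default storage class inside the cluster
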

  10. Apply the TanzuKubernetesCluster spec to the SC namespace, ${SC_NAMESPACE_01}

    kubectl apply -f scenarios/devops/tkc01-small.yaml
    

    NOTE: To delete a TanzuKubernetesCluster, simply run the following:

    kubectl delete TanzuKubernetesCluster [cluster-name] -n [namespace-name]

  11. Monitor the TKC deployment progress with the following commands. This step is complete when kubectl get tanzukubernetesclusters -w reports PHASE as running for tkc01-small, and kubectl describe tanzukubernetescluster tkc01-small reports the Node Status for all nodes as ready.

    kubectl get tanzukubernetesclusters -w
    

    Expected:

    NAME          CONTROL PLANE   WORKER   DISTRIBUTION                      AGE     PHASE
    ...
    tkc01-small   1               1        v1.18.19+vmware.1-tkg.1.17af790   7m37s   running
    
    kubectl describe tanzukubernetescluster tkc01-small 
    

    Expected:

    ...
    Node Status:
      tkc01-small-control-plane-8mhw9:             ready
      tkc01-small-workers-s8fxw-65df5bc489-zpzbv:  ready
    ...
    
  12. Using the DevOps Engineer console and credentials, log in to the newly created tkc01-small Tanzu Kubernetes Cluster

    kubectl vsphere login --vsphere-username ${DEVOPS_USER_NAME} --server=https://${SC_API_VIP} --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace ${SC_NAMESPACE_01} --tanzu-kubernetes-cluster-name tkc01-small
    

    Expected:

    Logged in successfully. 
    You have access to the following contexts:
    ${SC_API_VIP}
    ${SC_NAMESPACE_01}
    tkc01-small
  13. Confirm you are logged into the tkc01-small Tanzu Kubernetes Cluster context

    kubectl config current-context
    

    Expected:

    tkc01-small
  14. Verify the status and URL of the Kubernetes master with the command

    kubectl cluster-info
    

    Expected:

    Kubernetes master is running at https://${SC_API_VIP}:6443
    KubeDNS is running at https://${SC_API_VIP}:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
  15. Verify status for all control-plane nodes and worker nodes

    kubectl get nodes
    

    Expected:

    NAME                                         STATUS   ROLES    AGE   VERSION
    tkc01-small-control-plane-8mhw9              Ready    master   13m   v1.18.19+vmware.1
    tkc01-small-workers-s8fxw-65df5bc489-zpzbv   Ready    <none>   10m   v1.18.19+vmware.1
    
  16. Review the available TKC API resources with the command

    kubectl api-resources
    
  17. Review the available TKC API versions with the command

    kubectl api-versions
    
  18. Run the following command to verify all TKC pods report either "Running" or "Completed"

    kubectl get pods -A
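
    Expected (illustrative and truncated; pod names, counts, and ages vary):

    NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
    kube-system   etcd-tkc01-small-control-plane-8mhw9              1/1     Running   0          12m
    kube-system   kube-apiserver-tkc01-small-control-plane-8mhw9    1/1     Running   0          12m
    ...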
    
  19. Verify the default storage class is set to the vSphere storage policy with the command

    kubectl get sc
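
    Expected (illustrative, assuming the wrkldmgmt-storage-policy example name; the storage class should be marked as default):

    NAME                                 PROVISIONER              ...
    wrkldmgmt-storage-policy (default)   csi.vsphere.vmware.com   ...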
    
  20. Review the default clusterrolebindings with the following command. Identify the clusterrolebindings for group:vsphere.local:administrators and group:Domain_Name:DevOps_GroupName (or, if user-level permissions were set on the namespace, user:Domain_Name:DevOps_UserName), as shown in the example after the command.

    kubectl get clusterrolebindings
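
    Expected (truncated to the bindings of interest; see Steps 21 and 22):

    NAME                                                                                 AGE
    ...
    vmware-system-auth-sync-wcp:${SC_NAMESPACE_01}:group:vsphere.local:administrators    13m
    vmware-system-auth-sync-wcp:${SC_NAMESPACE_01}:group:Domain_Name:DevOps_GroupName    13m
    ...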
    
  21. View the associated ClusterRole and Subject for group:vsphere.local:administrators with the command

    kubectl describe clusterrolebinding vmware-system-auth-sync-wcp:${SC_NAMESPACE_01}:group:vsphere.local:administrators
    
  22. View the associated ClusterRole and Subject for group:Domain_Name:DevOps_GroupName or user:Domain_Name:DevOps_UserName with the command

    kubectl describe clusterrolebinding vmware-system-auth-sync-wcp:${SC_NAMESPACE_01}:group:Domain_Name:DevOps_GroupName

    Note: If applying user-level permissions, replace group:Domain_Name:DevOps_GroupName in the above command with user:Domain_Name:DevOps_UserName.
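
    Expected (illustrative; the Subjects name format is environment-specific and elided here, and the bound ClusterRole is typically cluster-admin for namespace members with edit permission):

    Name:         vmware-system-auth-sync-wcp:${SC_NAMESPACE_01}:group:Domain_Name:DevOps_GroupName
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind   Name   Namespace
      ----   ----   ---------
      Group  ...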

  23. Using the vSphere Administrator console and credentials, log in to the vCenter Web UI and navigate to Menu > Workload Management. Select the Clusters tab, then select the vSphere_Cluster_Name hyperlink. In the left pane, expand the Namespaces resource pool, then select the ${SC_NAMESPACE_01} namespace.

  24. From the ${SC_NAMESPACE_01} Summary view, verify the Tanzu Kubernetes tile reports one (1) Tanzu Kubernetes cluster and one (1) healthy control plane node

  25. Select the Compute tab, expand VMware Resources, and select Tanzu Kubernetes. Review the summary information for tkc01-small and confirm Phase reports Running

  26. In the left pane, expand the ${SC_NAMESPACE_01} object and select the tkc01-small resource pool

  27. From the tkc01-small Summary view, expand Resource Settings to view the CPU and Memory configuration

  28. Select the Monitor tab > Utilization to view TKC cluster resource usage

  29. Select the VMs tab to list all TKC cluster nodes and verify the Status reports Normal


Status Pass/Fail

  • [ ] Pass
  • [ ] Fail

Return to Test Cases Inventory