
SC01-TC03: Create and Configure a Supervisor Cluster Namespace with Resource Limitations

A namespace sets the resource boundaries within which vSphere Pods and Tanzu Kubernetes clusters created by using the Tanzu Kubernetes Grid Service can run. When initially created, the namespace has unlimited resources within the Supervisor Cluster. vSphere administrators can set limits for CPU, memory, storage, as well as the number of Kubernetes objects that can run within the namespace. For each namespace, Workload Management creates a resource pool in vSphere and storage quotas in Kubernetes to enforce the storage limitations.


Test Case Summary

This test procedure demonstrates centralized management, monitoring, and enforcement of namespace resource limits. It also demonstrates deploying Kubernetes workloads within the constraints of namespace quotas.


Prerequisites

  • Completion of SC01-TC01, SC01-TC02
  • vSphere Administrator console and user credentials
  • DevOps Engineer console and user credentials

Test Data

Container Image: nginx:latest


Test Procedure

  1. Using the vSphere Administrator console and credentials, log in to the VC Web UI and navigate to Menu > Workload Management.

  2. Select the Namespaces tab, then select the ${SC_NAMESPACE_01} hyperlink.

  3. On the Capacity and Usage tile, select EDIT LIMITS, enter the following values, then select OK to save and close:

    • CPU: 1 GHz
    • Memory: 4 GB
    • Storage: 10 GB
  4. Verify the CPU, Memory, and Storage limits update with the entered values.

    Note: If the limits do not update immediately, allow a few seconds for auto-refresh or manually refresh the page.

  5. On the Status tile, under Location, select the vSphere_Cluster_Name.

  6. In the left pane, select Namespaces. Next, select the Resource Pools tab. Verify the ${SC_NAMESPACE_01} resource pool CPU and Memory Limit values.

    Expected: CPU Limit (MHz) 1000
    Memory Limit (MB) 4294 (the 4 GB limit rendered in decimal megabytes: 4 × 1024³ bytes ≈ 4294 MB)

  7. Using the DevOps Engineer console and credentials, log in to the SC API.

    kubectl vsphere login --vsphere-username ${DEVOPS_USER_NAME} --server=https://${SC_API_VIP} --insecure-skip-tls-verify
    

    Expected:

    Logged in successfully. 
    You have access to the following contexts:
    ${SC_API_VIP}
    ${SC_NAMESPACE_01}
  8. Set the context for the ${SC_NAMESPACE_01} namespace with the command

    kubectl config use-context ${SC_NAMESPACE_01}
    
  9. View the namespace resource limits with the command

    kubectl describe ns ${SC_NAMESPACE_01}
    

    Convert the vmware-system-resource-pool-cpu-limit core value to millicores (cores x 1000 = millicores).

    Note: As a reference for converting the vmware-system-resource-pool-cpu-limit value, which is expressed in cores, to the GHz equivalent set in Step 3, run the bash script sc01/v7k8s-ns-cpu-limit-calc.sh, or calculate manually with the following formula: vmware-system-resource-pool-cpu-limit = vSphere Namespace CPU GHz Limit / (vSphere Cluster GHz Capacity / vSphere Cluster Total Processors). A hedged sketch of the manual calculation appears below.
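
    The script's contents are not reproduced in this procedure; the following is a minimal sketch of the manual calculation, assuming hypothetical cluster values (21 GHz capacity across 10 processors, i.e., 2.1 GHz per core):

    # Hypothetical values; substitute your cluster's figures
    NS_CPU_LIMIT_GHZ=1          # namespace CPU limit set in Step 3
    CLUSTER_GHZ_CAPACITY=21     # vSphere Cluster GHz capacity
    CLUSTER_PROCESSORS=10       # vSphere Cluster total processors
    # cores = GHz limit / per-core GHz; here, 1 / 2.1 ≈ 0.476 cores (476 millicores)
    awk -v ghz=$NS_CPU_LIMIT_GHZ -v cap=$CLUSTER_GHZ_CAPACITY -v procs=$CLUSTER_PROCESSORS \
      'BEGIN {cores = ghz / (cap / procs); printf "cores: %.4f (millicores: %d)\n", cores, cores * 1000}'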

    Expected:

    ...
    vmware-system-resource-pool-cpu-limit: 0.4760
    vmware-system-resource-pool-memory-limit: 4096Mi
    ...
    Resource Quotas
    Name: ${SC_NAMESPACE_01}
    Resource          Used   Hard
    --------          ----   ----
    requests.storage  0      10Gi
    ...
  10. Using a text editor, open the file deploy-resource-quota-test.yaml. Edit the spec.template.spec.containers.resources.requests.cpu value (in millicores) to exceed the vmware-system-resource-pool-cpu-limit value identified in Step 9. Then set the spec.template.spec.containers.resources.limits.cpu value to twice the requests.cpu value. Save and close the file. A hedged sketch of the manifest's relevant fields follows the command below.

    vi scenarios/operator/deploy-resource-quota-test.yaml
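
    The repository file's exact contents are not reproduced here; the following is a plausible shape, assuming the names used elsewhere in this test case (deployment deploy-resource-constraints, label app=deploy-resourcequota-test, image nginx:latest) and hypothetical CPU values:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deploy-resource-constraints
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: deploy-resourcequota-test
      template:
        metadata:
          labels:
            app: deploy-resourcequota-test
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            resources:
              requests:
                cpu: 600m    # hypothetical: chosen to exceed a ~476m namespace limit
              limits:
                cpu: 1200m   # twice the request, per this step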
    
  11. Apply the deployment spec to the SC API

    kubectl apply -f scenarios/operator/deploy-resource-quota-test.yaml
    

    Expected:

    deployment.apps/deploy-resource-constraints created
  12. View the deployment status with the command

    kubectl get deployment deploy-resource-constraints
    

    Expected:

    NAME                          READY   UP-TO-DATE   AVAILABLE
    deploy-resource-constraints   0/1     1            0
  13. Describe the pods to view their status and recent events

    kubectl describe pods -l app=deploy-resourcequota-test
    

    Expected:


    Status: Pending
    ...
    Events:
    Type     Reason            Age    From                Message
    ----     ------            ----   ----                -------
    Warning  FailedScheduling  3m21s  default-scheduler   The amount of CPU resource available in the parent resource pool is insufficient for the operation.
  14. Delete the deployment

    kubectl delete deploy deploy-resource-constraints
    
  15. Reopen the file deploy-resource-quota-test.yaml and reduce both the spec.template.spec.containers.resources.requests.cpu and spec.template.spec.containers.resources.limits.cpu values to approximately two-thirds (2/3) of the vmware-system-resource-pool-cpu-limit millicore equivalent identified in Step 9 (for example, roughly 317m if the limit converts to 476m; a hedged sketch follows). Save the changes and reapply the deployment spec.
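
    A minimal sketch of the edited stanza, assuming the ~476m conversion from Step 9 (a hypothetical value; use your own Step 9 result):

    resources:
      requests:
        cpu: 317m   # roughly two-thirds of a 476m limit
      limits:
        cpu: 317m   # reduced to match the request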

    vi scenarios/operator/deploy-resource-quota-test.yaml
    
    kubectl apply -f scenarios/operator/deploy-resource-quota-test.yaml
    

    Expected:

    deployment.apps/deploy-resource-constraints created
  16. Monitor the status of the deployment, allowing up to 60 seconds for the nodes to pull the container image

    kubectl get deploy deploy-resource-constraints
    

    Expected:

    NAME                          READY   UP-TO-DATE   AVAILABLE
    deploy-resource-constraints   1/1     1            1
  17. Scale out the deployment, increasing the number of replicas to three (3).

    kubectl scale deploy deploy-resource-constraints --replicas=3
    

    Expected:

    deployment.apps/deploy-resource-constraints scaled
  18. Monitor the status of the deployment, allowing up to 60 seconds for the nodes to pull the container image

    kubectl get deploy deploy-resource-constraints
    

    Expected:

    NAME                          READY   UP-TO-DATE   AVAILABLE
    deploy-resource-constraints   1/3     3            1
  19. List the deployment’s pods.

    kubectl get pods -l app=deploy-resourcequota-test
    

    Expected:

    NAME                            READY   STATUS
    deploy-resource-constraints-x   0/1     Pending
    deploy-resource-constraints-y   1/1     Running
    deploy-resource-constraints-z   0/1     Pending
  20. Identify the vSphere storage policy name, which is also the Kubernetes storage class name, by describing the namespace storage quota. The resource name appears in the form Storage_Resource_Name.storageclass.storage.k8s.io/requests.storage; record only the portion before the first ".". An optional extraction one-liner follows the command below.

    kubectl describe resourcequota ${SC_NAMESPACE_01}-storagequota
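
    As an optional convenience (a hedged one-liner, assuming the quota exposes a single storage-class entry), the name can be extracted directly:

    kubectl describe resourcequota ${SC_NAMESPACE_01}-storagequota \
      | grep 'storageclass.storage.k8s.io/requests.storage' \
      | cut -d. -f1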
    
  21. With a text editor, open the file pvc-resource-quota-test.yaml. Update spec.storageClassName with the Storage_Resource_Name identified in Step 20. Then set the spec.resources.requests.storage value to 20Gi. Save and close the file. A hedged sketch of the manifest follows the command below.

    vi scenarios/operator/pvc-resource-quota-test.yaml
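
    The repository file's exact contents are not reproduced here; the following is a plausible shape, assuming the PVC name used later in this procedure (pvc-resourcequota-test) and the ReadWriteOnce access mode shown in Step 25:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-resourcequota-test
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: Storage_Resource_Name   # from Step 20
      resources:
        requests:
          storage: 20Gi   # deliberately exceeds the 10 GB namespace quota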
    
  22. Attempt to create the PVC with the command

    kubectl apply -f scenarios/operator/pvc-resource-quota-test.yaml
    

    Expected:

    Error from server (Forbidden): error when creating "pvc-resource-quota-test.yaml": persistentvolumeclaims "pvc-resourcequota-test" is forbidden: exceeded quota: ${SC_NAMESPACE_01}, requested: requests.storage=20Gi, used: requests.storage=0, limited: requests.storage=10Gi
  23. Reopen the file pvc-resource-quota-test.yaml. Change the spec.resources.requests.storage value to 9Gi. Save and close the file.

    vi scenarios/operator/pvc-resource-quota-test.yaml
    
  24. Reattempt to create the PVC with the command

    kubectl apply -f scenarios/operator/pvc-resource-quota-test.yaml
    

    Expected:

    persistentvolumeclaim/pvc-resourcequota-test created
  25. Verify the PVC status and record the volume name

    kubectl get pvc pvc-resourcequota-test
    

    Expected:

    NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
    pvc-resourcequota-test   Bound    pvc-6980d377-f344-48b5-877a-eaefbc3d94b0   9Gi        RWO            Storage_Class_Name   63s
  26. Using the vSphere Administrator console and credentials, log in to the VC Web UI and navigate to Menu > Workload Management. Select the Namespaces tab, then select the ${SC_NAMESPACE_01} hyperlink. Confirm that the Capacity and Usage tile values for CPU, Memory, and Storage reflect the resources consumed by the Running pods in the Kubernetes deployment deploy-resource-constraints and the PVC pvc-resourcequota-test.

    Note: Verify the CPU utilization with the following formulas:

    CPU Utilization GHz Min = (spec.template.spec.containers.resources.requests.cpu / 1000) x (number of Running pods) x (vSphere Cluster GHz Capacity / vSphere Cluster Total Processors)

    CPU Utilization GHz Max = (spec.template.spec.containers.resources.limits.cpu / 1000) x (number of Running pods) x (vSphere Cluster GHz Capacity / vSphere Cluster Total Processors)

    Expected: The Capacity and Usage tile reports utilization metrics within the calculated range(s), or within 10% of a calculated value. A hedged worked example follows.
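
    A minimal worked example, assuming hypothetical values (requests.cpu and limits.cpu both 317m as edited in Step 15, one Running pod, and 2.1 GHz per core):

    REQ_M=317; LIM_M=317; RUNNING=1; CORE_GHZ=2.1
    awk -v r=$REQ_M -v l=$LIM_M -v n=$RUNNING -v g=$CORE_GHZ 'BEGIN {
      printf "CPU Utilization GHz Min: %.3f\n", (r/1000)*n*g;
      printf "CPU Utilization GHz Max: %.3f\n", (l/1000)*n*g;
    }'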

  27. In the Pods tile, record the number of Running, Pending, and Failed pods.

    Note: The quantity reported under the tile title, Pods, reflects the total number of pods regardless of status. Hover the cursor over the line graph for a pop-up summary that breaks down the number of pods by status.

    Expected: Running: 1
    Pending: 2
    Failed: 0
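
    As an optional cross-check from the DevOps Engineer console (a hedged one-liner using the deployment's pod label), count the pods by status:

    kubectl get pods -l app=deploy-resourcequota-test --no-headers \
      | awk '{count[$3]++} END {for (s in count) print s, count[s]}'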

  28. In the Capacity and Usage tile, select EDIT LIMITS. Increase the CPU value to 2 GHz. Then, select OK to save and close.

  29. Monitor the line graph as the Pending pods transition to Running. Allow up to 90 seconds for the changes to take effect and the graph to update. If the tile does not auto-refresh with the updated pod count, refresh the page. After 90 seconds, record the number of Running, Pending, and Failed pods.

    Expected: Running: 3
    Pending: 0
    Failed: 0

    Actual:

  30. In the Capacity and Usage tile, select EDIT LIMITS. Clear all values so that each line reports No limit. Select OK to save and close the window.

  31. Switch back to the DevOps Engineer console, and clean up the namespace by running the following commands:

    kubectl delete deploy deploy-resource-constraints
    kubectl delete pvc pvc-resourcequota-test
    

Status Pass/Fail

  • [ ] Pass
  • [ ] Fail

Return to Test Cases Inventory