SC02-TC03: Onboarding a Kubernetes Cluster to Tanzu Service Mesh (TSM) - Using TSM REST API

This scenario captures how a Kubernetes cluster can be onboarded to Tanzu Service Mesh (TSM).


Test Case Summary

This scenario test case captures how to onboard a Kubernetes cluster to Tanzu Service Mesh (TSM) using REST API calls. This is useful for those looking to automate their Kubernetes cluster onboarding.


Useful documentation


Prerequisites

  • Completion of Validating TSM Console Access SC01-TC01
  • Completion of API Token Generation and Authentication to the CSP SC01-TC03
  • Valid kubeconfig for targeted Kubernetes Cluster ${KUBERNETES_CLUSTER1}

Test Procedure

  1. If needed, renew your authentication to the CSP (SC01-TC03):

    export CSP_AUTH_TOKEN=$(curl -k -X POST "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize" -H "accept: application/json" -H "Content-Type: application/x-www-form-urlencoded" -d "refresh_token=${CSP_API_TOKEN}" | jq -r '.access_token')
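    Before proceeding, it can help to sanity-check that a token was actually returned; jq prints the literal string `null` when the field is missing. A minimal sketch (the token value here is a placeholder standing in for the output of the curl call above):

```shell
# Placeholder standing in for the value set by the curl call above.
CSP_AUTH_TOKEN="example-access-token"

# jq -r prints the string "null" when .access_token is absent from the
# response, so check for both an empty value and "null".
if [ -z "${CSP_AUTH_TOKEN}" ] || [ "${CSP_AUTH_TOKEN}" = "null" ]; then
  echo "CSP authentication failed; check CSP_API_TOKEN" >&2
else
  echo "CSP token acquired"
fi
```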
    
  2. Begin onboarding your Kubernetes cluster by retrieving the TSM onboarding URL. Execute the following REST API call, using your given TSM POC server value for the ${TSM_SERVER_NAME} variable and the access_token obtained in the previous step as the value for the ${CSP_AUTH_TOKEN} variable.

    curl -k -X GET "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/onboard-url" -H "csp-auth-token:${CSP_AUTH_TOKEN}"
    

    Expected:

    {
        "url":"https://${TSM_SERVER_NAME}/cluster-registration/k8s/operator-deployment.yaml"
    }
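    Step 5 below expects the returned URL in a ${TSM_ONBOARDING_URL} variable; it can be captured with jq. A sketch, where the response body is a canned example matching the Expected output above (in practice it would come from the curl call):

```shell
# Canned response matching the Expected output above; in practice:
# RESPONSE=$(curl -k -X GET "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/onboard-url" ...)
RESPONSE='{"url":"https://tsm-server.example.com/cluster-registration/k8s/operator-deployment.yaml"}'

# Extract the onboarding URL for use in the kubectl apply step later.
TSM_ONBOARDING_URL=$(printf '%s' "$RESPONSE" | jq -r '.url')
echo "$TSM_ONBOARDING_URL"
```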
  3. The TSM onboarding URL obtained in the previous step references the Kubernetes manifests/objects and custom resource definitions (CRDs) needed to install Tanzu Service Mesh components into your cluster. Before installing these components, confirm you are connected to the right Kubernetes cluster, ${KUBERNETES_CLUSTER1_CONTEXT}. If working from the supplied Management container, you can run the following:

    kubectx
    

    Expected:

    tkc-aws-1-admin@tkc-aws-2
    tkc-aws-3-admin@tkc-aws-3
    ${KUBERNETES_CLUSTER1_CONTEXT}

    NOTE: If you need to change to the ${KUBERNETES_CLUSTER1_CONTEXT} context, run the following.

    kubectx ${KUBERNETES_CLUSTER1_CONTEXT}
    

    Otherwise, if not using the supplied Management Container, run the following:

    kubectl config current-context

    NOTE: If you need to change to the ${KUBERNETES_CLUSTER1_CONTEXT} context, run the following.

    kubectl config use-context ${KUBERNETES_CLUSTER1_CONTEXT}
  4. Confirm your preferred namespace is set to ${KUBERNETES_CLUSTER1_NAMESPACE} (using default as the namespace works fine). If working from the supplied Management container, you can run the following:

    kubens
    

    Expected:

    ...
    ${KUBERNETES_CLUSTER1_NAMESPACE}
    istio-system
    kapp-controller
    kube-node-lease
    kube-public
    ...
    

    NOTE: If you need to change to the ${KUBERNETES_CLUSTER1_NAMESPACE} namespace, run the following.

    kubens ${KUBERNETES_CLUSTER1_NAMESPACE}
    

    Otherwise, if not using the supplied Management Container, run the following:

    kubectl config view --minify --output 'jsonpath={..namespace}'; echo

    NOTE: If you need to change to the ${KUBERNETES_CLUSTER1_NAMESPACE} namespace, run the following.

    kubectl config set-context --current --namespace=${KUBERNETES_CLUSTER1_NAMESPACE}
  5. Having validated the proper kubectl context, apply the manifest referenced by the TSM onboarding URL to your Kubernetes cluster with the following command. For ${TSM_ONBOARDING_URL}, use the url value returned in step 2.

    kubectl apply -f ${TSM_ONBOARDING_URL}
    

    Expected:

    namespace/vmware-system-tsm created
    customresourcedefinition.apiextensions.k8s.io/aspclusters.allspark.vmware.com created
    customresourcedefinition.apiextensions.k8s.io/clusters.client.cluster.tsm.tanzu.vmware.com created
    customresourcedefinition.apiextensions.k8s.io/tsmclusters.tsm.vmware.com created
    customresourcedefinition.apiextensions.k8s.io/clusterhealths.client.cluster.tsm.tanzu.vmware.com created
    configmap/tsm-agent-operator created
    serviceaccount/tsm-agent-operator-deployer created
    clusterrole.rbac.authorization.k8s.io/tsm-agent-operator-cluster-role created
    role.rbac.authorization.k8s.io/vmware-system-tsm-namespace-admin-role created
    clusterrolebinding.rbac.authorization.k8s.io/tsm-agent-operator-crb created
    rolebinding.rbac.authorization.k8s.io/tsm-agent-operator-rb created
    deployment.apps/tsm-agent-operator created
    serviceaccount/operator-ecr-read-only--service-account created
    secret/operator-ecr-read-only--aws-credentials created
    role.rbac.authorization.k8s.io/operator-ecr-read-only--role created
    rolebinding.rbac.authorization.k8s.io/operator-ecr-read-only--role-binding created
    Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
    cronjob.batch/operator-ecr-read-only--renew-token created
    job.batch/operator-ecr-read-only--renew-token created
    job.batch/update-scc-job created
    

  6. Submit a request to onboard your Kubernetes cluster with the following command. For the ${TSM_SERVER_NAME} value, use your given TSM POC server. For ${CLUSTER_NAME} you can use any name you want, but it makes sense to match the cluster context name used in your kube configuration (NOTE: there are restrictions on the allowed characters for a cluster name; use all lowercase letters and dashes). Use the access_token value from the previous authentication step for ${CSP_AUTH_TOKEN}.

    curl -k -X PUT "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/${CLUSTER_NAME}?createOnly=true" -H "csp-auth-token:${CSP_AUTH_TOKEN}" -H "Content-Type: application/json" -d '
    {
        "displayName": "'"${CLUSTER_NAME}"'",
        "description": "",
        "tags": [],
        "autoInstallServiceMesh": false,
        "enableNamespaceExclusions":true,
        "namespaceExclusions": [{
            "match": "kapp-controller",
            "type": "EXACT"
        },{
            "match": "kube-node-lease",
            "type": "EXACT"
        },{
            "match": "kube-public",
            "type": "EXACT"
        },{
            "match": "kube-system",
            "type": "EXACT"
        },{
    ...    
        # ADD/REMOVE NAMESPACES YOU WANT TO EXCLUDE
    ...    
        }]
    }'

    Expected:

    {
        "displayName": "${CLUSTER_NAME}",
        "description": "",
        "tags": [],
        "labels": [],
        "autoInstallServiceMesh": false,
        "enableNamespaceExclusions": true,
    ...
        "proxyConfig": { "password": "**redacted**" },
        "id": "${CLUSTER_NAME}",
        "token": "REDACTED",  # <--------------------- ONBOARDING TOKEN HERE
        "registered": false,
        "systemNamespaceExclusions":
    ...
    }
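    Step 7 below needs the token field from this response in a ${TSM_ONBOARDING_TOKEN} variable; it can be pulled out with jq. A sketch, where the response is a trimmed, canned example (in practice it would come from the curl call above):

```shell
# Trimmed canned response; in practice:
# RESPONSE=$(curl -k -X PUT "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/${CLUSTER_NAME}?createOnly=true" ...)
RESPONSE='{"displayName":"my-cluster","id":"my-cluster","token":"example-onboarding-token","registered":false}'

# Extract the onboarding token used to create the cluster-token secret.
TSM_ONBOARDING_TOKEN=$(printf '%s' "$RESPONSE" | jq -r '.token')
echo "$TSM_ONBOARDING_TOKEN"
```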
  7. Generate a secret to allow the cluster to establish a secure connection to the global TSM control plane. Run the following command, using the token value from the previous step for the ${TSM_ONBOARDING_TOKEN} variable.

    kubectl -n vmware-system-tsm create secret generic cluster-token --from-literal=token=${TSM_ONBOARDING_TOKEN}
    

    Expected:

    secret/cluster-token created
    

  8. Validate that the cluster was able to make a secure connection to the global TSM control plane. For the ${TSM_SERVER_NAME} value, use your given TSM POC server. For ${CLUSTER_NAME}, use the name you provided in the previous steps. Use the access_token value from the previous authentication step for ${CSP_AUTH_TOKEN}.

    curl -k -X GET "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/${CLUSTER_NAME}" -H "csp-auth-token:${CSP_AUTH_TOKEN}"
    

    Expected:

    {
        "displayName": "${CLUSTER_NAME}",
        "description": "",
        "tags": [],
        "labels": [],
        "autoInstallServiceMesh": false,
        "enableNamespaceExclusions": true,
        "namespaceExclusions": [
        ...
        ],
        "proxyConfig": {
            "password": "**redacted**"
        },
        "id": "${CLUSTER_NAME}",
        "name": "${CLUSTER_NAME}",
        "type": "Kubernetes",
        "version": "v1.21.8+vmware.1",
        "status": {
            "state": "Connected",  # <---------------------  MAKE SURE THIS SAYS CONNECTED
            "metadata": {
            "substate": "",
            "progress": 0
            },
            "code": 0,
            "message": "Cluster registration succeeded",
            "updateTimestamp": "2022-06-23T19:47:14Z"
        },
    ...
    }
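    Checking the status.state field by eye works, but for automation it can be extracted directly with jq. A sketch, using a canned, trimmed response matching the Expected output above (in practice it would come from the curl call):

```shell
# Canned, trimmed response; in practice:
# RESPONSE=$(curl -k -X GET "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/${CLUSTER_NAME}" ...)
RESPONSE='{"id":"my-cluster","status":{"state":"Connected","code":0,"message":"Cluster registration succeeded"}}'

# A real script would poll this in a loop (with a sleep) until Connected.
STATE=$(printf '%s' "$RESPONSE" | jq -r '.status.state')
if [ "$STATE" = "Connected" ]; then
  echo "Cluster is connected; safe to install TSM"
else
  echo "Cluster not yet connected (state: $STATE); retry shortly" >&2
fi
```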
  9. When the status.state field shows Connected in the step above, you can install TSM on your Kubernetes cluster. For the ${TSM_SERVER_NAME} value, use your given TSM POC server. For ${CLUSTER_NAME}, use the name you provided in the previous steps. Use the access_token value from the previous authentication step for ${CSP_AUTH_TOKEN}. To install the latest TSM version, use default as the value for ${TSM_VERSION}.

    curl -k -X PUT "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/${CLUSTER_NAME}/apps/tsm" -H "csp-auth-token:${CSP_AUTH_TOKEN}" -H "Content-Type: application/json" -d '{"version": "'"${TSM_VERSION}"'"}'
    

    Expected:

    {
        "id":"<REDACTED>"
    }
  10. The TSM installation can take a couple of minutes. To check the status of the installation, run the following. For the ${TSM_SERVER_NAME} value, use your given TSM POC server. For ${CLUSTER_NAME}, use the name you provided in the previous steps. Use the access_token value from the previous authentication step for ${CSP_AUTH_TOKEN}.

    curl -k -X GET "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/${CLUSTER_NAME}" -H "csp-auth-token:${CSP_AUTH_TOKEN}"
    

    Expected:

    {
    ...
        "status": {
            "state": "Installing", # <---------------------  TSM is INSTALLING
            "metadata": {
            "substate": "",
            "progress": 60
            },
            "code": 0,
            "message": "Installing mesh dependencies...", # <--------------------- PROGRESS MESSAGING
            "updateTimestamp": "2022-06-23T19:57:03Z"
        },
    ...
    }

    The state should go from Installing to Ready when the TSM installation is complete.

    {
    ...
        "status": {
            "state": "Ready", # <---------------------  MAKE SURE THIS GOES FROM `Installing` TO `Ready`
            "metadata": {
            "substate": "",
            "progress": 0
            },
            "code": 0,
            "message": "",
            "updateTimestamp": "2022-06-23T19:57:53Z"
        },
    ...
    }
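    For scripted installs, the state and progress fields can be polled the same way. A sketch, using a canned response matching the Installing output above (in practice it would come from the curl call):

```shell
# Canned, trimmed response; in practice:
# RESPONSE=$(curl -k -X GET "https://${TSM_SERVER_NAME}/tsm/v1alpha1/clusters/${CLUSTER_NAME}" ...)
RESPONSE='{"status":{"state":"Installing","metadata":{"substate":"","progress":60},"code":0,"message":"Installing mesh dependencies..."}}'

# Extract the install state and progress percentage.
STATE=$(printf '%s' "$RESPONSE" | jq -r '.status.state')
PROGRESS=$(printf '%s' "$RESPONSE" | jq -r '.status.metadata.progress')
echo "TSM install: $STATE (${PROGRESS}%)"
# A real script would loop with a sleep until STATE is "Ready".
```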
  11. Validate that an external load balancer was created.

    kubectl get svc -A | grep LoadBalancer
    

    Expected:

    istio-system              istio-ingressgateway            LoadBalancer   100.68.30.11     <REDACTED>.us-west-2.elb.amazonaws.com   15021:31714/TCP,80:31268/TCP,443:32006/TCP   11d
    


Status Pass/Fail

  • [ ] Pass
  • [ ] Fail

Return to Test Cases Inventory