10.6 Deploying a simple service to Kubernetes

Slides

Creating a cluster with Kind

If you use WSL2 and get the following error when creating a cluster with kind create cluster:

✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1

the solution is to change the command:

Specify the node image:

kind create cluster --image kindest/node:v1.23.0

In this section, we'll deploy a simple web application to a Kubernetes cluster. For that, we'll go through the following steps:

  1. Create a simple ping application in Flask
    • For this, we'll create a directory ping and, to avoid conflicts, a separate Pipfile for its own pipenv environment. Then we need to install flask and gunicorn.
    • We'll reuse the app we built in session 5 by copying ping.py and the Dockerfile with slight changes and then building the image.
      # ping.py
      from flask import Flask
    
      app = Flask('ping-app')
    
      @app.route('/ping', methods=['GET'])
      def ping():
          return 'PONG'
    
      if __name__ == "__main__":
          app.run(debug=True, host='0.0.0.0', port=9696)
    # Dockerfile
    FROM python:3.9-slim
    
    RUN pip install pipenv
    
    WORKDIR /app
    
    COPY ["Pipfile", "Pipfile.lock", "./"]
    
    RUN pipenv install --system --deploy
    
    COPY "ping.py" .
    
    EXPOSE 9696
    
    ENTRYPOINT ["gunicorn", "--bind=0.0.0.0:9696", "ping:app"]
    • To build the image, we need to specify the app name along with a tag, otherwise the local Kubernetes setup (kind) will cause problems: docker build -t ping:v001 . Now we can run the Docker container and, in a separate terminal, use the command curl localhost:9696/ping to test the application (see the sketch below).
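    • A minimal sketch of the full build-and-test loop for this step (the --rm flag and the explicit port mapping are illustrative choices, not from the original notes):
      # install the dependencies into the separate pipenv environment
      pipenv install flask gunicorn

      # build the image with an explicit name and tag
      docker build -t ping:v001 .

      # run the container, mapping the app port to the host
      docker run -it --rm -p 9696:9696 ping:v001

      # in a separate terminal, test the endpoint
      curl localhost:9696/ping   # expected response: PONG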
  2. Install kubectl and kind to build and test the cluster locally
    • We'll install kubectl from AWS because later we'll deploy our application on AWS: curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl.
    • To install kind for a local Kubernetes setup (an executable binary): wget https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64 -O kind && chmod +x ./kind. Once the utility is installed, we need to place it on our $PATH in our preferred binary installation directory (see the sketch below).
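    • Putting both installs together (the ~/.local/bin target is just one possible directory on $PATH; adjust to your setup):
      # kubectl (Amazon EKS build)
      curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl
      chmod +x ./kubectl

      # kind
      wget https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64 -O kind
      chmod +x ./kind

      # place both binaries on the PATH (directory is an assumption)
      mkdir -p ~/.local/bin
      mv ./kubectl ./kind ~/.local/bin/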
  3. Set up a Kubernetes cluster and test it
    • The first thing we need to do is create a cluster: kind create cluster (the default cluster name is kind)
    • Configure kubectl to interact with kind: kubectl cluster-info --context kind-kind
    • Check the running services to make sure it works: kubectl get service
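    • The whole step as commands, with roughly the expected output of the last one (the CLUSTER-IP value may differ on your machine):
      kind create cluster                        # default cluster name is 'kind'
      kubectl cluster-info --context kind-kind   # point kubectl at the new cluster
      kubectl get service                        # only the default service should be listed:
      # NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      # kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   40s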
  4. Create a deployment
    • Kubernetes requires a lot of configuration, and VS Code has a handy extension that takes a lot of the hassle away.
    • Create deployment.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata: # name of the deployment
        name: ping-deployment
      spec:
        replicas: 1 # number of pods to create
        selector:
          matchLabels: # all pods with the label app: ping belong to 'ping-deployment'
            app: ping
        template: # template of pods (all pods have same configuration)
          metadata:
            labels: # each app gets the same label (i.e., ping in our case)
              app: ping
          spec:
            containers: # list of containers in the pod
            - name: ping-pod
              image: ping:v001 # docker image with tag
              resources:
                limits:
                  memory: "128Mi"
                  cpu: "500m"
              ports:
              - containerPort: 9696 # port to expose
      • We can now apply deployment.yaml to our Kubernetes cluster: kubectl apply -f deployment.yaml
      • Next we need to load the Docker image into our cluster: kind load docker-image ping:v001
      • Executing the command kubectl get pod should show the pod status as Running.
      • To test the pod, forward its port: kubectl port-forward <pod-name> 9696:9696, then execute curl localhost:9696/ping to get the response (see the sketch below).
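      • The deployment steps as a single sketch (replace <pod-name> with the name shown by kubectl get pod):
        kubectl apply -f deployment.yaml      # create the deployment
        kind load docker-image ping:v001      # make the local image available inside the cluster
        kubectl get pod                       # STATUS should be Running

        # forward the pod's port and test it from a separate terminal
        kubectl port-forward <pod-name> 9696:9696
        curl localhost:9696/ping              # expected response: PONG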
  5. Create service for deployment
    • Create service.yaml
      apiVersion: v1
      kind: Service
      metadata: # name of the service ('ping')
        name: ping
      spec:
        type: LoadBalancer # type of the service (external in this case)
        selector: # which pods qualify for forwarding requests
          app: ping
        ports:
        - port: 80 # port of the service
          targetPort: 9696 # port of the pod
      • Apply service.yaml: kubectl apply -f service.yaml
      • Running kubectl get service will give us the list of external and internal services along with their service type and other information.
      • Test the service by forwarding a local port to it: kubectl port-forward service/ping 8080:80 (using 8080 instead of 80 to avoid the permission requirement); executing curl localhost:8080/ping should give us the output PONG (see the sketch below).
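      • The service steps as a single sketch (8080 is used locally because binding port 80 would require elevated privileges):
        kubectl apply -f service.yaml         # create the service
        kubectl get service                   # 'ping' shows up as type LoadBalancer; EXTERNAL-IP stays <pending> until a load balancer is installed

        # forward a local port to the service port and test it from a separate terminal
        kubectl port-forward service/ping 8080:80
        curl localhost:8080/ping              # expected response: PONG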
  6. Set up and use MetalLB as an external load balancer
    • Apply MetalLB manifest
      kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
      
    • Wait until the MetalLB pods (controller and speakers) are ready
      kubectl wait --namespace metallb-system \
                   --for=condition=ready pod \
                   --selector=app=metallb \
                   --timeout=90s
      
    • Set up the address pool used by the load balancers:
      • Get the range of IP addresses of the Docker kind network
      docker network inspect -f '{{.IPAM.Config}}' kind
      
      • Create the IP address pool using metallb-config.yaml
        apiVersion: metallb.io/v1beta1
        kind: IPAddressPool
        metadata:
          name: example
          namespace: metallb-system
        spec:
          addresses:
          - 172.20.255.200-172.20.255.250 # a range inside the kind Docker network subnet from the inspect command above
        ---
        apiVersion: metallb.io/v1beta1
        kind: L2Advertisement
        metadata:
          name: empty
          namespace: metallb-system
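      • Apply the address pool configuration (the file name metallb-config.yaml is the one used above):
        kubectl apply -f metallb-config.yaml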
    • Apply the deployment and service again to pick up the updates
      kubectl apply -f deployment.yaml
      kubectl apply -f service.yaml
      
    • Get external LB_IP
      kubectl get service
      
    • Test using the load balancer IP address
      curl <LB_IP>:80/ping
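    • If you prefer to grab the address programmatically, a jsonpath query works (a sketch; the service name ping matches the manifest above):
      LB_IP=$(kubectl get service ping -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      curl "$LB_IP:80/ping"   # expected response: PONG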
      

Notes

Add notes from the video (PRs are welcome)

⚠️ The notes are written by the community.
If you see an error here, please create a PR with a fix.

Navigation