
Milestone 2

milanchheta edited this page Apr 11, 2020 · 39 revisions

Task Summary for Milestone 2 (all work builds on top of Milestone 1)

  • Containerized each service using Docker.
  • Created a Kubernetes cluster with one master node and two worker nodes (VMs created on Jetstream).
  • Deployed the microservices on worker pods using deployment files.
  • Built the CI/CD pipeline using TravisCI.
  • On every successful build, TravisCI connects to the master node of the Kubernetes cluster over SSH using an encrypted key and updates the associated pod by running the corresponding shell script on that node.
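As a rough sketch of the pipeline described above (service, branch, and script names here are illustrative assumptions, not our exact configuration), a per-service `.travis.yml` deploy step could look like this, with the key encrypted beforehand via `travis encrypt-file`:

```yaml
# .travis.yml (illustrative sketch -- names and paths are assumptions)
services:
  - docker

script:
  - docker build -t orenda/session-service .

deploy:
  provider: script
  # update.sh on the master node runs kubectl to roll the matching pod
  script: ssh -i deploy_key ubuntu@MASTER_IP 'bash update.sh'
  on:
    branch: session-service
```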

Commits can be made to the following branches to test the CI/CD pipeline (these branches can also be used to review our updated code for each deployed microservice):

Running the application on VM Kubernetes cluster

Our website is deployed at: http://149.165.170.101:30002

Architecture

Setting up a VM cluster from scratch

We set up a Kubernetes cluster with one master node and two worker nodes across three VMs, using the commands given at https://iujetstream.atlassian.net/wiki/spaces/JWT/pages/35913730/OpenStack+command+line.

We followed this guide to set up the cluster: https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04

We have our master and worker nodes on the following IPs:

  • Master node: 149.165.170.101
  • Worker 1: 149.165.170.249
  • Worker 2: 149.165.169.39
  • INFO: The master node should not be deleted as part of testing.

All the microservices run with autoscaling: pods scale up or down with the load, keeping the average utilization at 50% per pod.
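The autoscaling behavior described above can be expressed as a HorizontalPodAutoscaler; a minimal sketch (the deployment name is hypothetical, the 50% target and 1–10 replica range come from our configuration):

```yaml
# Sketch of an HPA matching the 50% utilization target (deployment name is hypothetical)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: dashboard-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dashboard
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```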

You will need your public key added to our VM instances before you can connect over SSH (send us your SSH public key for this). You can then SSH to [email protected] and run kubectl get nodes to check the running nodes. Find instructions here

Setting up your VM cluster with our automated scripts

Prerequisites

  • A security group, network, router, and subnet already created, and an SSH key verified on Jetstream
  • The public and private SSH keys used for OpenStack Jetstream, and the openrc.sh config file (named as opensh.rc)

Steps to follow

  • git clone -b scripts https://github.com/airavata-courses/Orenda.git
  • Place your openrc.sh file and your public and private keys in the cloned folder
  • cd automation/shellScripts
  • Run 'bash preProcess.sh' (Windows users can use Git Bash) and follow the instructions shown in the terminal (provide the requested names exactly as asked)

After the script finishes, it prints the master and worker IP addresses.

  • Add the displayed master and worker IP addresses to the automation/kubernetes/hosts file.

  • Set the path to your private SSH key for 'ansible_ssh_private_key_file=' in the automation/kubernetes/hosts file.

  • Replace the path to the public key in the last line of the automation/kubernetes/initial.yml file.

  • Run 'bash ansible.sh'

  • After the command finishes, your Kubernetes cluster is created. You can verify this with ssh -i <--sshKeyName--> [email protected] (use the master node's IP). Find instructions here
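For reference, the edited automation/kubernetes/hosts inventory typically ends up looking like the sketch below (following the structure used by the DigitalOcean guide we reference; the IPs are our cluster's, the user and key path are placeholders):

```
# Sketch of automation/kubernetes/hosts -- user and key path are placeholders
[masters]
master ansible_host=149.165.170.101 ansible_user=root

[workers]
worker1 ansible_host=149.165.170.249 ansible_user=root
worker2 ansible_host=149.165.169.39 ansible_user=root

[all:vars]
ansible_python_interpreter=/usr/bin/python3
ansible_ssh_private_key_file=/path/to/private_key
```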

To verify that the master and worker nodes joined the Kubernetes cluster, run 'kubectl get nodes' on the master.
  • After your cluster is created, copy the 'deployFiles' folder and all files inside it to your master machine.
  • Run bash "/path/to/file/deploy.sh" in the master machine's terminal. All the microservices will be deployed with autoscaling from 1 to 10 pods depending on the load, keeping the average utilization at 50% per pod.
  • You can check the pods by running 'kubectl get pods'
  • You can access the services at [masterIP]:30002
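Exposing the services at [masterIP]:30002, as above, corresponds to a Kubernetes NodePort service; a minimal sketch (the service name, selector, and container port are assumptions):

```yaml
# Sketch of a NodePort service -- name, selector, and ports are assumptions
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80        # service port inside the cluster (assumed)
      targetPort: 80  # container port (assumed)
      nodePort: 30002 # matches [masterIP]:30002 above
```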

Performance Testing using JMeter

  • pyImgur is used to store the plots of the radar files. Imgur processes only a limited number of requests and declines the rest, which caps throughput in these tests.
  • You can run the tests yourself by loading this file into JMeter.

For each configuration and endpoint, the JMeter Graph and Summary Report screenshots (originally embedded here) were captured at the following thread counts:

| Configuration     | Login        | Register    | Dashboard    |
|-------------------|--------------|-------------|--------------|
| 1 Replica         | 1000 threads | 500 threads | 1000 threads |
| 3 Replicas        | 700 threads  | 500 threads | 1500 threads |
| 5 Replicas        | 700 threads  | 500 threads | 1000 threads |
| Elastic Resources | 700 threads  | 700 threads | 1500 threads |