[WIP] Add functional tests #84

Status: Open · wants to merge 1 commit into base: master
7 changes: 6 additions & 1 deletion automation/check-patch.sh
@@ -3,7 +3,12 @@
 source "${0%/*}/common.sh"
 
 main() {
-    echo "TODO: add tests"
+    export KUBEVIRT_PROVIDER=os-3.10.0
+    timeout \
+        --foreground \
+        --kill-after 5m \
+        30m \
+        "${0%/*}/test.sh"
 }
 
 [[ "${BASH_SOURCE[0]}" == "$0" ]] && main "$@"
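The `timeout` wrapper above bounds the whole test run: the suite gets 30 minutes, and if it ignores the SIGTERM sent at the deadline, SIGKILL follows 5 minutes later. A minimal sketch of the same semantics on a toy command (the 1-second durations are just for illustration):

```shell
# Give the command 1 second to finish; --foreground keeps it attached to
# the terminal's process group, and --kill-after sends SIGKILL 1s after
# the initial SIGTERM if the command ignores it. GNU timeout exits with
# status 124 when the time limit is reached.
timeout --foreground --kill-after 1s 1s sleep 5
status=$?
echo "exit status: $status"
```
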
77 changes: 77 additions & 0 deletions automation/test.sh
@@ -0,0 +1,77 @@
#!/bin/bash -xe

kcmd() {
    local kubevirt="${KUBEVIRT_REPO_PATH:?Please export KUBEVIRT_REPO_PATH variable}"

    ( cd "$kubevirt"; "$@" )
}

cluster_up() {
    kcmd make cluster-up
}

cluster_down() {
    kcmd make cluster-down
}

cluster_sync() {
    local registry_port
    registry_port="$(get_public_port 5000)"
    local num_of_nodes=${KUBEVIRT_NUM_NODES:-1}
    local image_name="registry:5000/kubevirt-apb/kubevirt"
    local node

    make apb_build DOCKERHOST="localhost:${registry_port}"
    make docker_push DOCKERHOST="localhost:${registry_port}"

    for ((i=1; i<=num_of_nodes; i++)); do
        node="node$(printf "%02d" "$i")"
        kcmd cluster/cli.sh ssh "$node" sudo docker rmi "$image_name" || :
        kcmd cluster/cli.sh ssh "$node" sudo docker pull "$image_name"
    done
}
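
The node names in the loop above come from a zero-padded `printf`; for instance:

```shell
# %02d pads the node index to two digits, matching names like node01.
printf "node%02d\n" 3   # prints "node03"
```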

get_public_port() {
    local provider="${KUBEVIRT_PROVIDER:?Please export KUBEVIRT_PROVIDER variable}"
    local private_port="${1:?}"
    local socket

    socket="$(docker port "${provider}-dnsmasq" "$private_port")"

    echo "${socket##*:}"
}
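
`docker port` prints an address such as `0.0.0.0:33001`; the `${socket##*:}` expansion in `get_public_port` strips the longest prefix ending in `:`, leaving only the port. For example:

```shell
# Strip everything up to the last ':' to isolate the host port.
socket="0.0.0.0:33001"
echo "${socket##*:}"   # prints "33001"
```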

_kubectl() {
    kcmd cluster/kubectl.sh "$@"
}

run_apb() {
    local tempfile && tempfile="$(mktemp)"

    sed \
        -e 's/docker.io/registry:5000/g' \
        -e 's/ansibleplaybookbundle/kubevirt-apb/g' \
        templates/kubevirt-apb.yml \
        > "$tempfile"

    _kubectl create -f "$tempfile"
Review comment (Contributor): A serviceinstance can end up in several states; I usually wait until `status.provisionStatus == Provisioned` with the command `oc get serviceinstances -n kube-system kubevirt -o template --template "{{.status.provisionStatus}}"`. Even when this succeeds, it doesn't mean everything went well: I have already seen it several times that all tasks were skipped and the APB "successfully" did nothing.

The easiest sanity check is probably querying the kubevirt version via the API: `curl -X GET -H "Authorization: Bearer $(oc whoami -t)" -k https://localhost:8443/apis/subresources.kubevirt.io/v1alpha2/version`

}
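
Following the review comment above, a polling helper could look roughly like this (a sketch only: the function name, retry count, and interval are assumptions; `_kubectl` is the wrapper defined earlier):

```shell
# Hypothetical helper: poll the serviceinstance until
# status.provisionStatus reports "Provisioned", or give up after
# ~10 minutes (60 tries x 10s).
wait_for_provisioned() {
    local retries=60
    local status
    while [ "$retries" -gt 0 ]; do
        status="$(_kubectl get serviceinstances -n kube-system kubevirt \
            -o template --template "{{.status.provisionStatus}}")" || :
        [ "$status" = "Provisioned" ] && return 0
        retries=$((retries - 1))
        sleep 10
    done
    echo "serviceinstance did not reach Provisioned state" >&2
    return 1
}
```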

get_kubevirt() {
    local kubevirt_tag="v0.7.0-alpha.5"
    local kubevirt_repo_url="https://github.com/kubevirt/kubevirt.git"
    readonly KUBEVIRT_REPO_PATH="$(mktemp -dt kubevirt.XXX)"

    git clone "$kubevirt_repo_url" "$KUBEVIRT_REPO_PATH"
    kcmd git checkout "$kubevirt_tag"
}

main() {
    [[ "$KUBEVIRT_REPO_PATH" ]] || get_kubevirt
    cluster_up
    trap "cluster_down" EXIT
    cluster_sync
    run_apb
Review comment (Contributor): It would also be good to run deprovisioning; I have seen it broken many times too. It amounts to running delete on the serviceinstance you created in run_apb, then waiting until the kubevirt serviceinstance disappears from the cluster and no kubevirt pods remain.

}
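
Per the second review comment, deprovisioning could be exercised along these lines (a sketch under assumptions: the function name, the `grep`-based pod filter, and the timeouts are all invented for illustration):

```shell
# Hypothetical helper: delete the serviceinstance created by run_apb,
# then wait until it is gone and no kubevirt pods are left.
deprovision_apb() {
    _kubectl delete serviceinstances -n kube-system kubevirt
    local retries=60
    while [ "$retries" -gt 0 ]; do
        if ! _kubectl get serviceinstances -n kube-system kubevirt > /dev/null 2>&1 \
            && [ -z "$(_kubectl get pods -n kube-system 2>/dev/null | grep kubevirt)" ]; then
            return 0
        fi
        retries=$((retries - 1))
        sleep 10
    done
    echo "deprovisioning did not complete" >&2
    return 1
}
```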

[[ "${BASH_SOURCE[0]}" == "$0" ]] && main "$@"