[WIP] Add functional tests #84
base: master
Conversation
Signed-off-by: gbenhaim <[email protected]>
templates/kubevirt-apb.yml \
    > "$tempfile"

_kubectl create -f "$tempfile"
A serviceinstance can end up in several states; I usually wait until status.provisionStatus == Provisioned with the command

oc get serviceinstances -n kube-system kubevirt -o template --template "{{.status.provisionStatus}}"

Even when that succeeds, it doesn't mean everything went well: I have already seen it several times that all tasks were skipped and the APB did nothing, yet reported success :-/ The easiest sanity check is probably querying the KubeVirt version via the API:

curl -X GET -H "Authorization: Bearer $(oc whoami -t)" -k https://localhost:8443/apis/subresources.kubevirt.io/v1alpha2/version
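As a rough sketch, that wait-then-verify sequence could be scripted like this (the 60x10s retry budget and the hard exit on timeout are assumptions, not something this thread prescribes):

# Poll until the serviceinstance reports Provisioned (retry budget is an assumption).
for _ in $(seq 1 60); do
    status="$(oc get serviceinstances -n kube-system kubevirt -o template \
        --template "{{.status.provisionStatus}}")"
    [ "$status" = "Provisioned" ] && break
    sleep 10
done
[ "$status" = "Provisioned" ] || exit 1

# Sanity check: ask the API server for the KubeVirt version.
curl -X GET -H "Authorization: Bearer $(oc whoami -t)" -k \
    https://localhost:8443/apis/subresources.kubevirt.io/v1alpha2/version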
cluster_up
trap "cluster_down" EXIT
cluster_sync
run_apb
It would be good to also run deprovisioning; I have seen it broken many times too. It is about running delete on the serviceinstance which you created in run_apb, then waiting until the kubevirt serviceinstance disappears from the cluster and there are no kubevirt pods left.
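A hedged sketch of that deprovision check (the cluster-wide grep for kubevirt pods and the retry budget are assumptions):

# Delete the serviceinstance created by run_apb, then wait for full teardown.
oc delete serviceinstances -n kube-system kubevirt

for _ in $(seq 1 60); do
    # Done once the serviceinstance is gone and no kubevirt pods remain anywhere.
    if ! oc get serviceinstances -n kube-system kubevirt >/dev/null 2>&1 \
        && ! oc get pods --all-namespaces | grep -q kubevirt; then
        exit 0
    fi
    sleep 10
done
echo "deprovisioning did not finish in time" >&2
exit 1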
@rthallisey @lukas-bednar Are you familiar with the following error:
That seems like the service catalog didn't start correctly. I would print out the pods and see if the catalog is running.
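For example, a quick look at the catalog pods could confirm that (the kube-service-catalog namespace is an assumption based on a default OpenShift install):

# List service catalog pods; a non-Running apiserver or controller-manager
# would explain provisioning requests going nowhere.
oc get pods -n kube-service-catalog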
@gbenhaim I am familiar with two issues related to ASB:
---
- name: "Wait until ASB is running"
  shell: |
    set -o pipefail
    set -e
    # If asb-X-deploy ended in Error, remove it and redeploy it again
    if oc get pods -n openshift-ansible-service-broker | grep deploy | grep Error ;
    then
        oc delete pods -n openshift-ansible-service-broker $(oc get pods -n openshift-ansible-service-broker | grep deploy | cut -f 1 -d ' ')
        oc rollout latest asb -n openshift-ansible-service-broker
    fi
    # If ASB is not running exit with 1
    if ! oc get pods -n openshift-ansible-service-broker | grep -v deploy | grep Running ;
    then
        exit 1
    fi
  register: asb_status
  until: asb_status.rc == 0
  retries: 60
  delay: 15
---
- name: "Apply WA for https://github.com/openshift/ansible-service-broker/issues/876"
  shell: |
    set -e
    oc annotate route -n openshift-ansible-service-broker asb-1338 --overwrite haproxy.router.openshift.io/timeout=300s
re-org: Move openshift deployment to playbooks dir
Signed-off-by: gbenhaim <[email protected]>