baremetal: improve debuggability of ipi deployments #328
This is exactly the combination that we need, thanks -- I think we can improve the terraform error messages for the general case, and use 3535 to add OpenShift context where appropriate.
openshift/installer#2569
^^ already looking at making these problems easier to report in the long term.
For now, the installer has a list of common failures and how to identify them in https://github.com/openshift/installer/blob/master/docs/user/troubleshootingbootstrap.md#common-failures
The goal is to curate a list of detectable failures and then detect them automatically as part of the analysis.
The initial approach in 2569 was to show users most of the failure logs from the bundle and let them decide for themselves, but personally I would like us to come up with a list of common known failures and then just tell the user: this was the error, and here's how you might resolve it.
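To make the "curated list of detectable failures" idea concrete, here is a rough sketch of signature matching over a gathered log bundle. Everything in it (failure names, signatures, remediation text) is hypothetical and only illustrates the shape of the idea; it is not what the installer actually implements.

```go
package main

import (
	"fmt"
	"strings"
)

// knownFailure pairs a detectable signature with a human-readable remediation.
type knownFailure struct {
	Name        string
	Signature   string // substring expected somewhere in the gathered logs
	Remediation string
}

// A curated list like the one in docs/user/troubleshootingbootstrap.md,
// but machine-checkable. Entries here are illustrative only.
var knownFailures = []knownFailure{
	{
		Name:        "release image pull failure",
		Signature:   "error pulling image",
		Remediation: "check the pull secret and registry connectivity from the bootstrap host",
	},
	{
		Name:        "API never came up",
		Signature:   "waiting for the Kubernetes API",
		Remediation: "verify the control-plane machines booted and the API VIP/DNS resolves",
	},
}

// analyze scans gathered log contents and returns the known failures it matches.
func analyze(logContents string) []knownFailure {
	var matches []knownFailure
	for _, f := range knownFailures {
		if strings.Contains(logContents, f.Signature) {
			matches = append(matches, f)
		}
	}
	return matches
}

func main() {
	sample := `... level=error msg="error pulling image quay.io/..." ...`
	for _, f := range analyze(sample) {
		fmt.Printf("detected: %s\nsuggested fix: %s\n", f.Name, f.Remediation)
	}
}
```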
That's on the operator owners to make sure the errors are clear. Keep in mind that the installer is not the only consumer of these messages; admins see them during upgrades as well.
So personally, I think the goal should be to ensure that each operator is responsible for reporting clear error messages in its status.
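As an illustration of what "clear" could mean in a status condition, here is a self-contained sketch (plain structs, not the real openshift/api ClusterOperator types) contrasting a vague message with one an admin or the installer could act on:

```go
package main

import "fmt"

// condition mirrors the rough shape of an operator status condition,
// purely for illustration.
type condition struct {
	Type    string
	Status  string
	Reason  string
	Message string
}

func main() {
	vague := condition{
		Type:   "Degraded",
		Status: "True",
		Reason: "SyncError",
		// Gives the reader nothing to act on.
		Message: "reconcile failed",
	}
	actionable := condition{
		Type:   "Degraded",
		Status: "True",
		Reason: "NoWorkerNodes",
		// States the suspected root cause and points at what to check next
		// (the worker-deployment failure discussed in this thread).
		Message: "no worker nodes have joined the cluster, so router pods cannot be scheduled; check Machine/BareMetalHost status",
	}
	fmt.Printf("vague:      %s=%s (%s): %s\n", vague.Type, vague.Status, vague.Reason, vague.Message)
	fmt.Printf("actionable: %s=%s (%s): %s\n", actionable.Type, actionable.Status, actionable.Reason, actionable.Message)
}
```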
I agree, it's just that on day 1 the worker deployment failure seems special to me. It causes a lot of noise, since a bunch of operators start reporting error messages that make it hard to point to a root cause unless you've seen the problem before. I don't think the machine-api operator even reports anything useful when this happens, but if it did, it would get lost in the mix of the many other failing operators.
If we take the ingress/console operators as an example: if the workers fail, those operators will be in an error state, but, from my experience of installing OpenShift the first few times, the user will have no idea that this is the reason. They will just see that those operators are down.
What we could maybe do is have some kind of 'validators' - either from the installer binary or as an operator - that can analyze logs or cluster runtime state (with minimal requirements on cluster functionality, such as passwordless ssh between nodes), look into the state of the cluster, and explain to the user what went wrong. If we provide an infrastructure for writing those validators (sketched below), then operator owners / QE / integration teams will be able to add new ones whenever they run into an issue that was hard to analyze.
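A rough sketch of what that validator infrastructure could look like, as a Go package; all names here are hypothetical, and this is not an existing installer or operator API:

```go
// Package validate sketches an extensible registry of failure "validators"
// that different teams could contribute checks to. Hypothetical names only.
package validate

import "context"

// Finding is what a validator reports when it recognizes a failure mode.
type Finding struct {
	Summary     string // what went wrong, in user-facing language
	Evidence    string // the log line or status condition that triggered it
	Remediation string // what the user can try next
}

// Validator inspects cluster state (logs, operator conditions, node status)
// and reports any failures it can explain.
type Validator interface {
	Name() string
	Validate(ctx context.Context) ([]Finding, error)
}

// Registry collects validators contributed by operator owners, QE, etc.
type Registry struct {
	validators []Validator
}

// Register adds a validator to the registry.
func (r *Registry) Register(v Validator) {
	r.validators = append(r.validators, v)
}

// RunAll executes every registered validator and aggregates the findings.
func (r *Registry) RunAll(ctx context.Context) ([]Finding, error) {
	var all []Finding
	for _, v := range r.validators {
		findings, err := v.Validate(ctx)
		if err != nil {
			return nil, err
		}
		all = append(all, findings...)
	}
	return all, nil
}
```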
If they asked for a certain number of workers and they didn't get them, that seems a reasonable case to have a special error for.
I also think some generic orientation on how to investigate failing operators may help as well. Users will need to learn that skill eventually no matter what, so documenting how to look at an operator's status and referencing that documentation seems worth doing regardless. Ingress, for example, often tells you that the DNS entry doesn't exist, but people don't even know where to look for that.