OCPBUGS-42523: allow single MCP match per node group by default #1055
base: main
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: shajmakh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
`/hold`
Force-pushed from 78dda6f to e6e89b4
Because of a historical accident we have a 1:N mapping between NodeGroups and MCPs (`MachineConfigPools` is a slice!). One of the key design assumptions in NROP is a 1:1 mapping between NodeGroups and MCPs. Since we want to preserve backward compatibility, we still need to support multiple MCP matches per tree; however, since the design intention was always a single matching MCP per node group, make that the default behavior. To re-enable the old behavior permitting multiple pool matches, use the following annotation: `experimental.multiple-pools-per-tree: enabled`. We will evaluate the effects of the new default over the next 2 releases; if there are no complaints, we shall remove the multiple-MCP-match logic. Signed-off-by: Shereen Haj <[email protected]>
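The opt-out described above can be sketched as follows. This is a minimal illustration, not the operator's actual code: the constant, function names, and error messages are hypothetical; only the annotation key and the "enabled" value come from the PR description.

```go
package main

import (
	"errors"
	"fmt"
)

// Annotation key from the PR description; the constant name is hypothetical.
const multiplePoolsAnnotation = "experimental.multiple-pools-per-tree"

// IsMultiplePoolsAllowed reports whether the legacy 1:N NodeGroup->MCP
// matching has been explicitly re-enabled via the opt-out annotation.
func IsMultiplePoolsAllowed(annots map[string]string) bool {
	return annots[multiplePoolsAnnotation] == "enabled"
}

// ValidateTree enforces the new default: a node group must match exactly
// one MCP unless the legacy behavior is re-enabled by annotation.
func ValidateTree(matchedPools []string, annots map[string]string) error {
	if len(matchedPools) == 0 {
		return errors.New("node group matches no MachineConfigPool")
	}
	if len(matchedPools) > 1 && !IsMultiplePoolsAllowed(annots) {
		return fmt.Errorf("node group matches %d MachineConfigPools; expected exactly 1", len(matchedPools))
	}
	return nil
}

func main() {
	// Default: multiple matches are rejected.
	fmt.Println(ValidateTree([]string{"mcp-a", "mcp-b"}, nil) != nil)

	// Legacy behavior re-enabled via the annotation.
	fmt.Println(ValidateTree([]string{"mcp-a", "mcp-b"},
		map[string]string{multiplePoolsAnnotation: "enabled"}) == nil)
}
```

With the annotation absent, the multi-match tree fails validation; with `experimental.multiple-pools-per-tree: enabled` set, the legacy 1:N matching is accepted.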
Add tests to cover scenarios with the new annotations: `config.node.openshift-kni.io/selinux-policy` and `experimental.multiple-pools-per-tree`. Signed-off-by: Shereen Haj <[email protected]>
Force-pushed from e6e89b4 to 715e96c
`/unhold`
@shajmakh: This pull request references Jira Issue OCPBUGS-42523, which is invalid.
The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
`/jira refresh`
@shajmakh: This pull request references Jira Issue OCPBUGS-42523, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug.
Requesting review from QA contact. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
The idea is fine, but I'd do this in the controller. Warnings in the logs will almost certainly go unnoticed.
Let's degrade the status if we have multiple MCPs per NodeGroup, UNLESS the new magic annotation (which should not be documented as part of the public API but kept as a magic hidden annotation) is set.
Note: controllers may import packages in `internal`.
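The reviewer's suggestion, degrading the operator status instead of only logging a warning, could look roughly like this. A minimal sketch with hypothetical type, reason, and message strings; the real operator would populate a condition on its CR status (e.g. via `metav1.Condition`), which is omitted here to keep the example self-contained.

```go
package main

import "fmt"

// Condition is a stripped-down stand-in for a Kubernetes status condition.
type Condition struct {
	Type    string
	Status  string
	Reason  string
	Message string
}

// degradedCondition builds the Degraded condition the controller would set
// when a node group matches more than one MCP and the opt-out annotation
// is absent. Names and wording are illustrative only.
func degradedCondition(nodeGroup string, poolCount int) Condition {
	return Condition{
		Type:   "Degraded",
		Status: "True",
		Reason: "MultipleMCPsPerNodeGroup",
		Message: fmt.Sprintf(
			"node group %q matches %d MachineConfigPools; expected exactly 1",
			nodeGroup, poolCount),
	}
}

func main() {
	cond := degradedCondition("worker-numa", 2)
	fmt.Printf("%s=%s: %s\n", cond.Type, cond.Status, cond.Message)
}
```

Surfacing the problem in `.status.conditions` makes it visible to `oc get`/`oc describe` and to alerting, which is why it beats a log-only warning.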
@@ -26,6 +26,7 @@ import (
 	mcov1 "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1"

 	nropv1 "github.com/openshift-kni/numaresources-operator/api/numaresourcesoperator/v1"
+	"github.com/openshift-kni/numaresources-operator/internal/api/annotations"
We can't import `internal` packages in public packages. We will need to review the code arrangement.
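For context, the Go toolchain enforces `internal` visibility by directory layout: a package under `internal/` is importable only by code rooted at the parent of `internal/`, which is why in-module controllers may use it while exported `api/` packages must not. A sketch of the constraint, with illustrative paths based on the import lines above:

```
numaresources-operator/
├── api/numaresourcesoperator/v1/   # public: importable by external consumers,
│                                   # so it must NOT import internal/...
├── internal/api/annotations/       # importable only from within this module
└── controllers/                    # in-module, so it MAY import internal/...
```

Moving the annotation helpers out of `internal/` into an exported package (or keeping their use confined to the controllers) would be one way to resolve this.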