CIS 5.1.3 policy produces a violation for the cluster-admin cluster role - when enforced by Gatekeeper this bricks the cluster #23

Open
rsalmond opened this issue Aug 9, 2021 · 0 comments
Labels: bug

Comments


rsalmond commented Aug 9, 2021


What steps did you take and what happened:

We used the CIS 5.1.3 rego in an OPA Gatekeeper policy to block creation of Roles and ClusterRoles that use wildcards; a sketch of that kind of policy is below.
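For context, the enforcement looked roughly like the following. This is not the rego from this repository, just a minimal sketch of a Gatekeeper ConstraintTemplate that rejects wildcard grants in Roles and ClusterRoles (the template name, kind, and message text are made up for illustration):

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  # Hypothetical name; Gatekeeper requires it to be the lowercased CRD kind below.
  name: k8sdisallowwildcardroles
spec:
  crd:
    spec:
      names:
        kind: K8sDisallowWildcardRoles
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdisallowwildcardroles

        # Reject any Role/ClusterRole rule that grants "*" apiGroups, resources, or verbs.
        violation[{"msg": msg}] {
          some i
          rule := input.review.object.rules[i]
          has_wildcard(rule)
          msg := sprintf("%v %v uses a wildcard in rules[%v]",
            [input.review.object.kind, input.review.object.metadata.name, i])
        }

        has_wildcard(rule) { rule.apiGroups[_] == "*" }
        has_wildcard(rule) { rule.resources[_] == "*" }
        has_wildcard(rule) { rule.verbs[_] == "*" }
```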

What did you expect to happen:

The cluster would remain functional.

Anything else you would like to add:

The Kubernetes API server periodically checks that the bootstrap roles (admin, cluster-admin, etc.) are present after the control plane comes up. If they are not present, it attempts to reconcile them using a post-start hook. With the wildcard policy enforced, Gatekeeper rejects those reconcile writes, so the hook cannot complete, and once a post-start hook cannot complete the /healthz endpoint starts returning failures. In a managed Kubernetes setting like EKS, this causes the load balancer sitting in front of the API server to stop routing traffic to it, rendering the cluster dead.
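For reference, this is roughly what the reconciled cluster-admin object looks like on a stock cluster (trimmed; retrievable with kubectl get clusterrole cluster-admin -o yaml). Both the wildcard rules that trip the policy and the autoupdate annotation that drives the reconciliation are visible:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
rules:
  # Full access to every API group, resource, and verb.
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
  # Plus all non-resource URLs (e.g. /healthz, /metrics).
  - nonResourceURLs: ["*"]
    verbs: ["*"]
```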

Additional Information:

  1. cluster-admin uses wildcards.
  2. It turned out to be difficult for AWS to correct this; we had to destroy and recreate our cluster to proceed with our Rego and Gatekeeper testing. Hopefully this issue helps folks avoid the same outcome; one option is to exempt the bootstrap roles from the constraint, as sketched below.
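One possible mitigation (untested here, so treat it as a sketch): have the constraint skip objects carrying the kubernetes.io/bootstrapping label, which the API server sets on the roles it reconciles, by using the constraint's match.labelSelector. The constraint kind below matches the hypothetical template sketched above:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowWildcardRoles
metadata:
  name: disallow-wildcard-roles
spec:
  enforcementAction: deny   # consider "dryrun" while rolling the policy out
  match:
    kinds:
      - apiGroups: ["rbac.authorization.k8s.io"]
        kinds: ["Role", "ClusterRole"]
    # Skip the API server's bootstrap roles (cluster-admin, admin, edit, view,
    # system:*), so their reconciliation is never blocked.
    labelSelector:
      matchExpressions:
        - key: kubernetes.io/bootstrapping
          operator: DoesNotExist
```

Rolling the policy out with enforcementAction: dryrun first would also have surfaced the cluster-admin violation in audit results without blocking the reconcile.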
rsalmond added the bug label on Aug 9, 2021