
Add initContainers support #308

Merged · 2 commits · Nov 20, 2024
Conversation

shlomitubul
Contributor

Currently, there is no way to configure kubechecks to authenticate against OCI registries like GCR/GAR. By adding support for initContainers, users can set up credential helpers such as docker-credential-gcr to generate a config.json for Docker authentication. The initContainer can also copy the docker-credential-gcr binary into the main kubechecks container, allowing it to fetch tokens and authenticate successfully.

Signed-off-by: ShlomiTubul <[email protected]>
@shlomitubul
Contributor Author

Example in GCP GAR:

 # rest of YAML omitted...
 volumes:
  - name: registry-config
    emptyDir: {}
  - name: cred-helper
    emptyDir: {} 
  volumeMounts:
  - name: registry-config
    mountPath: /root/.docker
  - name: cred-helper
    mountPath: /usr/bin/docker-credential-gcr
    subPath: docker-credential-gcr
  initContainers:
  - name: gcp-cred-helper
    image: google/cloud-sdk:472.0.0-slim
    command: ["/bin/sh", "-c"]
    args:
      - |
        curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v2.1.25/docker-credential-gcr_linux_amd64-2.1.25.tar.gz" |
        tar xz docker-credential-gcr &&
        chmod +x docker-credential-gcr &&
        cp docker-credential-gcr /usr/bin/ &&
        mv docker-credential-gcr /shared/ &&
        docker-credential-gcr configure-docker --registries=us-central1-docker.pkg.dev
    volumeMounts:
      - name: registry-config
        mountPath: /root/.docker
      - name: cred-helper
        mountPath: /shared
serviceAccount:
  create: true
  enabled: true
  name: kubechecks
  annotations:
    iam.gke.io/gcp-service-account: [email protected]
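For reference, the `configure-docker` step in the initContainer writes a credential-helper mapping into the shared /root/.docker volume; the resulting config.json looks roughly like this (a sketch; exact contents depend on the --registries flag and the helper version):

```json
{
  "credHelpers": {
    "us-central1-docker.pkg.dev": "gcr"
  }
}
```

With this in place, any client reading Docker config (helm, oras, etc.) shells out to docker-credential-gcr whenever it needs credentials for that registry.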

@orenlevi111

I assume it's working for you,
although it won't work with temporary credentials like the ones AWS ECR provides.

@shlomitubul
Contributor Author

I assume it's working for you, although it won't work with temporary credentials like the ones AWS ECR provides.

It works for me.
Also, why wouldn't it work with AWS temporary credentials?

@orenlevi111

Since the creds expire after 12 hours,
and you mount them just once in the initContainer.

@shlomitubul
Contributor Author

shlomitubul commented Nov 18, 2024

Since the creds expire after 12 hours, and you mount them just once in the initContainer.

I don't mount static creds; I only instruct Helm to use the credential helper. Each time kubechecks runs, helm dependency build gets invoked (technically by the Argo CD library, not kubechecks itself), which in turn runs docker-credential-gcr to fetch a fresh token.
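To illustrate why fresh tokens work here: Docker-style credential helpers are re-invoked on every fetch. A helper binary named docker-credential-&lt;suffix&gt; receives the registry host on stdin and prints a JSON credential on stdout. A minimal stand-in sketch of that protocol (the username and token below are placeholders; the real docker-credential-gcr fetches a short-lived token from GCP each time):

```python
import json
import sys

def handle_get(server_url: str) -> str:
    # A real helper (e.g. docker-credential-gcr) would fetch a short-lived
    # token here; this stub returns a placeholder to show the wire format.
    return json.dumps({
        "ServerURL": server_url,
        "Username": "oauth2accesstoken",
        "Secret": "<short-lived-token>",
    })

if __name__ == "__main__":
    # Docker/helm invoke the binary as `docker-credential-<name> get`
    # and write the registry host on stdin.
    if len(sys.argv) > 1 and sys.argv[1] == "get":
        print(handle_get(sys.stdin.read().strip()))
```

Because the helper is executed on every `helm dependency build`, the returned credential is always fresh, which is why expiry (GAR's one hour, ECR's twelve hours) doesn't matter in this setup.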

@djeebus
Collaborator

djeebus commented Nov 18, 2024

This looks quite reasonable, and is likely useful for other reasons as well (network sidecars, secret providers, etc). You might also try out the image from the multisource apps PR, as that shifts responsibility for manifest generation back to Argo CD's repo server. It might resolve your issue in a more straightforward way (as I would imagine Argo CD has already been configured correctly).

Be aware that this also requires direct access to the backend repo service, generally located in the argocd namespace, called argocd-repo-server, on port 8081. If it's not configured that way, you can override it via KUBECHECKS_ARGOCD_REPOSITORY_ENDPOINT. I'd be interested in knowing your results, as this could provide more evidence that the extra complexity is worth the effort. Thanks!

image: ghcr.io/zapier/kubechecks:0.0.0-pr298
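As a sketch of the override mentioned above, the endpoint can be pointed at a non-default repo-server service via the environment variable (the `configMap.env` key here is an assumption; check the chart's values.yaml for the actual mechanism the chart uses to pass env vars):

```yaml
configMap:
  env:
    # Hypothetical host/port -- substitute your repo-server service address.
    KUBECHECKS_ARGOCD_REPOSITORY_ENDPOINT: "argocd-repo-server.argocd.svc.cluster.local:8081"
```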

Signed-off-by: ShlomiTubul <[email protected]>
@zapier-sre-bot
Collaborator

Mergecat's Review


😼 Mergecat review of charts/kubechecks/Chart.yaml

@@ -1,7 +1,7 @@
 apiVersion: v2
 name: kubechecks
 description: A Helm chart for kubechecks
-version: 0.4.5
+version: 0.4.6
 type: application
 maintainers:
   - name: zapier

Feedback & Suggestions: The version bump from 0.4.5 to 0.4.6 is appropriate if there are backward-compatible bug fixes or minor improvements. Ensure that the changes in the codebase reflect this version update. Additionally, consider updating the changelog or release notes to document what changes have been made in this new version. 📈


😼 Mergecat review of charts/kubechecks/templates/deployment.yaml

@@ -32,6 +32,10 @@ spec:
       {{- end }}
       securityContext:
         {{- toYaml .Values.deployment.podSecurityContext | nindent 8 }}
+      {{- with .Values.deployment.initContainers }}
+      initContainers:
+        {{- toYaml . | nindent 8 }}
+      {{- end }}
       containers:
         - name: {{ .Chart.Name }}
           {{- with .Values.deployment.image }}
@@ -101,4 +105,4 @@ spec:
       {{- end }}
       {{- with .Values.deployment.volumes }}
       volumes: {{ . | toYaml | nindent 8 }}
-      {{- end }}
+      {{- end }}
\ No newline at end of file

Feedback & Suggestions:

  1. Init Containers Addition: The addition of initContainers is a useful enhancement for pre-processing tasks before the main containers start. Ensure that the .Values.deployment.initContainers is properly defined in your values file to avoid runtime errors. 🛠️

  2. No Newline at End of File: The diff indicates that there is no newline at the end of the file. It's a good practice to end files with a newline to avoid potential issues with certain text editors and tools. Consider adding a newline at the end. 📄
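As a usage sketch for point 1, the new key is consumed from values.yaml like any other pod-spec list (the wait-for-repo-server init container shown here is purely illustrative):

```yaml
deployment:
  initContainers:
    - name: wait-for-argocd
      image: busybox:1.36
      command: ["sh", "-c", "until nc -z argocd-repo-server.argocd 8081; do sleep 2; done"]
```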


😼 Mergecat review of charts/kubechecks/templates/service.yaml

@@ -3,6 +3,8 @@ apiVersion: v1
 kind: Service
 metadata:
   name: {{ include "kubechecks.fullname" . }}
+  annotations:
+    {{ .Values.service.annotations | toYaml | nindent 4 }}
   labels:
     {{- include "kubechecks.labels" . | nindent 4 }}
 spec:

Feedback & Suggestions:

  1. Annotations Handling: 📝 Ensure that .Values.service.annotations is defined in your values.yaml file to prevent potential errors if the key is missing. Consider providing a default value or handling the case where annotations might not be provided.

  2. Security Consideration: 🔒 Be cautious with the content of annotations, especially if they can be influenced by user input. Annotations can sometimes be used to inject unexpected behavior into Kubernetes resources.

  3. Documentation: 📚 Update any relevant documentation to inform users about the new annotations field and how it can be used. This will help maintain clarity and usability of the chart.
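One common way to address point 1 is to guard the block with `with`, so the annotations key is omitted entirely when no values are provided (a sketch of the suggestion, not the merged change):

```yaml
metadata:
  name: {{ include "kubechecks.fullname" . }}
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
```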


😼 Mergecat review of charts/kubechecks/values.schema.json

@@ -102,6 +102,9 @@
         "securityContext": {
           "type": "object"
         },
+        "initContainers": {
+          "type": "array"
+        },
         "startupProbe": {
           "type": "object"
         },
@@ -192,6 +195,9 @@
         },
         "name": {
           "type": "string"
+        },
+        "annotations": {
+          "$ref": "#/$defs/key-value-map"
         }
       }
     },

Feedback & Suggestions:

  1. initContainers Array Definition:

    • Consider specifying the items type within the initContainers array. This will ensure that each item in the array adheres to a specific structure, enhancing validation and reducing potential errors. For example:
      "initContainers": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "image": { "type": "string" }
            // Add other necessary properties
          },
          "required": ["name", "image"]
        }
      }
  2. Annotations Addition:

    • The addition of annotations to the service object is a good enhancement for flexibility. Ensure that this change is documented and communicated to users who might rely on this schema, as it could affect how services are configured.
  3. General Validation:

    • After making these changes, validate the entire JSON schema to ensure there are no structural issues or conflicts with existing definitions. This can prevent runtime errors and ensure smooth integration.


Dependency Review


No suggestions found

@shlomitubul
Contributor Author

shlomitubul commented Nov 19, 2024


@djeebus, I reviewed the multi-source PR. Aside from all the refactoring, we now call the new GetManifests instead of GetManifestsLocal, which sends a tarball to the Argo CD repo server for the diff check?

@shlomitubul
Contributor Author

@djeebus can we merge this please? We'd really like to start using this great tool ( :

@djeebus djeebus merged commit b8ff067 into zapier:main Nov 20, 2024
4 checks passed