Refactor method of applying K8s resource files #27
Comments
Why does it need to take a full directory? Can it take a list of files (comma separated?) instead?
@msuterski I took a few days to think about that and here's what I have to say about that:

```yaml
templates:
- deploy/app.yml
- deploy/redis.yml
- deploy/ingress.yml
- somedir/xyz.yml
output: drone-gke-output/
```

vs

```yaml
template: deploy/
output: drone-gke-output/
```
Something else I failed to mention in the original RFC comment: with the changes in this RFC, we can still default to the current behavior.
I thought that maybe having them as a list would not necessitate having the output be a directory (and would not leak secrets), but I guess those are unrelated. Maybe I would ask a different question, then: why does the output need to be a directory and not a single file (one that would not include the rendered secrets)?
Good question! I want the output to match the input so that if the files are uploaded/reviewed/verbosely printed, they match the organization of the original input manifest templates.
Re 1 and 2: Is "same thing pattern" some pattern I should know about? If not, this seems like an awkward sentence. Also, does the same pattern really need to be followed for secrets? Could that be left as one file to make it easier to filter out secret values from verbose output? The code that renders the templates is aware of the secrets. This may even be a good idea in general, to prevent developers from accidentally printing secrets in their build logs. What about specifying …
It would be great to come back to this idea/requirement. Our deployments get more and more complex, and a single YAML file might become difficult to read and browse through.
As more complex K8s workloads are created, teams would like to split their K8s resource manifests across multiple files to keep them maintainable.
cc @yunzhu-li.
Idea
Currently `template:` points to a single file for filling `vars:` variables. The suggestion is to allow `template:` to point to a directory that could contain 0-n K8s manifests for filling `vars:` variables.

The resulting filled templates would be written to some `output:` directory (defaulting to `workspace.Path + /drone-gke-output/*.yml`).

Then `kubectl apply -f` would be performed on the entire `output:` directory.
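To make the proposal concrete, here is a minimal sketch of what a build step's configuration might look like. Only `template:`, `output:`, and `vars:` come from this RFC; the surrounding step layout and the example values are illustrative assumptions, not the plugin's current config.

```yaml
deploy:
  gke:
    # Proposed: point template: at a directory of 0-n manifest templates.
    template: deploy/
    # Proposed: rendered manifests are written here
    # (default: workspace.Path + /drone-gke-output/*.yml).
    output: drone-gke-output/
    # Example values used to fill the templates.
    vars:
      app: my-app
      replicas: 3
```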
Implications

1. The secret manifest template(s) (`.kube.sec.yml`) would follow the same pattern: they would not only be filled with the `vars:`, `secrets:`, and `secrets_base64` variables, but also output to the same `output:` directory.
2. The debug output of these files will not be accessible via the `verbose:` parameter, as it could/would have difficulty filtering out the "rendered" `.kube.sec.yml` file(s) in the `output:` directory. We want to avoid leaking secrets.
3. A solution (workaround) for 2 is that, since all files are written to the `output:` directory, those files can be uploaded to GCS or S3 for viewing/redeployment/replication at a later time. This is detailed in "Refactor where templated manifest files are written to" (#29).
4. An `output:` directory will also enable other plugins to speak Kubernetes. One use case is for a vault-k8s plugin to write a Vault-specific K8s resource to the `output:` folder for drone-gke to `kubectl apply -f dir/`.
5. Following "Refactor method of creating namespaces" (#28), this would also allow the same pattern to be followed for creating and deploying into namespaces whose labels and names are templated with `vars:` variables. However, ensuring that the Namespace is created first (before other resources) is still TBD.
6. For those that want to wait until the rollout of some resource is complete using `kubectl rollout status` ("wait until deploy completes", #26), that feature can be implemented with the `-f` and `-R` flags to manage all resources in the `output:` directory, versus specifying the exact resource (see the sketch after this list). Specifics of that feature should be continued in that issue.
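As a rough sketch of the apply and (optional) rollout-wait behavior described above (the exact invocation the plugin would use is still open; the flags shown are the standard `kubectl` `--filename`/`--recursive` options):

```sh
# Apply every rendered manifest in the output directory, recursing into subdirectories.
kubectl apply --filename drone-gke-output/ --recursive

# Optionally (per #26) wait for the rollouts defined by those same files to complete.
kubectl rollout status --filename drone-gke-output/ --recursive
```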
Releasing

There may be a way to implement these changes without breaking changes (i.e., transparently for plugin consumers).

If not, we may have to 1) create a new tag for the Docker container, 2) update the existing container to add a very clear `DEPRECATION` message, and 3) announce a limited support period.

Next steps
This is an RFC, soliciting comments and implications.