---
Holy smoke! 🤯 Thanks for the thorough investigation you put in there, @florent1s! I'm wondering what the next steps for this could be: maybe turning it into a documentation page on the Kamaji website?
---
Thanks @florent1s! I agree with @prometherion that this could be integrated into the documentation, it's very clear. I'm wondering if k0sctl config could be leveraged to automate some parts further. I know this can already be achieved with k0smotron, and that it's integrated with k0s. May I ask, @florent1s, which points influenced your architectural decision to use Kamaji for managing control planes?
---
The challenge and solution presented here are not exclusive to K0s and therefore also apply to standard K8s workers.

## Kamaji CP with K0s workers part 2: using ingress and FQDN to differentiate tenants

Hello! Here I am again with the follow-up to my previous post. It took me a while to write this because I had other priorities (internship report ;) ). We finally have K0s workers connecting to a Kamaji control plane over an ingress (or rather multiple ingresses).

### Goals, challenges and experimentation

In the standard setup I described in my previous post you had to define an IP from the host cluster for each tenant control plane. A possible solution could be to have an ingress set up in front of the Kamaji cluster that would redirect requests to the right tenant control plane based on its FQDN.

This sounds quite simple so far, right? So where is the catch? Our approach to resolve this was to deploy a proxy as a daemonset on the tenant workers which would listen on the destination IP of the default `kubernetes` service. The problem being that the packets sent to the default service and redirected by the proxy are already using HTTPS, and you can't rewrite where encrypted traffic says it is going without terminating TLS first. The proxy I used for my experimentations is HAProxy, and the reason why it was so difficult to solve this issue was that I didn't understand this at first.

### The working setup

The working setup we ended up with works as follows: the HAProxy daemonset on the tenant workers intercepts the traffic addressed to the default `kubernetes` service and forwards it to the tenant's FQDN, where an ingress in front of the host cluster routes it to the matching tenant control plane.
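To make the FQDN-based routing concrete, here is a minimal sketch of what such an ingress could look like. This is my own illustration under assumptions, not the exact configuration from this setup: the hostname, service name and port are placeholders, and it uses ingress-nginx in TLS-passthrough mode (which requires the controller to run with `--enable-ssl-passthrough`) so the tenant API server keeps terminating TLS itself:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-00-api
  namespace: tenant-00
  annotations:
    # forward the raw TLS stream instead of terminating it at the ingress
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: tenant-00.example.com        # per-tenant FQDN (assumption)
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant-00          # service Kamaji creates for the tenant control plane (assumption)
                port:
                  number: 6443
```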
On top of that we also have 2 other ingresses (good old Nginx, but this could be done with Gateways for a more homogeneous setup): one for the communication between Konnectivity servers and agents, and another one for the CLI tools, CNI and kubelet. Those simply forward all matching traffic to their corresponding server in TCP mode (TLS-passthrough).

### Step by step configuration

I will not share the entire configuration here as this post would be way too long, but if someone is interested I could write a step by step guide with all relevant configurations, just like for the standard setup in my previous post.

### What's next

The next step on my side will be to make Helm charts to deploy such a setup more easily, before integrating those in our GitOps pipeline.
---
Hello there! I wanted to share my findings from the last months of working with Kubernetes and Kamaji, specifically regarding the combination of a Kamaji control plane with K0s workers.
TL;DR: It is possible and doesn't actually require that much tinkering (it took a lot of experimentation to get there however ;) ).
I don't know if some people have tried this already, as I couldn't find much information (if any) about it online, so for those interested here are the steps to get this working. Feel free to share any questions, thoughts, ideas or suggestions this inspires. Also thanks @prometherion for your help on the issue I posted; the insight you shared proved quite helpful during my experimentations.
### What you need
For reference, here are the versions I used:
### Configuration files
You also need a few extra files to configure the tenant cluster and its control plane:
#### Tenant control plane definition file

(and namespace definition if needed)
Remember to set `spec.networkProfile.address` to a valid IP. You also might want to adjust `spec.kubernetes.version` depending on what version your k0s is running (I don't know what impact mismatching those versions could have).

Notes:

- The ports set in `spec.networkProfile` and `spec.addons.konnectivity.server` need to be in the NodePort range (30000-32767). Don't make the same mistake I did ;)
- `spec.networkProfile.address` can be the IP of a worker node of the host cluster (what I used for testing, not robust but quick and easy), a virtual IP grouping those worker nodes, or the IP of a loadbalancer which points to those nodes (an ingress' IP is also an option, I am currently working on such a setup).

`tenant-00.yaml`
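For reference, a minimal `tenant-00.yaml` could look roughly like the following. This is a sketch based on the Kamaji `TenantControlPlane` API as I understand it, not the author's exact file; the address, ports, replica count and version are placeholders to adapt:

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
  namespace: tenant-00
spec:
  controlPlane:
    deployment:
      replicas: 2
    service:
      serviceType: NodePort    # exposes the ports below on the host cluster nodes
  kubernetes:
    version: v1.26.0           # should match (or be close to) your k0s version
    kubelet:
      cgroupfs: systemd
  networkProfile:
    address: 10.0.0.10         # a valid, reachable IP (see the note above)
    port: 31443                # must be in the NodePort range
  addons:
    coreDNS: {}
    kubeProxy: {}
    konnectivity:
      server:
        port: 31132            # must be in the NodePort range too
```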
#### Default worker configuration configmap

Copied from a standard k0s cluster; I only changed `data.apiServerAddresses` to match my configuration.

`configmap-k0s-worker-default.yaml`
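If you don't have such a file at hand, one way to obtain it is to dump the ConfigMap from an existing standard k0s cluster and then edit it. The exact ConfigMap name is an assumption here, since k0s suffixes it with the profile and version:

```bash
# List the worker-config ConfigMaps a standard k0s cluster generates
kubectl -n kube-system get configmaps | grep worker-config

# Dump the default profile (name suffix depends on your k0s version; assumption)
kubectl -n kube-system get configmap worker-config-default-1.26 \
  -o yaml > configmap-k0s-worker-default.yaml

# Then edit data.apiServerAddresses in the dumped file so it points at the
# Kamaji tenant control plane address/port instead of the original servers
```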
#### Role and rolebinding

To allow the tenant workers to access the previously defined configmap.

`rb-configmap-access.yaml`
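A minimal sketch of what `rb-configmap-access.yaml` could contain; the object names and subjects are assumptions, the idea being simply to let joining and running workers read the worker-config ConfigMap in `kube-system` (you could tighten this further with `resourceNames` on the `get` verb):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k0s-worker-config-access
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k0s-worker-config-access
  namespace: kube-system
subjects:
  # bootstrapping workers authenticate via bootstrap token, running ones as nodes
  - kind: Group
    name: system:bootstrappers
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: system:nodes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: k0s-worker-config-access
  apiGroup: rbac.authorization.k8s.io
```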
### How to install

Here is the script I wrote to create a basic TCP (TenantControlPlane), apply the aforementioned configuration files and create the token for the k0s workers to join the cluster:
Remember to set `SRVR_ADDR` to a valid IP.

`setup-tenant.sh`
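Since the original script isn't reproduced here, the following is a rough sketch of what it could do based on the description above. Secret and file names follow Kamaji's conventions as I know them, and the token handling is an assumption (k0s join tokens are, to my understanding, a gzipped, base64-encoded kubeconfig):

```bash
#!/usr/bin/env bash
set -euo pipefail

SRVR_ADDR="10.0.0.10"   # must be a valid, reachable IP (see the note above)
SRVR_PORT="31443"       # the NodePort from spec.networkProfile.port

# Create the namespace and the TenantControlPlane on the host cluster
kubectl apply -f tenant-00.yaml

# Grab the admin kubeconfig Kamaji generates for the tenant cluster
kubectl -n tenant-00 get secret tenant-00-admin-kubeconfig \
  -o jsonpath='{.data.admin\.conf}' | base64 -d > tenant-00.kubeconfig

# Apply the worker ConfigMap and the RBAC rules *inside the tenant cluster*
kubectl --kubeconfig tenant-00.kubeconfig apply -f configmap-k0s-worker-default.yaml
kubectl --kubeconfig tenant-00.kubeconfig apply -f rb-configmap-access.yaml

# Create a bootstrap token in the tenant cluster for the workers to join with
TOKEN=$(kubeadm token create --kubeconfig tenant-00.kubeconfig)
CA=$(kubectl --kubeconfig tenant-00.kubeconfig config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

# Wrap it in a kubeconfig and encode it the way k0s expects (assumption)
cat > join.conf <<EOF
apiVersion: v1
kind: Config
clusters:
- name: k0s
  cluster:
    server: https://${SRVR_ADDR}:${SRVR_PORT}
    certificate-authority-data: ${CA}
users:
- name: kubelet-bootstrap
  user:
    token: ${TOKEN}
contexts:
- name: k0s
  context:
    cluster: k0s
    user: kubelet-bootstrap
current-context: k0s
EOF
gzip -c join.conf | base64 -w0 > k0s-token
```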
### Join the worker

- Copy the `k0s-token` file to the workers
- Run `k0s install worker --token-file k0s-token` to start the k0s worker and make it join the cluster (a short sketch follows)
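On each worker node this boils down to something like the following (the host name is a placeholder; `k0s start` launches the service that `k0s install` registers):

```bash
# Copy the generated token file to the worker (hypothetical host name)
scp k0s-token user@worker-0:/tmp/k0s-token

# On the worker, as root: register and start the k0s worker service
k0s install worker --token-file /tmp/k0s-token
k0s start

# Back on your machine: verify the node registered with the tenant cluster
kubectl --kubeconfig tenant-00.kubeconfig get nodes
```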
### Going further
I am still working on assembling a similar setup, but with an ingress positioned in front of the host cluster and all communications going through it. I found ways to make it work, but I'm trying to eliminate some of the complexity; I will post an update once that's done (or when I'm stuck and don't know what to try anymore ;) ). I also plan on running the CNCF conformance tests against this setup to validate the cluster's behavior.
I also have a question regarding the configuration options of the CoreDNS addon:
Turns out there is a recent discussion about this very thing: Enhancing CoreDNS configuration customisation #475