
ExternalIPs in services of type LoadBalancer using kube-vip should get their own daemonset to not run on masters #575

Open
robcxyz opened this issue Sep 3, 2024 · 0 comments

Expected Behavior

Services of type LoadBalancer should not have their VIPs advertised from the masters. Instead, there should probably be a separate daemonset running on the workers to handle these IPs.

Current Behavior

I am using kube-vip to advertise IPs for services of type LoadBalancer, so I deployed a second daemonset for that purpose. However, the service IP ended up being picked up by kube-vip-ds, the daemonset that manages the VIP for the control plane. This happens because the kube-vip daemonset template contains the following logic:

# svc_enable -> from the docs: enables kube-vip to watch Services of type LoadBalancer
- name: svc_enable
  value: "{{ 'true' if kube_vip_lb_ip_range is defined else 'false' }}"

Since the control-plane kube-vip runs on the masters, this doesn't seem like the desirable behavior if you are trying to advertise VIPs for services of type LoadBalancer. A better solution could be to take this logic out of the control-plane kube-vip and, when kube_vip_lb_ip_range is given, deploy a second daemonset with a node selector for the workers (a sketch follows below).
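
For illustration, a minimal sketch of what that second daemonset could look like, using a node affinity that keeps it off control-plane nodes. The name and the exact set of env vars are illustrative, not the role's current template:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip-svc-ds          # hypothetical name for a services-only daemonset
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-svc-ds
  template:
    metadata:
      labels:
        name: kube-vip-svc-ds
    spec:
      hostNetwork: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              # keep this daemonset off the control plane; only workers advertise service VIPs
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: DoesNotExist
      containers:
        - name: kube-vip
          image: ghcr.io/kube-vip/kube-vip:{{ kube_vip_tag_version }}
          args: ["manager"]
          env:
            - name: svc_enable           # watch Services of type LoadBalancer
              value: "true"
            - name: cp_enable            # no control-plane VIP in this daemonset
              value: "false"
            - name: vip_interface
              value: "{{ cilium_iface }}"   # or whichever interface variable the role already uses
            # ...remaining env vars carried over from the existing kube-vip template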

Steps to Reproduce

  1. Set kube_vip_lb_ip_range and create a Service of type LoadBalancer (see the example manifest below)
  2. See the service IP advertised from the masters
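
Any LoadBalancer service is enough to trigger this; a minimal example (name and ports are made up):

apiVersion: v1
kind: Service
metadata:
  name: demo-lb                # hypothetical service, only needed to trigger a VIP
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080

Its external IP comes out of kube_vip_lb_ip_range and ends up being advertised by the kube-vip-ds pods on the masters.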

Context (variables)

Operating system: Ubuntu 22

Hardware: Bare metal

Variables Used

all.yml

cilium_iface: "eth0"
cilium_mode: "native"        # native when nodes on same subnet or using bgp, else set routed
cilium_tag: "v1.16.0"        # cilium version tag
cilium_hubble: true          # enable hubble observability relay and ui

#flannel_iface: ""
#calico_iface: ""
#calico_ebpf: ""
#calico_cidr: ""
#calico_tag: ""

kube_vip_tag_version: "v0.8.2"
kube_vip_lb_ip_range: "10.100.22.100-10.100.22.116"
# kube_vip_cloud_provider_tag_version: "main"

#metal_lb_speaker_tag_version: ""
#metal_lb_controller_tag_version: ""

#metal_lb_ip_range: ""

Possible Solution

The simplest solution could be to document this better in group_vars/all.yml with a small warning like:

# Warning - these IPs will be advertised by the kube-vip-ds running on the masters
kube_vip_lb_ip_range: "10.100.22.100-10.100.22.116"

There isn't anything inherently wrong with advertising service VIPs on the masters, but it's generally not advised, right?

Also, as mentioned, another solution could be to trigger the deployment of a second daemonset on the workers whenever kube_vip_lb_ip_range is set, or to remove the ability of the control-plane kube-vip to watch services of type LoadBalancer entirely and have users run their own daemonset if that is what they want. A rough sketch of the first option is below.
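
If the extra daemonset route is preferred, its deployment could be gated the same way svc_enable is gated today. A rough sketch, assuming the role drops manifests into the k3s auto-deploy directory (the actual role may template things differently, and the template/file names here are made up):

# sketch only: the real role may deploy manifests another way
- name: Deploy kube-vip services daemonset (workers only)
  ansible.builtin.template:
    src: kube-vip-svc-ds.yaml.j2          # hypothetical template holding the worker node affinity above
    dest: /var/lib/rancher/k3s/server/manifests/kube-vip-svc-ds.yaml
    mode: "0644"
  when: kube_vip_lb_ip_range is defined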

This wouldn't have been a problem for me if I hadn't been advised against using BGP with Cilium for now (my original plan), since that feature is still a bit early (there are documented bugs, and keeping IPs is pretty mission critical). So I went down the kube-vip route instead, since I was also told MetalLB and Cilium don't play nicely together. Anyway, happy to submit a PR, especially if the agreed solution is just better docs.
