
[Bug/Feature request] IP addresses do not persist on KubeVirt VMIs after restart #500

Open · saffronjam opened this issue Aug 30, 2024 · 3 comments
Labels: question (Further information is requested)

@saffronjam

saffronjam commented Aug 30, 2024

Hello! We've been trying out whereabouts for our VM management with KubeVirt, and it has worked great so far.

However, we've noticed that restarting a KubeVirt VMI seems to release its IP address, even though the VM itself is very much still there. We also know we are not the only ones asking for this behavior.

We are using kubemacpool to ensure each VM gets a unique MAC address that persists, but IP management does not seem to work the same way.

So we are not sure whether this is a bug, a feature request, or just "help wanted", hence the loose style of this issue. We would appreciate some pointers on how to ensure sticky IP assignment to VMs from a pool.
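For comparison, an obvious workaround would be a per-VM NetworkAttachmentDefinition that swaps whereabouts for the static IPAM plugin; that pins the address but gives up pool management entirely, which is what we'd like to avoid. A minimal sketch, with a hypothetical per-VM name and address:

{
  "cniVersion": "0.3.1",
  "name": "host-network-bridge-vm-1",
  "type": "bridge",
  "bridge": "br0",
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.100.11/24",
        "gateway": "192.168.100.1"
      }
    ]
  }
}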

System info

Whereabouts version: 0.8.0 (latest as of writing this)
Kubernetes version: 1.30.3
KubeVirt version: 1.3.1

Config

{
  "cniVersion": "0.3.1",
  "name": "host-network-bridge",
  "type": "bridge",
  "bridge": "br0",
  "ipam": {
    "type": "whereabouts",
    "range": "192.168.100.2-192.168.100.254/24",
    "gateway": "192.168.100.1",
    "routes": [
      {
        "dst": "0.0.0.0/0",
        "gw": "192.168.100.1"
      }
    ],
    "dns": {
      "nameservers": [
        "8.8.8.8",
        "8.8.4.4"
      ]
    }
  }
}
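For reference, this config is consumed wrapped in a NetworkAttachmentDefinition so that the VM template further down can reference it as kube-system/host-network-bridge. A sketch of that wrapper (the actual manifest wasn't pasted in this issue; routes and dns are omitted here for brevity since the full ipam block is shown above):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: host-network-bridge
  namespace: kube-system  # referenced below as kube-system/host-network-bridge
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "host-network-bridge",
      "type": "bridge",
      "bridge": "br0",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.100.2-192.168.100.254/24",
        "gateway": "192.168.100.1"
      }
    }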

Reproduce

  1. Create 10 KubeVirt VMs
  2. Note IPs
  3. Restart all VMs
  4. See that the VMIs do not have the same IPs (unless by coincidence)

Thanks!

@mlguerrero12
Copy link
Collaborator

Hi @saffronjam, could you please share the logs?
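Assuming the reference daemonset install (namespace kube-system, label app=whereabouts), something like this should collect them from every node:

# Dump logs from all whereabouts pods; namespace and label assume the stock install.
kubectl logs -n kube-system -l app=whereabouts --tail=-1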

@mlguerrero12 added the "question" label (Further information is requested) on Oct 1, 2024
@saffronjam
Author

Hi,

We are using a two-node setup; here are the logs from the Whereabouts pod on each node from when I created 10 VMIs requesting IPs. They contain only verbose and debug entries.

Node 1:

Done configuring CNI.  Sleep=false
2024-10-01T12:49:26Z [debug] Filtering pods with filter key 'spec.nodeName' and filter value 'edge-168'
2024-10-01T12:49:26Z [verbose] pod controller created
2024-10-01T12:49:26Z [verbose] Starting informer factories ...
2024-10-01T12:49:26Z [verbose] Informer factories started
2024-10-01T12:49:26Z [verbose] starting network controller
2024-10-01T12:49:27Z [verbose] using expression: 30 4 * * *
2024-10-01T12:50:22Z [verbose] deleted pod [default/virt-launcher-vm-6-7bqxd]
2024-10-01T12:50:22Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:22Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:23Z [verbose] deleted pod [default/virt-launcher-vm-10-nzsdf]
2024-10-01T12:50:23Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:23Z [verbose] result of garbage collecting pods: <nil>

Node 2 (a larger node, so more of the 10 VMs I created were scheduled here):

Done configuring CNI.  Sleep=false
2024-10-01T12:49:51Z [debug] Filtering pods with filter key 'spec.nodeName' and filter value 'edge-180'
2024-10-01T12:49:51Z [verbose] pod controller created
2024-10-01T12:49:51Z [verbose] Starting informer factories ...
2024-10-01T12:49:51Z [verbose] Informer factories started
2024-10-01T12:49:51Z [verbose] starting network controller
2024-10-01T12:49:52Z [verbose] using expression: 30 4 * * *
2024-10-01T12:50:21Z [verbose] deleted pod [default/virt-launcher-vm-4-ckhvx]
2024-10-01T12:50:21Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:21Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:21Z [verbose] deleted pod [default/virt-launcher-vm-3-qfbh6]
2024-10-01T12:50:21Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:21Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:22Z [verbose] deleted pod [default/virt-launcher-vm-2-t5g97]
2024-10-01T12:50:22Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:22Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:22Z [verbose] deleted pod [default/virt-launcher-vm-8-lvszg]
2024-10-01T12:50:22Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:22Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:22Z [verbose] deleted pod [default/virt-launcher-vm-7-c8b7f]
2024-10-01T12:50:22Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:22Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:22Z [verbose] deleted pod [default/virt-launcher-vm-9-czs7b]
2024-10-01T12:50:22Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:22Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:22Z [verbose] deleted pod [default/virt-launcher-vm-1-7rbrq]
2024-10-01T12:50:22Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:22Z [verbose] result of garbage collecting pods: <nil>
2024-10-01T12:50:22Z [verbose] deleted pod [default/virt-launcher-vm-5-m5xdr]
2024-10-01T12:50:22Z [verbose] skipped net-attach-def for default network
2024-10-01T12:50:22Z [verbose] result of garbage collecting pods: <nil>

Scripts to reproduce

The following commands use this VM template:

vm-templ.yml

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: $NAME
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          interfaces:
          - bridge: {}
            name: host-network-bridge
        memory:
          guest: 512Mi
        resources: {}
      terminationGracePeriodSeconds: 180
      networks:
      - multus:
          default: true
          networkName: kube-system/host-network-bridge
        name: host-network-bridge

And the NAD config is the same one shown above.

Here are a few commands I used:

  1. Create VMs:
for i in {1..10}; do NAME="vm-$i" envsubst <vm-templ.yml | kubectl apply -f -; done
  2. Fetch IPs before the restart (outputs to a file in the format: <VM name> <IP> <Host>):
kubectl get vmis -o json | jq -r '.items[] | "\(.metadata.name)\t\(.status.interfaces[0].ipAddress)\t\(.status.nodeName)"' | sort -t '-' -k2,2n > ip-$(date +%Y%m%d-%H%M%S).txt
  3. Restart all VMs (by deleting their VMIs):
kubectl delete vmi --all -n default
  4. Fetch IPs again (same command as before):
kubectl get vmis -o json | jq -r '.items[] | "\(.metadata.name)\t\(.status.interfaces[0].ipAddress)\t\(.status.nodeName)"' | sort -t '-' -k2,2n > ip-$(date +%Y%m%d-%H%M%S).txt
  5. Delete VMs:
kubectl delete vm --all -n default

This should produce two files in which the IPs differ, meaning assignment is not sticky; sticky assignment is exactly what we would really like to have! 😄
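To spot the drift quickly, the two snapshots can be diffed on the name and IP columns; the timestamped filenames below are just examples of what the commands above produce:

# Compare <VM name> <IP> pairs before and after the restart (the files are tab-separated).
diff <(cut -f1,2 ip-20241001-125000.txt) <(cut -f1,2 ip-20241001-125600.txt)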

Let me know if you need any other logs or information about our setup.

@mlguerrero12
Collaborator

@maiqueb, any thoughts here?
