
Karpenter runtime logs showing tons of records: incompatible with nodepool, daemonset overhead #1904

Open
wangsic opened this issue Jan 7, 2025 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments


wangsic commented Jan 7, 2025

Description

Version:
Karpenter Version: v1.0.6
Kubernetes Version: v1.31

Context:

According to the Karpenter logs below, some pods (not DaemonSet pods) could not be scheduled to the dedicated nodes:

"incompatible with nodepool \"gpu\", daemonset overhead={\"cpu\":\"605m\",\"memory\":\"1288Mi\",\"pods\":\"12\"}, did not tolerate nvidia.com/gpu=1:NoSchedule;

incompatible with nodepool \"app\", daemonset overhead={\"cpu\":\"605m\",\"memory\":\"1288Mi\",\"pods\":\"12\"}, incompatible requirements, label \"eks.amazonaws.com/nodegroup\" does not have known values"

By design we don't want these pods scheduled to the dedicated nodes, and we enforce that with node taints and pod tolerations/nodeSelectors. Karpenter still attempted to place them there according to the logs; the good thing is that the attempt failed.

So far this doesn't impact our business, and everything looks fine because of the failure above, but Karpenter logs a huge number of these error messages. We'd like to know how to prevent this and clear those error messages.

Other Information:

We have two NodePools: gpu and app.

The gpu NodePool has the following taint:

key = "nvidia.com/gpu"
value = "1"
effect = "NoSchedule"

The app NodePool has no taints, but it does have the following startup_taints (a YAML sketch of both NodePools follows below):

key = "node.cilium.io/agent-not-ready"
value = "true"
effect = "NoExecute"

We also have two node groups managed by AWS ASGs: one for Karpenter and one for infrastructure add-ons.

The Karpenter node group is dedicated and only accepts Karpenter pods.

The infrastructure node group only accepts infrastructure add-ons; it has the following taint and label:

Taint:

key = "node-group"
value = "infra"
effect = "NO_SCHEDULE"

node label:

eks.amazonaws.com/nodegroup=non-prod-uw2-blue-infra-nodegroup

All infra add-ons should run on the infrastructure node group.
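
For illustration, the relevant fields that end up on an infra node look roughly like this (sketch only; note that NO_SCHEDULE in the managed node group config becomes NoSchedule on the node):

apiVersion: v1
kind: Node
metadata:
  labels:
    eks.amazonaws.com/nodegroup: non-prod-uw2-blue-infra-nodegroup
spec:
  taints:
    - key: node-group
      value: infra
      effect: NoSchedule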


One of the infra add-ons, the Istio pod, should be scheduled to the infra node group rather than the gpu/app NodePools, yet according to the Karpenter logs it was still evaluated against them. Istio has the following toleration and nodeSelector:

tolerations:
  - key: "node-group"
    operator: "Exists"

nodeSelector:
  eks.amazonaws.com/nodegroup: ${CLUSTER_NAME}-infra-nodegroup
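
For completeness, putting the gpu taint and the Istio toleration side by side (both copied from above) shows the mismatch the first log line reports, which is what we intend:

# gpu NodePool taint
- key: nvidia.com/gpu
  value: "1"
  effect: NoSchedule

# Istio toleration: matches key "node-group" only, so it does not
# tolerate nvidia.com/gpu=1:NoSchedule, exactly as the log says
- key: "node-group"
  operator: "Exists"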


We spent some time investigating but had no luck finding the root cause, so we have to raise an issue here.

Again, we'd like to know how to avoid this and clear those error messages if possible.

@wangsic added the kind/bug label on Jan 7, 2025
@k8s-ci-robot added the needs-triage label on Jan 7, 2025
@k8s-ci-robot (Contributor) commented:

This issue is currently awaiting triage.

If Karpenter contributors determine that this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
