970: Taint nodes even if reboot is currently blocked #971

Open · wants to merge 2 commits into main
cmd/kured/main.go: 16 changes (12 additions & 4 deletions)

@@ -761,16 +761,24 @@ func rebootAsRequired(nodeID string, booter reboot.Reboot, sentinelCommand []str
         blockCheckers = append(blockCheckers, KubernetesBlockingChecker{client: client, nodename: nodeID, filter: podSelectors})
     }
 
+    rebootBlocked := rebootBlocked(blockCheckers...)

Collaborator:
I wonder if the easiest and most correct way to improve this is to move it right after L32 above, immediately after we determine that the node will need to be deleted.

As far as I can see, there isn't any harm in always adding this "prefer no schedule" taint. Right now we only add it when we aren't able to progress rapidly to the drain operation, but why not do it no matter what?

This would solve your current problem as well.

Thoughts?
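
For context, here is a minimal, self-contained sketch of what "always adding the taint" amounts to at the Kubernetes API level, using client-go directly rather than kured's preferNoScheduleTaint helper; the taint key and node name below are hypothetical, not kured's actual values:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodeName := "worker-1" // hypothetical node name
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// PreferNoSchedule is a soft effect: the scheduler avoids the node when it
	// can, but will still use it if nothing better is available. That is what
	// makes applying it early low-risk compared to NoSchedule or NoExecute.
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    "example.com/reboot-required", // hypothetical taint key
		Effect: corev1.TaintEffectPreferNoSchedule,
	})

	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}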

Author:

Sounds good to me, I'll make the change later today.

Author:

Hey @jackfrancis, which line is it you're referring to? I don't think L32 is correct.

I've been running this PR in prod for the last few weeks now, and it definitely seems to have lowered the average time a node spends blocked. I'm definitely open to adding the taint all of the time, regardless of whether we're in an acceptable reboot window or not.

Collaborator:

@jackfrancis do you truly believe that adding the PreferNoSchedule taint can be done in all cases? Wouldn't it make the taint a bit moot to have it on all nodes after a security update, for which every node gets tainted at once? When Kubernetes schedules pods and all nodes carry PreferNoSchedule, there is no distinction between them anymore... Am I missing something?
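
For intuition on this objection, a toy sketch (not real scheduler code; node names are made up): a soft taint only steers placement while some nodes lack it, so once every node carries it, the preference cancels out:

package main

import "fmt"

func main() {
	// true = node carries the PreferNoSchedule taint.
	tainted := map[string]bool{"worker-1": true, "worker-2": true, "worker-3": true}

	// A crude stand-in for the scheduler's soft-taint preference: favor
	// untainted nodes when any exist; otherwise every node scores the same.
	var preferred []string
	for name, hasTaint := range tainted {
		if !hasTaint {
			preferred = append(preferred, name)
		}
	}

	if len(preferred) == 0 {
		// After a cluster-wide security update, every node gets the taint at
		// once, so PreferNoSchedule no longer distinguishes between nodes.
		fmt.Println("all nodes tainted: PreferNoSchedule provides no steering")
	} else {
		fmt.Println("scheduler would prefer:", preferred)
	}
}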


     var rebootRequiredBlockCondition string
-    if rebootBlocked(blockCheckers...) {
+    if rebootBlocked {
         rebootRequiredBlockCondition = ", but blocked at this time"
-        continue
     }
 
     log.Infof("Reboot required%s", rebootRequiredBlockCondition)
 
-    if !holding(lock, &nodeMeta, concurrency > 1) && !acquire(lock, &nodeMeta, TTL, concurrency) {
-        // Prefer to not schedule pods onto this node to avoid draing the same pod multiple times.
+    if rebootBlocked {
+        // Prefer to not schedule pods onto this node to avoid draining the same pod multiple times.
         preferNoScheduleTaint.Enable()
+        // We've logged that the reboot is needed, but currently blocked, and have tainted the node.
+        continue
+    }
+
+    if !holding(lock, &nodeMeta, concurrency > 1) && !acquire(lock, &nodeMeta, TTL, concurrency) {
+        // If we can't acquire the lock, poll again
         continue
     }
