Silence "deletion in progress" errors when deleting VRGs #1689
base: main
Conversation
Force-pushed efa29e0 to a684fe8
Bogus errors in delete flow

Example errors for a single workload during the e2e flow. We have 18 errors: 15 errors for the secondary VRG:

2024-12-01T19:37:37.543Z ERROR controller/controller.go:316 Reconciler error {"controller": "drplacementcontrol", "controllerGroup": "ramendr.openshift.io", "controllerKind": "DRPlacementControl", "DRPlacementControl": {"name":"subscr-deploy-rbd-busybox","namespace":"subscr-deploy-rbd-busybox"}, "namespace": "subscr-deploy-rbd-busybox", "name": "subscr-deploy-rbd-busybox", "reconcileID": "3041f0da-e705-450c-bdd1-e159ffcecef7", "error": "secondary VRG manifestwork deletion in progress"}

And 3 errors for the primary VRG:

2024-12-01T19:38:32.151Z ERROR controller/controller.go:316 Reconciler error {"controller": "drplacementcontrol", "controllerGroup": "ramendr.openshift.io", "controllerKind": "DRPlacementControl", "DRPlacementControl": {"name":"subscr-deploy-rbd-busybox","namespace":"subscr-deploy-rbd-busybox"}, "namespace": "subscr-deploy-rbd-busybox", "name": "subscr-deploy-rbd-busybox", "reconcileID": "0f352732-07e2-40d5-bd3f-a9a97181d94a", "error": "primary VRG manifestwork deletion in progress"}
Force-pushed 82b60e2 to ffc3b3b
Force-pushed ffc3b3b to 4bfc77f
Example DRPC deletion progress logs
Force-pushed 4bfc77f to 9487e4e
// DeleteInProgress error is returned during DRPC deletion when we need to wait and try later.
type deleteInProgress struct{ string }

func (e deleteInProgress) Error() string { return e.string }
This approach works and looks elegant, but why introduce the complexity of defining a new type? Why not simply use errors.New(), similar to these? One might argue it’s for errors.Is() support, but is it necessary to check for this specific error type at line 189? Instead, consider returning a custom error directly. The reconciler will eventually retry, and the error will still be logged.
This approach works and looks elegant, but why introduce the complexity of defining a new type? Why not simply use errors.New(), similar to these?
Using a sentinel error works when there is one special error, like io.EOF. But we have multiple reasons to requeue the request during delete:
- adopting a vrg
- waiting until secondary is deleted
- waiting until primary is deleted
- vrg count > 0 (maybe not possible now)
Using a type that matches any error of the same type, we can return multiple errors that share the same handling.
We already use this in:

func RequestProcessingErrorCreate(s string) RequestProcessingError { return RequestProcessingError{s} }
I started with a local error in this package, but this can move to a shared package and be used in all controllers.
One might argue it’s for errors.Is() support, but is it necessary to check for this specific error type at line 189?
Yes, see the comment below.
Instead, consider returning a custom error directly. The reconciler will eventually retry, and the error will still be logged.
This is what the first commit is doing, using fmt.Errorf() - the result is 18 errors with a stacktrace for every disable-DR operation. This is a huge amount of noise, and it is incorrect to log a normal condition as an error.
See the errors generated by the first commit - each one has a 10-line stacktrace that I did not copy:
#1689 (comment)
This is a huge amount of noise and it is incorrect to log a normal condition as an error
Since deleteInProgress implements the error interface, it is inherently treated as an error object. This creates a bias in how it is perceived. While it may not represent an error condition for the reconcile server, within the context of Ramen it is treated as one. Therefore, the classification as an error is intentional.
Using an error is just a limitation of the current code, which has a single error return value for the entire finalizeDRPC code path. We can change this, but it is a big change that will be hard to backport, and I don't think it will be better to return multiple values like we have in other places. Using a specific error type for the common case of "not ready yet" seems better.
This approach works and looks elegant, but why introduce the complexity of defining a new type?
Using an existing type now.
	return ctrl.Result{Requeue: true}, nil
}

// Unexpected error.
return ctrl.Result{}, err
I mean just let this be returned and remove the block starting from line 189 to 195 inclusive.
The result will be an ERROR log with a stacktrace - we see 18 of these for every delete operation. Since this is an expected condition, it is wrong to treat this as an error.
Since this is an expected condition, it is wrong to treat this as an error.
Same as my comment above
Force-pushed 9487e4e to 4ffd7ee
Force-pushed 4ffd7ee to 299ec4f
Force-pushed 299ec4f to e66f588
I like the change.
// IsOperationInProgress returns true if err, or an error wrapped by it, is an OperationInProgress error.
func IsOperationInProgress(err error) bool {
	return errors.Is(err, OperationInProgress(""))
}
One question, though: is this call going to propagate back to line 15? I doubt it; errors.Is(...) is a free function.
Yes, see the errors.Is() implementation:
https://cs.opensource.google/go/go/+/refs/tags/go1.23.3:src/errors/wrap.go;l=58
I think it does here:
if x, ok := err.(interface{ Is(error) bool }); ok && x.Is(target) {
return true
}
All good
}

// Is this an expected condition?
if rmnutil.IsOperationInProgress(err) {
Instead of passing back errors, which seems to be of limited use, and defining an error internally that needs to be honored in other error cases and also existing cases (which adds to code awareness across functions when returning errors), I would instead just not return an error and log it for troubleshooting use.
That way we are not defining an error that other parts of the code need to return. The current code requests a reconcile using a requeue or an error, which for us is the same, so this change would impact future work.
When starting to write the code initially there was some confusion in this regard, and errors were returned. I suggest we do not return errors, but request a requeue via the result.
The current solution is a local change in some functions inside the flow, and a local change in Reconcile. This should be easy to backport (and we must backport it to 4.18, since this is a regression added recently).
If we don't use the error, we need to change all the function signatures in the delete flow. We can return (requeue, error), but this does not allow code to specify a RequeueAfter, and even if we don't have code that needs it right now in this flow, we may need it in other flows or in this flow in the future, so this signature is not future proof. So it looks like the new signature should be (ctrl.Result, error).
But this does not match other controllers, and we probably want a consistent way to return results from controller functions. Do you suggest refactoring the entire ramen codebase to return (ctrl.Result, error)?
For reference, searching for (ctrl.Result, error) shows that only a few functions use this return value.
Results (without Reconcile functions):
- (r DRClusterReconciler) processCreateOrUpdate
- (r DRClusterReconciler) processDeletion
- (r *DRClusterConfigReconciler) processDeletion
- (r *DRClusterConfigReconciler) processCreateOrUpdate
- (r *DRPlacementControlReconciler) reconcileDRPCInstance
- (r *DRPolicyReconciler) reconcile
Using this signature for deletion flow seems common.
This is a well-known and long-standing debate in software design. It's similar to classic discussions around handling conditions like "file not found": do you throw an exception or not? Is it an error/exception or simply an expected state? Similarly, in the context of a reconciler, when attempting to fetch a resource, receiving a NotFound isn't necessarily a failure but rather an expected condition. But it is still treated as an error.
In our case, the processDeletion function returns an error. If it doesn't complete fully, there's clearly a reason for it. This reason holds valuable context that can help the caller make informed decisions. Simply logging it without passing it back discards this context.
By returning the error, we let the caller decide whether to log, requeue, or take any other corrective action. I also believe that Result{} should not be propagated from function to function. Only the reconciler function should fill in that structure with the information returned from the helper functions.
I am stating simply that we update the result to requeue on errors and not propagate the Result to other functions. The reconciler is just a loop that ends with a call requesting a requeue or not; the error does not hold much value.
If we add reasons for errors and start making smart decisions, we will need to use this pattern consistently, which is a burden, IMHO. If that will not be the case, then in this narrow use-case we can do what is done here, but I worry that the pattern will be enforced in the future for no particular gain.
I am stating simply that we update the result to requeue on errors and not propagate the Result to other functions.
So you suggest changing functions in the processDeletion flow to behave like this?
func deleteExample(...) bool {
// Expected to succeed during deletion
if err := trySomething(...); err != nil {
log.Error(err, "failed to ...")
return false
}
// Expected to return false for a while during deletion
if !readyForNextStep(...) {
return false
}
// Completed
return true
}
So we log an INFO message in ensureDeletedVRGs if the VRG was not deleted yet, and return false.
If we add reasons for errors and start making smart decisions we will need to use this pattern consistently.
This pattern of trying to do the next step on the way to the desired state is pretty common in ramen. It could be very useful if we use this pattern everywhere. It will make the code much clearer, and avoid all the bogus errors that we have today. This change makes it easy to adopt the pattern gradually, without changing a lot of code.
If that will not be the case then in this narrow use-case we can do what is done here, but I worry that the pattern will be enforced in the future for no particular gain.
How about using this pattern only in this controller for now? We need a way to fix the regression caused by adding the vrg on the secondary cluster. This should be backported to 4.18 soon before it breaks QE tests in the same way it breaks our tests.
I can also split the fix (the first commit) so we can merge it now. It will cause a lot of noise in the logs during deletion, but we can fix this in the next step when we converge on what is the right way to handle "not ready" state during reconcile.
Force-pushed e66f588 to 2d6d2cc
@BenamarMk @ShyamsundarR I split the fix into #1709, so we can fix the failing builds quickly and have more time to think about the best way to handle operation in progress.
This error is used to signal that an operation is in progress, and the caller should log the error at normal log level and try again later. This is a common concept for reconciling, and we can use the same error in other packages.

Changes:
- Rename the error to OperationInProgress
- Simplify to a string instead of embedding a string
- Update the tests to reflect the common usage with errors.Is()

Signed-off-by: Nir Soffer <[email protected]>
We want to use it more in ramen, to remove bogus errors from logs. To encourage use of this pattern and make it easier to use correctly, add a util.IsOperationInProgress() helper.

Improve the test to use the new IsOperationInProgress() and also test a wrapped OperationInProgress error.

Signed-off-by: Nir Soffer <[email protected]>
When deleting the DRPC we may need to adopt the VRG, delete the secondary VRG, wait until the secondary VRG is deleted, delete the primary VRG, and wait until the primary VRG is deleted. This takes 60-90 seconds and many reconciles (18 seen in the e2e test), and creates a huge amount of noise in the log.

Suppress the noise using the util.OperationInProgress error. When the reconcile is successful but the operation is still in progress, we return a util.OperationInProgress error describing the current progression. The top-level error handler logs an INFO message and requeues the request.

With this change we will see multiple logs for the secondary VRG:

INFO Deleting DRPC in progress {"reason", "secondary VRG deletion in progress"}
...

And finally more logs for the primary VRG:

INFO Deleting DRPC in progress {"reason", "primary VRG deletion in progress"}
...

Notes:
- We logged errors during finalizeDRPC twice: once as an INFO log, and once as an ERROR with a stacktrace when we returned an error from the reconcile. Remove the duplicate INFO log.
- The linter is not happy about the new nested if. We can avoid this by extracting a helper to handle finalize errors, but I want to keep the change minimal for easy backport. We can improve this later upstream.

Signed-off-by: Nir Soffer <[email protected]>
Force-pushed 2d6d2cc to ed17713
Fixing the VRG delete order added a lot of errors about "VRG deletion in progress". This change converts the errors (with stacktrace) to INFO messages.
During DRPC delete we expect to see these logs one or more times: