RFE: option to delete & recreate objects that already exist when restoring #469
Hi @gianrubio
This type of message is a warning and it indicates that there is an item with the same name that already exists in the cluster. Ark examined the backed up copy and compared it to the in-cluster copy, and there were differences, so Ark records a warning so you're aware that it wasn't able to restore the item.
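The comparison described above can be sketched roughly like this. This is a hypothetical illustration in Python, not Velero's actual (Go) code: server-populated fields are stripped from both copies before comparing, and a difference produces a warning rather than an overwrite. The field list and function names are assumptions for illustration only.

```python
import copy

# Illustrative (not exhaustive) list of server-set fields that would
# always differ between a backed-up copy and the in-cluster copy.
DYNAMIC_FIELDS = ("resourceVersion", "uid", "creationTimestamp")

def strip_dynamic_fields(obj):
    """Return a copy of a Kubernetes object dict with server-set fields removed."""
    o = copy.deepcopy(obj)
    for field in DYNAMIC_FIELDS:
        o.get("metadata", {}).pop(field, None)
    o.pop("status", None)  # status is reconstructed by the cluster
    return o

def restore_warning(backed_up, in_cluster):
    """Warn (instead of overwriting) when the two copies still differ."""
    if strip_dynamic_fields(backed_up) != strip_dynamic_fields(in_cluster):
        return "not restored: resource already exists and differs from backup"
    return None
```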
This is #428.
Does it make sense to ask Ark not to restore the object even if it's not the same? How does Ark compare the objects?
I'm not sure what you mean?
Ark clears out fields that would differ, such as …
@gianrubio, for the warnings such as …, is there anything else you need for this issue, or would it be OK to close it?
The items are probably not equal, but I'd expect Ark to replace them.
I think we'd need to provide a control for that behavior and let the user doing the restore decide if Ark should delete & recreate or no-op.
Should we repurpose this issue as "RFE: option to delete & recreate objects that already exist when restoring"?
Yes, that was my point; maybe a flag like …
cc @jbeda |
I'm thinking maybe something like …
The proposal sounds good; I have only one thought. Deleting the objects before applying will reschedule all of them, and doing that in a big cluster can cause issues. I'd rather delete only the objects that failed to apply, with a big warning about deleting volumes and PVCs.
Yes, the flow would be
I also don't think we'd ever want to delete a PV or PVC. We have another issue open for cloning preexisting PVs into a cluster (#192). We'll need to make sure we special case things like PVs/PVCs here.
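A minimal sketch of the control being discussed, with PVs/PVCs special-cased as described above. The `force` flag and the function are hypothetical, not Velero's API:

```python
# Kinds the restore should never delete & recreate, per the discussion above.
PROTECTED_KINDS = {"PersistentVolume", "PersistentVolumeClaim"}

def should_delete_before_restore(kind, force):
    """Delete & recreate only when the user opted in, and never for PVs/PVCs."""
    if kind in PROTECTED_KINDS:
        return False
    return force
```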
User story:
For stateless apps, this sounds like a healthy feature for us to add. I agree with Andy that we probably don't want to delete PV/PVC by default. That said, if the use-case is mirroring Production to Staging, I don't want to keep around my old staging PV/PVCs. Perhaps we need another CLI flag for PV/PVC specifically?
@heptio/ark-team I'd like to propose resolving this as part of v0.11.0. We should discuss during the v0.11.0 planning meeting what might have to get pushed back to let this in. Adding Needs Product label. |
@rosskukulinski can we talk about this soon? |
@ncdc sure! Maybe Tuesday? |
Sounds good. |
@rosskukulinski I know we've gone back and forth on this a number of times. Is this actually a priority to solve and something we need to do as part of v0.11? |
@michmike What about other objects, besides pods? |
@nrb yes, this should apply for all objects. my bad |
Some complications to think through:
Really any finalizer will be an issue; we can look for finalizer labels and log them. There's also cascading deletes: if we delete an object, it may cause many other objects that reference it to be deleted. Pods are an easy example, but Custom Resources make this trickier, as we wouldn't be able to find all references to the current object unless we saw all objects in the cluster. As I write this out, it seems like a case where two-pass restores could help a lot: one pass where we see what needs to be restored and what might reference it, then a second pass to actually manipulate the cluster.
I like the idea of a two-pass restore @nrb (and the second pass can be best effort, while the first pass behaves just like restores do today).
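The two-pass idea above can be roughed out like this, with plain dicts standing in for the cluster and a best-effort second pass that records failures instead of aborting. All names here are illustrative, not Velero code:

```python
def plan_restore(backup_objects, cluster):
    """Pass 1: decide per object whether to create or delete-and-recreate."""
    plan = []
    for key, obj in backup_objects.items():
        action = "recreate" if key in cluster else "create"
        plan.append((key, obj, action))
    return plan

def apply_restore(plan, cluster):
    """Pass 2: best effort; failures are collected rather than stopping the restore."""
    failures = []
    for key, obj, action in plan:
        try:
            if action == "recreate":
                del cluster[key]  # in a real cluster this may cascade
            cluster[key] = obj
        except Exception as exc:
            failures.append((key, exc))
    return failures
```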
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Still important I guess? |
We stumbled across the same behavior today. We would expect that Velero offers an option to detect and replace changed items or to force overwrite objects, maybe based on their "kind". |
@shubham-pampattiwar FYI, you already closed this via this PR.
With respect to the recently released Velero v1.1.2, has anything been implemented that solves this issue? Or is there an alternate way to force a restore even if the object is already present in the cluster?
You can just make the backup storage location read-only, delete the namespace, and run the restore:

```shell
# Make the default backup storage location read-only during the restore
kubectl patch -n velero backupstoragelocation/default --type merge \
  --patch '{"spec":{"accessMode":"ReadOnly"}}'

# Flip it back to read-write when the script exits
function bsl_rw {
  kubectl patch -n velero backupstoragelocation/default --type merge \
    --patch '{"spec":{"accessMode":"ReadWrite"}}'
}
trap bsl_rw EXIT

# Delete the namespace, then restore it from backup
kubectl delete --ignore-not-found --wait ns …
velero restore create …
```

presuming you have taken care to back up everything of value in the namespace. Any PVs/PVCs will be recreated from backup volumes.
Let's track it via #6142 |
I set up Ark 0.8.1 to make backups of my cluster; after that, I tested the restore just to make sure that ark restore works. I got some warnings and errors, so I'm wondering if they are expected or if I'm doing something wrong.
This is a warning; I'm not sure why it's failing. I'd expect Ark to replace this resource even if it already exists; maybe an ark flag to force the restore could solve this issue.
This is an error:
Full ark restore output