RFE: option to delete & recreate objects that already exist when restoring #469

Closed
gianrubio opened this issue May 1, 2018 · 43 comments
Labels
  1.13-candidate (issue/pr that should be considered to target v1.13 minor release)
  Enhancement/User (End-User Enhancement to Velero)
  kind/requirement
  Needs Product (Blocked needing input or feedback from Product)
  Reviewed Q2 2021

Comments

@gianrubio
Contributor

I set up Ark 0.8.1 to make backups of my cluster, and afterwards I tested the restore just to make sure that ark restore would work. I got some warnings and errors, so I'm wondering whether they are expected or whether I'm doing something wrong.

This is a warning, and I'm not sure why it's failing. I'd expect Ark to replace this resource even if it already exists; maybe an ark flag to force the restore could solve this issue.

kube-system:  not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.

This is an error:

error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set

Full ark restore output

Giancarlos-MBPro:.ssh grubio$ ark restore describe logging-multiple-hostnames-20180501104707
Name:         logging-multiple-hostnames-20180501104707
Namespace:    heptio-ark
Labels:       <none>
Annotations:  <none>

Backup:  logging-multiple-hostnames

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  auto

Phase:  Completed

Validation errors:  <none>

Warnings:
  Ark:        <none>
  Cluster:  not restored: persistentvolumes "pvc-138f24f1-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-13b0f8f2-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-13d14da2-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-13f6562d-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-37a6990b-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-37c27b62-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-37c9b935-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
            not restored: persistentvolumes "pvc-6c54e367-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
  Namespaces:
    default:      not restored: services "kubernetes" already exists and is different from backed up version.
    ingress:      not restored: configmaps "intern-intern" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-controller-metrics" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-controller-stats" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-controller" already exists and is different from backed up version.
                  not restored: services "ingress-nginx-ingress-intern-default-backend" already exists and is different from backed up version.
                  not restored: services "ingress-oauth-proxy" already exists and is different from backed up version.
    kube-system:  not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.
                  not restored: configmaps "ingress-shim-controller" already exists and is different from backed up version.
                  not restored: configmaps "monitoring.v69" already exists and is different from backed up version.
                  not restored: endpoints "kube-controller-manager" already exists and is different from backed up version.
                  not restored: endpoints "kube-scheduler" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473820" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473880" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473940" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-cronjob-1524488340" already exists and is different from backed up version.
                  not restored: jobs.batch "kube-system-cert-manager-job" already exists and is different from backed up version.
                  not restored: services "heapster" already exists and is different from backed up version.
                  not restored: services "kube-dns" already exists and is different from backed up version.
                  not restored: services "kube-system-kubernetes-dashboard" already exists and is different from backed up version.
                  not restored: services "tiller-deploy" already exists and is different from backed up version.
    logging:      not restored: configmaps "intern-logging-intern-logging" already exists and is different from backed up version.
                  not restored: services "cerebro-logging-cluster" already exists and is different from backed up version.
                  not restored: services "elasticsearch-discovery-logging-cluster" already exists and is different from backed up version.
                  not restored: services "elasticsearch-logging-cluster" already exists and is different from backed up version.
                  not restored: services "es-data-svc-logging-cluster" already exists and is different from backed up version.
                  not restored: services "kibana-logging-cluster" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-controller-metrics" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-controller-stats" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-controller" already exists and is different from backed up version.
                  not restored: services "logging-nginx-ingressintern-default-backend" already exists and is different from backed up version.
    monitoring:   not restored: configmaps "monitoring-kube-prometheus" already exists and is different from backed up version.
                  not restored: endpoints "alertmanager-operated" already exists and is different from backed up version.
                  not restored: endpoints "prometheus-operated" already exists and is different from backed up version.
                  not restored: services "monitoring-prometheus-pushgateway" already exists and is different from backed up version.

Errors:
  Ark:        <none>
  Cluster:    <none>
  Namespaces:
    kube-system:  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "69f1831d34b8a772e16fe4b53dfde156": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "2f971a1dcd6eb045c364011a4cd3eb0b": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "a78c3a37fa41e2979affd20e9b8e0111": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1e7be17cb58e298472eb0bcf5529d4ca": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "7b2a70d4cf5b688ab13ddbe564ef527e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "0e92292cb0f619d5a229297600d7bb97": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-103-41.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-103-41.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "5963c325107b331ab635aad75b94927b": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "bac2cc1636847764a0815d26720c8cd7": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-107-213.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-107-213.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "377aa0ca81598973093dac679d794bba": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-68-173.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-68-173.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "4b96cd34114ce182fb895b5851df1076": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-71-34.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-71-34.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "f97f3000e965824d1fbf2f5e271c5dcb": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "5092b3704cad1cae1ba58baa1f89c044": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-81-61.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-81-61.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "41e604c2a05ff59d4ca71eae2650b77b": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-82-127.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-82-127.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "8ad729bc65359d65c67211a9c8cad910": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "92a7e3e865f9d8fefcc21e84377b4f40": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
                  error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
Giancarlos-MBPro:.ssh grubio$ 
@ncdc
Contributor

ncdc commented May 1, 2018

Hi @gianrubio

kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.

This type of message is a warning and it indicates that there is an item with the same name that already exists in the cluster. Ark examined the backed up copy and compared it to the in-cluster copy, and there were differences, so Ark records a warning so you're aware that it wasn't able to restore the item.

error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set

This is #428

@gianrubio
Contributor Author

This type of message is a warning and it indicates that there is an item with the same name that already exists in the cluster. Ark examined the backed up copy and compared it to the in-cluster copy, and there were differences, so Ark records a warning so you're aware that it wasn't able to restore the item.

Does it make sense to ask not restore the object even if it’s not the same? How does ark compare the object?

@ncdc
Contributor

ncdc commented May 7, 2018

Does it make sense to ask not restore the object even if it’s not the same?

I'm not sure what you mean?

How does ark compare the object?

Ark clears out fields that would differ, such as .metadata.uid, and then checks for equality using reflect.DeepEqual().
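For anyone curious, that comparison can be sketched roughly like this in Go. The field list and map shapes below are purely illustrative (plain maps standing in for unstructured objects), not Ark's actual code:

```go
package main

import (
	"fmt"
	"reflect"
)

// clusterPopulatedFields are metadata fields the API server fills in, which
// would always differ between a backup and a live object. Illustrative list,
// not necessarily the exact set Ark clears.
var clusterPopulatedFields = []string{"uid", "resourceVersion", "creationTimestamp", "selfLink"}

// withoutClusterFields returns a shallow copy of obj with those metadata
// fields removed, so the remaining content can be compared directly.
func withoutClusterFields(obj map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(obj))
	for k, v := range obj {
		out[k] = v
	}
	if md, ok := out["metadata"].(map[string]interface{}); ok {
		clean := make(map[string]interface{}, len(md))
		for k, v := range md {
			clean[k] = v
		}
		for _, f := range clusterPopulatedFields {
			delete(clean, f)
		}
		out["metadata"] = clean
	}
	return out
}

// sameIgnoringClusterFields reports whether two objects are equal once the
// cluster-populated metadata is cleared.
func sameIgnoringClusterFields(backup, cluster map[string]interface{}) bool {
	return reflect.DeepEqual(withoutClusterFields(backup), withoutClusterFields(cluster))
}

func main() {
	backup := map[string]interface{}{
		"metadata": map[string]interface{}{"name": "cert-manager-controller", "uid": "aaa"},
		"data":     map[string]interface{}{"level": "info"},
	}
	cluster := map[string]interface{}{
		"metadata": map[string]interface{}{"name": "cert-manager-controller", "uid": "bbb"},
		"data":     map[string]interface{}{"level": "debug"},
	}
	// uid differs but is ignored; data differs, so this prints false.
	fmt.Println(sameIgnoringClusterFields(backup, cluster))
}
```

This is why a restored object that differs only in server-populated metadata is treated as identical, while any real spec/data drift produces the "already exists and is different" warning.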

@ncdc
Contributor

ncdc commented May 10, 2018

@gianrubio for the warnings such as kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version., do you believe the items are identical and that Ark is not comparing them correctly?

Is there anything else you need for this issue, or would it be ok to close it?

@gianrubio
Contributor Author

@gianrubio for the warnings such as kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version., do you believe the items are identical and that Ark is not comparing them correctly?

The items are probably not equal, but I'd expect Ark to replace them.

@ncdc
Contributor

ncdc commented May 10, 2018

I think we'd need to provide a control for that behavior and let the user doing the restore decide if Ark should delete & recreate or no-op.

@ncdc
Contributor

ncdc commented May 10, 2018

Should we repurpose this issue as "RFE: option to delete & recreate objects that already exist when restoring"?

@gianrubio
Contributor Author

Yes, that was my point. Maybe a flag like --force could address this behaviour, WDYT?

@gianrubio gianrubio changed the title ark is failing to restore objects from a backup RFE: option to delete & recreate objects that already exist when restoring May 10, 2018
@ncdc
Contributor

ncdc commented May 10, 2018

cc @jbeda

@ncdc
Contributor

ncdc commented May 10, 2018

I'm thinking maybe something like --conflict-strategy with options replace (delete what's in the cluster and create what's in the backup) and preserve (keep what's in the cluster and record a warning, as we do now). (All names TBD)

@gianrubio
Contributor Author

gianrubio commented May 10, 2018

The proposal sounds good; I have only one thought. Deleting objects before applying will reschedule all of them, and doing that in a big cluster can cause issues. I'd rather delete only the objects that failed to apply the changes. Big warning on deleting volumes and PVCs.

@ncdc
Contributor

ncdc commented May 10, 2018

Yes, the flow would be:

  1. Try to create the object.
  2. If it failed because an item of the same name already exists:
    1. If the item in the backup and the item in the cluster are the same, no-op.
    2. Otherwise, check the conflict strategy and proceed with delete/create or logging a warning.
I also don't think we'd ever want to delete a PV or PVC. We have another issue open for cloning preexisting PVs into a cluster (#192). We'll need to make sure we special case things like PVs/PVCs here.
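That flow, including the PV/PVC special case, might look roughly like this. Everything here is a hypothetical sketch (toy in-memory "cluster", invented names like ConflictStrategy and restoreItem), not actual Ark code:

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// ConflictStrategy is a hypothetical knob like the proposed --conflict-strategy flag.
type ConflictStrategy int

const (
	Preserve ConflictStrategy = iota // keep the in-cluster object, record a warning
	Replace                          // delete the in-cluster object, recreate from backup
)

var errAlreadyExists = errors.New("already exists")

// Cluster is a toy stand-in for the API server: object bodies keyed by name.
type Cluster struct{ objs map[string]string }

func (c *Cluster) Create(name, body string) error {
	if _, ok := c.objs[name]; ok {
		return errAlreadyExists
	}
	c.objs[name] = body
	return nil
}

func (c *Cluster) Delete(name string) { delete(c.objs, name) }

// restoreItem follows the proposed flow: try to create; on a name conflict,
// no-op if identical, otherwise honor the strategy. PVs/PVCs are never
// deleted, matching the caveat in the discussion.
func restoreItem(c *Cluster, kind, name, body string, s ConflictStrategy) string {
	err := c.Create(name, body)
	if err == nil {
		return "created"
	}
	if !errors.Is(err, errAlreadyExists) {
		return "error: " + err.Error()
	}
	if reflect.DeepEqual(c.objs[name], body) {
		return "unchanged" // identical to backup: nothing to do
	}
	if s == Replace && kind != "persistentvolumes" && kind != "persistentvolumeclaims" {
		c.Delete(name)
		c.Create(name, body)
		return "replaced"
	}
	return "warning: already exists and is different from backed up version"
}

func main() {
	c := &Cluster{objs: map[string]string{"kube-dns": "v1"}}
	fmt.Println(restoreItem(c, "services", "kube-dns", "v2", Replace))          // replaced
	fmt.Println(restoreItem(c, "persistentvolumes", "pvc-138f", "x", Replace)) // no conflict: created
}
```

With Preserve, the conflicting service case would instead fall through to the same "already exists and is different" warning shown in the restore output above.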

@rosskukulinski
Contributor

User story:

As a cluster operator, I want to use Ark as a mechanism to keep two clusters in sync. This might be Prod A and Prod B, or alternatively every night mirror Production to Staging so that we have a fresh environment for testing/staging.

For stateless apps, this sounds like a healthy feature for us to add. I agree with Andy that we probably don't want to delete PV/PVC by default.

That said, if the use-case is mirroring Production to Staging, I don't want to keep around my old staging PV/PVCs. Perhaps we need another CLI flag for PV/PVC specifically? --conflict-strategy-volumes?

@rosskukulinski rosskukulinski added the Enhancement/User End-User Enhancement to Velero label Jun 14, 2018
@rosskukulinski rosskukulinski added this to the v1.0.0 milestone Jun 14, 2018
@rosskukulinski rosskukulinski modified the milestones: v1.0.0, v0.11.0 Oct 17, 2018
@rosskukulinski rosskukulinski added the Needs Product Blocked needing input or feedback from Product label Oct 17, 2018
@rosskukulinski
Contributor

@heptio/ark-team I'd like to propose resolving this as part of v0.11.0. We should discuss during the v0.11.0 planning meeting what might have to get pushed back to let this in.

Adding Needs Product label.

@ncdc
Contributor

ncdc commented Nov 28, 2018

@rosskukulinski can we talk about this soon?

@rosskukulinski
Contributor

@ncdc sure! Maybe Tuesday?

@ncdc
Contributor

ncdc commented Nov 28, 2018

Sounds good.

@skriss
Contributor

skriss commented Jan 24, 2019

@rosskukulinski I know we've gone back and forth on this a number of times. Is this actually a priority to solve and something we need to do as part of v0.11?

@nrb
Contributor

nrb commented Mar 2, 2020

@michmike What about other objects, besides pods?

@michmike
Contributor

michmike commented Mar 2, 2020

@nrb Yes, this should apply to all objects. My bad.

@skriss
Contributor

skriss commented Mar 2, 2020

Some complications to think through:

  • pods/other objects owned by controllers (they'll be recreated if we delete them in prep for a restore)
  • PVCs/PVs that are in use by a pod (deletes will be disallowed due to the PVC/PV in-use protection finalizer)

@nrb
Contributor

nrb commented Mar 2, 2020

PVCs/PVs that are in use by a pod (deletes will be disallowed due to the PVC/PV in-use protection finalizer)

Really any finalizer will be an issue; we can look for finalizers and log them.

There are also cascading deletes: if we delete an object, it may cause many other objects that reference it to be deleted. Pods are an easy example, but Custom Resources make this trickier, as we wouldn't be able to find all references to the current object unless we examined all objects in the cluster.

As I write this out, it seems like a case where 2 pass restores could help a lot. One pass where we see what needs to be restored and what might reference it, then the second pass to actually manipulate the cluster.
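That two-pass idea could be made concrete with something like the following. The shape is entirely hypothetical (invented Item type and plan function), meant only to illustrate a read-only first pass that flags conflicts and cascade risks before anything is mutated:

```go
package main

import "fmt"

// Item is a toy stand-in for a backed-up resource.
type Item struct {
	Name   string
	Owners []string // names of objects whose deletion would cascade to this one
}

// plan is pass one: classify every item against the live cluster and note
// cascade risks, without mutating anything. Pass two (not shown) would then
// apply creates and replacements best-effort, as suggested below.
func plan(items []Item, existing map[string]bool) (creates, conflicts []Item, cascadeRisks []string) {
	for _, it := range items {
		if existing[it.Name] {
			conflicts = append(conflicts, it)
		} else {
			creates = append(creates, it)
		}
		for _, o := range it.Owners {
			if existing[o] {
				// deleting/replacing the owner could delete this item too
				cascadeRisks = append(cascadeRisks, it.Name+" (owned by "+o+")")
			}
		}
	}
	return
}

func main() {
	existing := map[string]bool{"deploy/web": true}
	items := []Item{
		{Name: "deploy/web"},
		{Name: "pod/web-abc", Owners: []string{"deploy/web"}},
	}
	creates, conflicts, risks := plan(items, existing)
	fmt.Println(len(creates), len(conflicts), risks)
}
```

A real implementation would need ownerReferences and finalizer inspection rather than a flat name map, but the split between "decide" and "act" is the point.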

@michmike
Contributor

I like the idea of a 2-pass restore @nrb (and the second pass can be best-effort, while the first pass behaves just like restores do today).

@nrb nrb removed this from the v1.x milestone Dec 8, 2020
@eleanor-millman eleanor-millman added the Icebox We see the value, but it is not slated for the next couple releases. label May 3, 2021
@dsu-igeek dsu-igeek added Reviewed Q2 2021 and removed Icebox We see the value, but it is not slated for the next couple releases. labels May 3, 2021
@stale

stale bot commented Jul 8, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the staled label Jul 8, 2021
@invidian
Contributor

invidian commented Jul 8, 2021

Still important I guess?

@xenuser

xenuser commented Mar 23, 2022

We stumbled across the same behavior today. We would expect Velero to offer an option to detect and replace changed items, or to force-overwrite objects, maybe based on their "kind".

@reasonerjt reasonerjt added the kind/requirement, Reviewed Q2 2021, Needs Product, and Enhancement/User labels and removed the Enhancement/User, Needs Product, and Reviewed Q2 2021 labels May 20, 2022
@kaovilai
Contributor

@shubham-pampattiwar FYI, you already closed this via PR #4842.

@joyienjoy

joyienjoy commented Apr 10, 2023

With respect to the recently released version of Velero, v1.1.2, has anything been implemented that solves this issue? Or is there any alternate way to apply a restore forcefully even if the object is already present in the cluster?

@jglick

jglick commented Apr 10, 2023

You can just

kubectl patch -n velero backupstoragelocation/default --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'
function bsl_rw {
    kubectl patch -n velero backupstoragelocation/default --type merge --patch '{"spec":{"accessMode":"ReadWrite"}}'
}
trap bsl_rw EXIT
kubectl delete --ignore-not-found --wait ns …
velero restore create …

presuming you have taken care to back up everything of value in the namespace. Any PVs/PVCs will be recreated from backup volumes.

@pradeepkchaturvedi pradeepkchaturvedi added the 1.13-candidate issue/pr that should be considered to target v1.13 minor release label Aug 4, 2023
@reasonerjt
Contributor

Let's track it via #6142
