
Backups not syncing to new cluster for migration if owner reference does not exist #7031

Closed
deefdragon opened this issue Oct 28, 2023 · 3 comments · Fixed by #7032
@deefdragon

What steps did you take and what happened:
I created a Kubernetes cluster on DigitalOcean to verify my migration and backup plans. When attempting to validate my migration plan, the backups were recognized by Velero and logged as imported, but they do not show up in the cluster.

What did you expect to happen:

I expected velero backup describe to show the information for the backup that was logged as successfully synced to the cluster.

The following information will help us better understand what's going on:

If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue; for more options, refer to velero debug --help

An error occurred: backups.velero.io "" not found

Anything else you would like to add:
The logs specifically show all of the pre-existing backups as successfully migrated (this log line), but none of the backups show up via any of the methods I used to check for them (Lens, kubectl, velero backup describe, etc.).

Environment:

  • Velero version: 1.12
  • Velero features (use velero client config get features): NOT SET
  • Kubernetes version (use kubectl version): 1.23 on the old cluster, 1.28 on the new
  • Kubernetes installer & version: kubeadm install on DigitalOcean
  • Cloud provider or hardware configuration: moving from a local cluster to DigitalOcean
  • OS (e.g. from /etc/os-release): unknown

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"
@deefdragon (Author)

I was able to reproduce this with a local cluster and think I have determined the root cause.

I decided to add the backup manually to see if I could get better logs for the exact failure. After some digging, I was able to get the backup to load via kubectl apply if I deleted the owner reference. The backups I use are created from a schedule, and, since that schedule does not exist in the destination cluster, it appears the backup object could not be created.
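As a sketch of that manual workaround (a Python illustration of the manifest edit, not Velero code; the helper name and sample manifest are mine):

```python
def strip_owner_references(manifest: dict) -> dict:
    """Remove metadata.ownerReferences from an exported Backup manifest
    so it can be re-applied in a cluster where the owning Schedule
    does not exist. Illustrative sketch only."""
    metadata = manifest.get("metadata", {})
    metadata.pop("ownerReferences", None)
    return manifest

# Hypothetical Backup manifest as exported from the old cluster.
backup = {
    "apiVersion": "velero.io/v1",
    "kind": "Backup",
    "metadata": {
        "name": "daily-20231028",
        "ownerReferences": [
            {"apiVersion": "velero.io/v1", "kind": "Schedule", "name": "daily"}
        ],
    },
}

cleaned = strip_owner_references(backup)
print("ownerReferences" in cleaned["metadata"])  # False
```

After stripping the field, the manifest applied cleanly with kubectl apply.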

I think an extra step needs to be added to the sync code here: check that all of a backup's owner references resolve to existing objects and, if they do not, remove them.
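A minimal sketch of that check (Velero's sync controller is actually written in Go; the function and the referent_exists lookup here are hypothetical, shown in Python for brevity):

```python
def prune_missing_owner_refs(manifest: dict, referent_exists) -> dict:
    """Keep only owner references whose referent exists in the
    destination cluster; drop the field entirely if none remain.
    referent_exists is a caller-supplied (kind, name) -> bool lookup."""
    metadata = manifest.setdefault("metadata", {})
    refs = metadata.get("ownerReferences", [])
    kept = [r for r in refs if referent_exists(r["kind"], r["name"])]
    if kept:
        metadata["ownerReferences"] = kept
    else:
        metadata.pop("ownerReferences", None)
    return manifest

# Example: the owning Schedule was not migrated, so the reference is dropped
# and the Backup can be created in the new cluster.
existing = set()  # nothing exists in the destination cluster
backup = {"metadata": {"name": "daily-20231028",
                       "ownerReferences": [{"apiVersion": "velero.io/v1",
                                            "kind": "Schedule",
                                            "name": "daily"}]}}
pruned = prune_missing_owner_refs(
    backup, lambda kind, name: (kind, name) in existing)
print("ownerReferences" in pruned["metadata"])  # False
```

If the schedule does exist in the destination cluster, the reference is kept, so normal garbage-collection behavior is preserved.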

@deefdragon deefdragon changed the title Backups not syncing to new cluster for migration Backups not syncing to new cluster for migration if owner reference does not exist Oct 30, 2023
@deefdragon (Author)

Created a PR to address this.

@ywk253100 (Contributor)

Duplicate of #6857; let's use #6857 to track this and close this one.
