
Request to disable, or gracefully handle failure of, reading/creating the repository secret on server init #6540

Open
fryz opened this issue Jul 24, 2023 · 5 comments

fryz commented Jul 24, 2023

Describe the problem/challenge you have

We are currently trying to deploy Velero into a highly restricted environment, and one of the requirements for this environment is that reading and writing Secrets is not permitted.

Currently, this causes the Velero server to fail to initialize and the pod to enter CrashLoopBackOff: Velero tries to read the Repository Secret and, if the secret is not found, to create it.

The call stack is:

  • run() - server.go
  • initRepoManager() - server.go
  • EnsureCommonRepositoryKey() - keys.go

In EnsureCommonRepositoryKey, Velero tries to read the velero-repo-credentials secret; if that read fails (e.g. because the Kubernetes client doesn't have permission to read secrets), it returns an error that propagates up to the main server run method, causing the server to crash. (https://github.com/vmware-tanzu/velero/blob/main/pkg/repository/keys/keys.go#L40-L46)
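
To make the failure mode concrete, here is a minimal sketch of that flow in Go (an approximation based on the description above, not the verbatim keys.go source; the function shape and client type are assumptions):

// Approximate shape of the EnsureCommonRepositoryKey flow (sketch only).
// An RBAC "forbidden" error is neither nil nor NotFound, so it is returned
// as-is and propagates through initRepoManager() up to run(), crashing the pod.
package keys

import (
    "context"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

func ensureCommonRepositoryKeySketch(secrets corev1client.SecretInterface) error {
    _, err := secrets.Get(context.TODO(), "velero-repo-credentials", metav1.GetOptions{})
    if err == nil {
        return nil // key already exists; nothing to do
    }
    if !apierrors.IsNotFound(err) {
        return err // e.g. RBAC "forbidden": bubbles up and fails server init
    }
    // Not found: the real code generates a key and creates the secret here
    // (elided in this sketch).
    return nil
}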

For our specific use case of Velero, we are not using Repositories for backups - we are only using Velero to back up K8s manifests and take Volume Snapshots using the AWS Plugin (which uses IRSA annotations to bind IAM roles via ServiceAccounts).

Describe the solution you'd like

We'd like the ability to disable reading this secret during server initialization, or to move reading/creating the secret to runtime and handle failure gracefully, as our use case doesn't require this secret.

Anything else you would like to add:

Environment:

  • Velero version (use velero version): We are using 1.9, but this is present in 1.11 as well
  • Kubernetes version (use kubectl version): 1.25
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" at the top right of this comment to vote.

  • 👍 for "The project would be better with this feature added"
  • 👎 for "This feature will not enhance the project in a meaningful way"

fryz commented Jul 24, 2023

We've been discussing this in the #velero-devs Slack channel here:
https://kubernetes.slack.com/archives/C021GPR1L3S/p1690228855681089

Lyndon-Li (Contributor) commented

The requirement is clear.
However, after some internal discussion, we are not sure that generally restricting access to all secrets is a best practice for Kubernetes resource/permission management.

Therefore, we will wait for more input/use cases and then decide on changes for this requirement.


fryz commented Jul 31, 2023

Thanks @Lyndon-Li.

Just so that I understand, what do you mean by:

we are not sure that generally restricting access to all secrets is a best practice for Kubernetes resource/permission management

I'm not suggesting that Velero be updated in a way that accommodates different secret stores. Instead, we'd be fine with just having the ability to ignore reading/creating the repository secret during server initialization.

Specifically, something like the pseudo-code below (sketched here in rough Go; the exact call signature may differ) would be sufficient for our use case:

// In server.go's initRepoManager (exact arguments may differ):
if err := keys.EnsureCommonRepositoryKey(secretsClient, s.namespace); err != nil {
    s.logger.WithError(err).Warn("Can't initialize repository keys - filesystem-level backups will not work")
}

Lyndon-Li (Contributor) commented

@fryz
We've checked the code, and things are not that simple; the repo key is touched in multiple places across different flows. For example, other code paths also read the repo key, and if the key object is invalid or empty, we would need to further handle the errors generated at those places.
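
To illustrate that concern, here is a hedged sketch (the function and package names are hypothetical, not Velero's actual API): once server init merely warns instead of guaranteeing the secret exists, every downstream consumer of the key needs its own error handling, for example:

// Hypothetical downstream consumer of the repo key in another flow.
// If init no longer guarantees the secret exists, each call site like this
// would hit a NotFound/forbidden error and need its own graceful fallback.
package repoflow

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

func connectToRepo(secrets corev1client.SecretInterface) error {
    key, err := secrets.Get(context.TODO(), "velero-repo-credentials", metav1.GetOptions{})
    if err != nil {
        // New error path created by letting server init skip key creation.
        return fmt.Errorf("repository key unavailable: %w", err)
    }
    _ = key.Data // use the key material to open the repository (elided)
    return nil
}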


fryz commented Aug 1, 2023

Understood - thanks for looking into this.
