backupstoragelocation can't be completely configured via Helm #294

Open
aceeric opened this issue Aug 16, 2021 · 7 comments
Labels
good first issue Good for newcomers velero

Comments

@aceeric

aceeric commented Aug 16, 2021

I'm installing Velero using the Helm chart https://github.com/vmware-tanzu/helm-charts/releases/tag/velero-2.23.3 with MinIO as the backing S3 service.

The backupstoragelocation was never progressing to the Available phase, and backups were failing. I looked at this documentation: https://velero.io/docs/v1.6/troubleshooting/#is-velero-using-the-correct-cloud-credentials, under the "Troubleshooting BackupStorageLocation credentials" header, regarding the .spec.credential.key and .spec.credential.name fields. I hand-patched those into the backupstoragelocation in the cluster with the values cloud and cloud-credentials respectively, and suddenly everything worked. (I was already patching in the caCert field.)
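For reference, the hand-patch described above amounts to adding a block like this to the BSL spec (a sketch; cloud-credentials and cloud are simply the values mentioned above, and may differ in your deployment):

```yaml
# Hand-patched into the BackupStorageLocation spec, per the troubleshooting doc
spec:
  credential:
    name: cloud-credentials  # name of the secret holding the credentials
    key: cloud               # key within that secret
```

Applied, for example, via `kubectl -n velero edit backupstoragelocation default`.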

The problem is, the Helm chart does not appear to provide a way to do that. The backupstoragelocation.yaml in the templates directory and the values.yaml do not appear to have a way to specify this, so it looks like I need to patch it after the chart deploys. Do you think I'm missing something?

Thanks.

@zubron zubron transferred this issue from vmware-tanzu/velero Aug 23, 2021
@jenting jenting added the velero label Aug 26, 2021
@jenting
Collaborator

jenting commented Sep 9, 2021

In general, when I deploy Velero, I prepare the credentials-velero file locally, then pass it during helm install:

helm install velero \
   ...
   --set-file credentials.secretContents.cloud=credentials-velero \
   ...

Or, to use a pre-existing secret, set credentials.existingSecret: https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.3/charts/velero/values.yaml#L273.

Or, you could specify it in https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.3/charts/velero/values.yaml#L282-L287.
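Putting those options together, a values file using a pre-existing secret might look roughly like this (a sketch; velero-s3-creds is a placeholder name, and only the credentials.useSecret and credentials.existingSecret keys are confirmed in this thread):

```yaml
# Sketch of the chart's credentials block, per the linked values.yaml
credentials:
  useSecret: true                  # mount a secret into the Velero pod
  existingSecret: velero-s3-creds  # placeholder: your pre-created secret
```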

@jgilfoil

I think what he's saying is that entries here (https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.3/charts/velero/values.yaml#L273) don't get applied to https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.6/charts/velero/templates/backupstoragelocation.yaml. I have the credentials.existingSecret value set here: https://github.com/jgilfoil/k8s-gitops/blob/main/cluster/apps/velero/helm-release.yaml#L46-L47, but the resulting object in my cluster doesn't get the credential applied:

vagrant@control:/code/k8s-gitops$ kubectl get backupstoragelocations -n velero default -o yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  annotations:
    helm.sh/hook: post-install,post-upgrade,post-rollback
    helm.sh/hook-delete-policy: before-hook-creation
  creationTimestamp: "2021-09-12T01:33:21Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: velero
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: velero
    helm.sh/chart: velero-2.23.6
  name: default
  namespace: velero
  resourceVersion: "26461410"
  uid: 2ffbd94e-3649-4192-b1f6-26593c1ba426
spec:
  config:
    region: us-east-1
    s3ForcePathStyle: "true"
    s3Url: http://<minio_address>:9000
  default: true
  objectStorage:
    bucket: velero
  provider: aws
status:
  lastValidationTime: "2021-09-12T18:05:08Z"
  phase: Unavailable

The credential secret does exist and is mounted into the pod, however:

vagrant@control:/code/k8s-gitops$ kubectl -n velero describe pod -l name=velero 
Name:         velero-67c547d658-bvtv7
Namespace:    velero
Priority:     0
...<snipped for brevity>
    Mounts:
      /credentials from cloud-credentials (rw)
      /plugins from plugins (rw)
      /scratch from scratch (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from velero-server-token-ssrf2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cloud-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  velero-s3-creds
    Optional:    false
  plugins:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  scratch:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  velero-server-token-ssrf2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  velero-server-token-ssrf2
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

vagrant@control:/code/k8s-gitops$ kubectl describe secret -n velero velero-s3-creds
Name:         velero-s3-creds
Namespace:    velero
Labels:       kustomize.toolkit.fluxcd.io/name=apps
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  kustomize.toolkit.fluxcd.io/checksum: f5f71438f60014cebb79536703be7606547cc615

Type:  Opaque

Data
====
cloud:  86 bytes

@jgilfoil

Btw, for what it's worth, the issue I was having that led me here actually had nothing to do with the credentials not being attached to the backupstoragelocation. I got my backups working without that being set, which leads me to believe the troubleshooting steps at https://velero.io/docs/v1.6/troubleshooting/#troubleshooting-backupstoragelocation-credentials are incorrect, since backups work fine without those creds being set there.

@cqc5511

cqc5511 commented Oct 5, 2021

Seeing a similar issue as above. If we set credentials.useSecret to false, the rendered BSL still includes a credential section with a credential.key field, and because the referenced secret does not exist, the BSL never becomes Available without manual intervention to remove the credential:

error="unable to get credentials: unable to get key for secret: Secret \"\" not found" error.file="/go/src/github.com/vmware-tanzu/velero/internal/credentials/file_store.go:69" error.function="github.com/vmware-tanzu/velero/internal/credentials.(*namespacedFileStore).Path" logSource="pkg/controller/backup_sync_controller.go:175"

@jenting jenting added good first issue Good for newcomers and removed pending user response labels Oct 13, 2021
@demisx

demisx commented Dec 31, 2021

I couldn't get it working with existingSecret: bsl-credentials, creating the secret with an aws key per the docs:

kubectl create secret generic -n velero bsl-credentials --from-file=aws=/tmp/bsl-credentials.txt

It worked when I changed the aws key to cloud, though:

kubectl create secret generic -n velero bsl-credentials --from-file=cloud=/tmp/bsl-credentials.txt

@jenting
Collaborator

jenting commented Jan 3, 2022

I couldn't get it working with existingSecret: bsl-credentials, creating the secret with an aws key per the docs:

kubectl create secret generic -n velero bsl-credentials --from-file=aws=/tmp/bsl-credentials.txt

It worked when I changed the aws key to cloud, though:

kubectl create secret generic -n velero bsl-credentials --from-file=cloud=/tmp/bsl-credentials.txt

🤔 Probably an error in the plugin's README.
The Helm chart's values.yaml indicates that the key should be cloud.

@Rohmilchkaese

Bumping this one - it still seems to be an issue!

Also take a look at #6601.


7 participants
@jgilfoil @demisx @cqc5511 @aceeric @Rohmilchkaese @jenting and others