Is your feature request related to a problem? Please describe.
We use terraform to create the datacenter objects in Kubernetes. When a restore is applied, medusa-operator adds RESTORE_KEY and BACKUP_NAME env variables to the medusa container. The next time we run terraform apply, this shows up as a change, and the apply even fails due to field_manager conflicts.
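For context, a minimal sketch of how such a resource might be declared with the terraform kubernetes provider. The resource name, the databases/cassandra-1 identifiers, and the apiVersion come from the error output below; the spec values are illustrative assumptions only:

```hcl
resource "kubernetes_manifest" "cassandra_datacenter" {
  manifest = {
    apiVersion = "cassandra.datastax.com/v1beta1"
    kind       = "CassandraDatacenter"
    metadata = {
      name      = "cassandra-1"
      namespace = "databases"
    }
    spec = {
      # Illustrative values, not taken from the issue.
      clusterName   = "cassandra"
      serverType    = "cassandra"
      serverVersion = "4.0.1"
      size          = 3
      # After a restore, medusa-operator injects RESTORE_KEY and BACKUP_NAME
      # env vars into spec.podTemplateSpec, which terraform then sees as
      # drift on the next plan/apply.
    }
  }
}
```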
Describe the solution you'd like
After a restore is complete, medusa-operator should remove the environment variables it has added.
Describe alternatives you've considered
None.
Additional context
The diff from terraform:
Error from terraform on apply failure:
╷
│ Error: There was a field manager conflict when trying to apply the manifest for "databases/cassandra-1"
│
│ with module.cassandra_1.kubernetes_manifest.cassandra_datacenter,
│ on ../../../../tf-modules/eks-cassandra-datacenter/cassandra_datacenter.tf line 15, in resource "kubernetes_manifest" "cassandra_datacenter":
│ 15: resource "kubernetes_manifest" "cassandra_datacenter" {
│
│ The API returned the following conflict: "Apply failed with 1 conflict: conflict with \"manager\" using cassandra.datastax.com/v1beta1: .spec.podTemplateSpec.spec.initContainers"
│
│ You can override this conflict by setting "force_conflicts" to true in the "field_manager" block.
╵
Using force_conflicts seems a bit dangerous, given that it will overwrite anything else that could be important.
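For reference, the override that the error message points at would look roughly like this; a sketch only, not a recommendation, for exactly the reason above (the manifest source shown here is hypothetical):

```hcl
resource "kubernetes_manifest" "cassandra_datacenter" {
  # Hypothetical manifest source; the real module builds this elsewhere.
  manifest = yamldecode(file("${path.module}/cassandra-datacenter.yaml"))

  field_manager {
    # Make terraform take ownership of conflicting fields on apply. This
    # overwrites fields set by other managers, e.g. the operator-added
    # RESTORE_KEY/BACKUP_NAME env vars, along with anything else they own.
    force_conflicts = true
  }
}
```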
┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1183
┆priority: Medium
Hi, yes, I performed a restore, but those env variables are left behind after the restore completes. My team uses terraform to manage Kubernetes resources, and terraform started noticing these new changes. We've since migrated to k8ssandra-operator v2 and no longer manage this resource ourselves, so this is not a problem for us anymore. Keeping this open in case someone else runs into it.
This is indeed problematic: After performing a successful restore, I can't scale the cluster up (add new nodes by increasing the 'size' in the datacenter) because new pods crash with "no such backup".
How can I remove these environment variables manually after completing the restore so I can scale the datacenter?