What happened?
We are configuring mTLS with Kubernetes Secrets, following this guide: https://docs.redpanda.com/current/manage/kubernetes/security/tls/k-secrets/
We are using an internal PKI, consumed via cert-manager, which generates the secrets used in this setup.
Our PKI returns the certificate chain in ca.crt up to the last intermediate signing authority. The root certificate of our company therefore needs to be provided in the trustStore configuration (https://docs.redpanda.com/current/manage/kubernetes/security/tls/k-secrets/#configure-a-truststore). We then intend to enable mTLS-based authorisation using specific kafka_mtls_principal_mapping_rules: https://docs.redpanda.com/current/manage/security/authentication/#mtls
This requires us to be in full control of the "admin" client cert (using the same PKI), so we inject it via the undocumented (!) .Values.tls.certs.clientSecretRef, which is also handed over to the console. This results in the following values.yaml (only the relevant parts included, hopefully):
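A minimal sketch of the relevant parts; the secret names and the mapping rule below are illustrative placeholders, not our real values:

```yaml
# Illustrative sketch only -- secret names and the mapping rule are placeholders.
tls:
  enabled: true
  certs:
    default:
      secretRef:
        name: redpanda-internal-cert       # tls.crt/tls.key/ca.crt from cert-manager
      caEnabled: true
      # Undocumented: client cert handed to pandaproxy/schema registry/console
      clientSecretRef:
        name: redpanda-admin-client-cert
listeners:
  kafka:
    tls:
      cert: default
      requireClientAuth: true
      trustStore:
        secretRef:
          name: company-root-ca            # company root certificate
          key: ca.crt
config:
  cluster:
    kafka_mtls_principal_mapping_rules:
      - "RULE:.*CN=([^,]+).*/$1/"
      - "DEFAULT"
```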
This results in a running and functional cluster:
but the pandaproxy and schemaregistry are not functional:
Logs in Redpanda:
Looking at redpanda.yaml (the schema_registry_client section looks the same):
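Roughly, the rendered pandaproxy_client section looks like this; the broker address and mount paths are approximations based on the chart's defaults, not the literal file:

```yaml
# Rough reconstruction (address and paths are assumptions), showing the
# problem: truststore_file points at the default cert's ca.crt rather than
# at the configured trustStore secret.
pandaproxy_client:
  brokers:
    - address: redpanda-0.redpanda.<namespace>.svc.cluster.local.
      port: 9093
  broker_tls:
    enabled: true
    cert_file: /etc/tls/certs/default/tls.crt
    key_file: /etc/tls/certs/default/tls.key
    truststore_file: /etc/tls/certs/default/ca.crt   # should be the trustStore path
```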
We can see that truststore_file points to the tls.certs.default CA, but it should actually honour the trustStore settings.
The Console pod also does not start properly because pandaproxy and schema registry are non-functional: they cannot connect to the internal cluster API.
What did you expect to happen?
A properly configured pandaproxy and schema registry, able to authenticate against the cluster API and to validate its certificate with a provided truststore file (as a Kubernetes Secret), just like for the listeners.
How can we reproduce it (as minimally and precisely as possible)? Please include values file.
See above; the relevant values.yaml parts are provided.
Anything else we need to know?
https://github.com/redpanda-data/helm-charts/blob/main/charts/redpanda/configmap.tpl.go#L408
should probably read:
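A sketch of the intended logic only: the client's truststore_file should fall back to the default cert's CA only when no trustStore is configured. All type, field, and function names below are hypothetical placeholders, not the chart's real gotohelm identifiers:

```go
package main

import "fmt"

// Hypothetical placeholder types standing in for the chart's gotohelm structs.
type TrustStore struct {
	MountPath string // where the trustStore secret's key is mounted in the pod
}

type ListenerTLS struct {
	TrustStore *TrustStore
}

// truststoreFileFor returns the truststore path for the pandaproxy / schema
// registry client config: the listener's configured trustStore when one is
// set, otherwise the default certificate's CA bundle.
func truststoreFileFor(tls *ListenerTLS, defaultCAPath string) string {
	if tls != nil && tls.TrustStore != nil {
		return tls.TrustStore.MountPath
	}
	return defaultCAPath
}

func main() {
	withTS := &ListenerTLS{TrustStore: &TrustStore{MountPath: "/etc/truststore/ca.crt"}}
	fmt.Println(truststoreFileFor(withTS, "/etc/tls/certs/default/ca.crt"))          // trustStore wins
	fmt.Println(truststoreFileFor(&ListenerTLS{}, "/etc/tls/certs/default/ca.crt")) // falls back to default CA
}
```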
Which are the affected charts?
No response
Chart Version(s)
Cloud provider
None, on-prem setup.
JIRA Link: K8S-395