Replies: 4 comments 12 replies
-
Sounds like you are using the wrong image. Are you sure the image is for Strimzi 0.40.0? Does it have this version in its name?
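For example, something like this (plain kubectl, using the pod name that appears later in this thread) shows which image the pod is actually running, so you can confirm the tag contains 0.40.0:

```bash
# Print the image of the kafka container in the broker pod
kubectl get pod demo1-cluster-kafka-0 -n demo1 \
  -o jsonpath='{.spec.containers[?(@.name=="kafka")].image}{"\n"}'
```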
-
@ppatierno @scholzj Here is the StrimziPodSet definition.
```yaml
apiVersion: core.strimzi.io/v1beta2
kind: StrimziPodSet
metadata:
  annotations:
    strimzi.io/kafka-version: 3.7.0
    strimzi.io/storage: '{"type":"persistent-claim","size":"10Gi","class":"tanzu","deleteClaim":true}'
  creationTimestamp: "2024-04-06T21:29:51Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: demo1-cluster
    app.kubernetes.io/managed-by: strimzi-cluster-operator
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: strimzi-demo1-cluster
    strimzi.io/cluster: demo1-cluster
    strimzi.io/component-type: kafka
    strimzi.io/kind: Kafka
    strimzi.io/name: demo1-cluster-kafka
    strimzi.io/pool-name: kafka
  name: demo1-cluster-kafka
  namespace: demo1
  ownerReferences:
  - apiVersion: kafka.strimzi.io/v1beta2
    blockOwnerDeletion: false
    controller: false
    kind: Kafka
    name: demo1-cluster
    uid: 9d816ab8-66e4-4aea-8cb8-5b393269ea80
  resourceVersion: "25007770"
  uid: cedbb9ff-d960-4c01-b2fa-f5ab08f03535
spec:
  pods:
  - apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        strimzi.io/broker-configuration-hash: ba90a659
        strimzi.io/clients-ca-cert-generation: "0"
        strimzi.io/cluster-ca-cert-generation: "0"
        strimzi.io/cluster-ca-key-generation: "0"
        strimzi.io/inter-broker-protocol-version: "3.7"
        strimzi.io/kafka-version: 3.7.0
        strimzi.io/log-message-format-version: "3.7"
        strimzi.io/logging-appenders-hash: e893ac9f
        strimzi.io/revision: 9a8dcda3
        strimzi.io/server-cert-hash: 009259e26a548e1e49aa40edee25062ad7209e5d
      labels:
        app.kubernetes.io/instance: demo1-cluster
        app.kubernetes.io/managed-by: strimzi-cluster-operator
        app.kubernetes.io/name: kafka
        app.kubernetes.io/part-of: strimzi-demo1-cluster
        statefulset.kubernetes.io/pod-name: demo1-cluster-kafka-0
        strimzi.io/broker-role: "true"
        strimzi.io/cluster: demo1-cluster
        strimzi.io/component-type: kafka
        strimzi.io/controller: strimzipodset
        strimzi.io/controller-name: demo1-cluster-kafka
        strimzi.io/controller-role: "false"
        strimzi.io/kind: Kafka
        strimzi.io/name: demo1-cluster-kafka
        strimzi.io/pod-name: demo1-cluster-kafka-0
        strimzi.io/pool-name: kafka
      name: demo1-cluster-kafka-0
      namespace: demo1
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - CRX
      containers:
      - args:
        - /opt/kafka/kafka_run.sh
        env:
        - name: KAFKA_METRICS_ENABLED
          value: "false"
        - name: STRIMZI_KAFKA_GC_LOG_ENABLED
          value: "false"
        - name: STRIMZI_DYNAMIC_HEAP_PERCENTAGE
          value: "50"
        - name: STRIMZI_DYNAMIC_HEAP_MAX
          value: "5368709120"
        image: quay.io/strimzi/kafka:0.40.0-kafka-3.7.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /opt/kafka/kafka_liveness.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        name: kafka
        ports:
        - containerPort: 9090
          name: tcp-ctrlplane
          protocol: TCP
        - containerPort: 9091
          name: tcp-replication
          protocol: TCP
        - containerPort: 9092
          name: tcp-clients
          protocol: TCP
        - containerPort: 9093
          name: tcp-clientstls
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /opt/kafka/kafka_readiness.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 512Mi
        volumeMounts:
        - mountPath: /var/lib/kafka/data
          name: data
        - mountPath: /tmp
          name: strimzi-tmp
        - mountPath: /opt/kafka/cluster-ca-certs
          name: cluster-ca
        - mountPath: /opt/kafka/broker-certs
          name: broker-certs
        - mountPath: /opt/kafka/client-ca-certs
          name: client-ca-cert
        - mountPath: /opt/kafka/custom-config/
          name: kafka-metrics-and-logging
        - mountPath: /var/opt/kafka
          name: ready-files
      hostname: demo1-cluster-kafka-0
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 0
      serviceAccountName: demo1-cluster-kafka
      subdomain: demo1-cluster-kafka-brokers
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-demo1-cluster-kafka-0
      - emptyDir:
          medium: Memory
          sizeLimit: 5Mi
        name: strimzi-tmp
      - name: cluster-ca
        secret:
          defaultMode: 292
          secretName: demo1-cluster-cluster-ca-cert
      - name: broker-certs
        secret:
          defaultMode: 292
          secretName: demo1-cluster-kafka-brokers
      - name: client-ca-cert
        secret:
          defaultMode: 292
          secretName: demo1-cluster-clients-ca-cert
      - configMap:
          name: demo1-cluster-kafka-0
        name: kafka-metrics-and-logging
      - emptyDir:
          medium: Memory
          sizeLimit: 1Ki
        name: ready-files
  selector:
    matchLabels:
      strimzi.io/cluster: demo1-cluster
      strimzi.io/kind: Kafka
      strimzi.io/name: demo1-cluster-kafka
      strimzi.io/pool-name: kafka
status:
  currentPods: 1
  observedGeneration: 1
  pods: 1
  readyPods: 0
```
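If it helps to narrow this down: the pod spec above execs the probes by absolute path, while the probe scripts source kraft_utils.sh by relative path, so it may be worth checking what working directory the container actually starts in and whether the utility is where the scripts expect it. These are standard kubectl commands; the /opt/kafka location is taken from the bug description below, not verified here:

```bash
# Working directory kubectl exec lands in (often, but not always, the same one exec probes get)
kubectl exec -n demo1 demo1-cluster-kafka-0 -c kafka -- pwd

# Is kraft_utils.sh present at the path the probe scripts expect?
kubectl exec -n demo1 demo1-cluster-kafka-0 -c kafka -- ls -l /opt/kafka/kraft_utils.sh

# Run the liveness script by hand to see the exact error the kubelet would get
kubectl exec -n demo1 demo1-cluster-kafka-0 -c kafka -- /opt/kafka/kafka_liveness.sh
echo "exit code: $?"
```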
-
By editing and removing the […]. Upon […], here is the dir listing: […]
There is no […]. Here is the dir listing of […]: […]
Here is the listing of the two files, kafka_liveness.sh and kafka_readiness.sh:

kafka_liveness.sh:

```bash
#!/usr/bin/env bash
set -e
source ./kraft_utils.sh
USE_KRAFT=$(useKRaft)
if [ "$USE_KRAFT" == "true" ]; then
  for proc in /proc/*[0-9]; do
    if readlink -f "$proc"/exe | grep -q java; then exit 0; fi;
  done
else
  # Test ZK-based broker liveness
  # We expect that either the broker is ready and listening on 9091 (replication port)
  # or it has a ZK session
  if [ -f /var/opt/kafka/kafka-ready ] ; then
    rm -f /var/opt/kafka/zk-connected 2&> /dev/null
    # Test listening on replication port 9091
    netstat -lnt | grep -Eq 'tcp6?[[:space:]]+[0-9]+[[:space:]]+[0-9]+[[:space:]]+[^ ]+:9091.*LISTEN[[:space:]]*'
  else
    # Not yet ready, so test ZK connected state
    test -f /var/opt/kafka/zk-connected
  fi
fi
```

kafka_readiness.sh:

```bash
#!/usr/bin/env bash
set -e
source ./kraft_utils.sh
USE_KRAFT=$(useKRaft)
if [ "$USE_KRAFT" == "true" ]; then
  # Test KRaft broker/controller readiness
  . ./kafka_readiness_kraft.sh
else
  # Test ZK-based broker readiness
  # The kafka-agent will create /var/opt/kafka/kafka-ready in the container when the broker
  # state is >= 3 && != 127 (UNKNOWN state)
  test -f /var/opt/kafka/kafka-ready
fi
```

IMHO, the […]
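One possible way to make these scripts independent of the caller's working directory (a sketch of my own, not the project's actual fix) is to resolve kraft_utils.sh relative to the script's own location:

```bash
#!/usr/bin/env bash
set -e

# Hypothetical change: locate kraft_utils.sh next to this script instead of
# relying on the current working directory being /opt/kafka.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/kraft_utils.sh"

USE_KRAFT=$(useKRaft)
# ... rest of kafka_liveness.sh / kafka_readiness.sh unchanged ...
```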
-
@scholzj @ppatierno I deployed on a GKE cluster and I see a similar failure: the liveness and readiness probes are failing.
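In case it is useful for comparing the two environments, the kubelet's probe errors usually show up in the pod events; something along these lines (generic kubectl, reusing the pod and namespace names from the earlier example) captures them:

```bash
# Probe failure messages reported by the kubelet
kubectl describe pod demo1-cluster-kafka-0 -n demo1
kubectl get events -n demo1 \
  --field-selector involvedObject.name=demo1-cluster-kafka-0 \
  --sort-by=.lastTimestamp
```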
-
Bug Description
Kafka pod fails to start.
Looks like /opt/kafka/kafka_liveness.sh is calling ./kraft_utils.sh without setting the current directory to /opt/kafka.
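A quick way to check that hypothesis (hypothetical commands, reusing the pod from the thread above) is to run the probe script once from a different working directory and once from /opt/kafka; if the relative source is the problem, only the second invocation succeeds:

```bash
# Expected to fail if ./kraft_utils.sh is resolved against the wrong directory
kubectl exec -n demo1 demo1-cluster-kafka-0 -c kafka -- \
  sh -c 'cd / && /opt/kafka/kafka_liveness.sh'

# Expected to succeed when run from /opt/kafka
kubectl exec -n demo1 demo1-cluster-kafka-0 -c kafka -- \
  sh -c 'cd /opt/kafka && /opt/kafka/kafka_liveness.sh'
```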
Steps to reproduce
No response
Expected behavior
No response
Strimzi version
0.40.0
Kubernetes version
1.26
Installation method
YAML files
Infrastructure
vSphere with Tanzu
Configuration files and logs
Additional context
No response