
MongoDB CrashLoopBackOff Pod #15

Open
skarkarof opened this issue Mar 5, 2024 · 3 comments

skarkarof commented Mar 5, 2024

Hello,

I'm following the Self-Hosted Installation Wiki for Kerberos Factory.

I'm using an Ubuntu 22.04 VPS. I followed the entire step-by-step guide, but the MongoDB pod does not work:

mongodb mongodb-5b7545f4f-gc986 0/1 CrashLoopBackOff 13 (4m57s ago) 46m

See the screenshots below for the errors. I've been racking my brain for hours and I can't figure it out.

[Screenshot from 2024-03-05 17-37-37]

kubectl describe pod mongodb-5b7545f4f-gc986 -n mongodb

Name:             mongodb-5b7545f4f-gc986
Namespace:        mongodb
Priority:         0
Service Account:  mongodb
Node:             nw-kerberos/10.158.0.11
Start Time:       Tue, 05 Mar 2024 16:43:51 -0300
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=mongodb
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=7.0.6
                  helm.sh/chart=mongodb-14.12.3
                  pod-template-hash=5b7545f4f
Annotations:      cni.projectcalico.org/containerID: c1834c039d7196a8338e476ff8803a31a4cc684a2847e28938553837acd71265
                  cni.projectcalico.org/podIP: 10.244.38.218/32
                  cni.projectcalico.org/podIPs: 10.244.38.218/32
Status:           Running
IP:               10.244.38.218
IPs:
  IP:  10.244.38.218
Controlled By:  ReplicaSet/mongodb-5b7545f4f
Containers:
  mongodb:
    Container ID:    cri-o://8e4c050f500a049ddff098c25b6885e426655ce1ece907f43b320b04d25a4d75
    Image:           docker.io/bitnami/mongodb:4.4.15-debian-10-r8
    Image ID:        docker.io/bitnami/mongodb@sha256:916202d7af766dd88c2fff63bf711162c9d708ac7a3ffccd2aa812e3f03ae209
    Port:            27017/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Waiting
      Reason:        CrashLoopBackOff
    Last State:      Terminated
      Reason:        Error
      Exit Code:     2
      Started:       Tue, 05 Mar 2024 17:35:27 -0300
      Finished:      Tue, 05 Mar 2024 17:35:27 -0300
    Ready:           False
    Restart Count:   15
    Liveness:        exec [/bitnami/scripts/ping-mongodb.sh] delay=30s timeout=10s period=20s #success=1 #failure=6
    Readiness:       exec [/bitnami/scripts/readiness-probe.sh] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                    false
      MONGODB_ROOT_USER:                root
      MONGODB_ROOT_PASSWORD:            <set to the key 'mongodb-root-password' in secret 'mongodb'>  Optional: false
      ALLOW_EMPTY_PASSWORD:             no
      MONGODB_SYSTEM_LOG_VERBOSITY:     0
      MONGODB_DISABLE_SYSTEM_LOG:       no
      MONGODB_DISABLE_JAVASCRIPT:       no
      MONGODB_ENABLE_JOURNAL:           yes
      MONGODB_PORT_NUMBER:              27017
      MONGODB_ENABLE_IPV6:              no
      MONGODB_ENABLE_DIRECTORY_PER_DB:  no
    Mounts:
      /bitnami/mongodb from datadir (rw)
      /bitnami/scripts from common-scripts (rw)
      /opt/bitnami/mongodb/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mongodb/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mongodb/tmp from empty-dir (rw,path="app-tmp-dir")
      /tmp from empty-dir (rw,path="tmp-dir")
Conditions:
  Type                       Status
  PodReadyToStartContainers  True
  Initialized                True
  Ready                      False
  ContainersReady            False
  PodScheduled               True
Volumes:
  empty-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  common-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongodb-common-scripts
    Optional:  false
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb
    ReadOnly:   false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  54m                 default-scheduler  Successfully assigned mongodb/mongodb-5b7545f4f-gc986 to nw-kerberos
  Normal   Pulled     52m (x5 over 54m)   kubelet            Container image "docker.io/bitnami/mongodb:4.4.15-debian-10-r8" already present on machine
  Normal   Created    52m (x5 over 54m)   kubelet            Created container mongodb
  Normal   Started    52m (x5 over 54m)   kubelet            Started container mongodb
  Warning  BackOff    4m (x244 over 54m)  kubelet            Back-off restarting failed container mongodb in pod mongodb-5b7545f4f-gc986_mongodb(fc6aa2b7-5e49-42cd-a94d-202a5e241260)


LOGS:
root@nw-kerberos:/home/kuesttman/factory/kubernetes/mongodb# k logs pods/mongodb-5b7545f4f-gc986 -n mongodb
mongodb 20:40:28.58
mongodb 20:40:28.59 Welcome to the Bitnami mongodb container
mongodb 20:40:28.59 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 20:40:28.59 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 20:40:28.60
mongodb 20:40:28.60 INFO ==> ** Starting MongoDB setup **
mongodb 20:40:28.63 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 20:40:28.70 INFO ==> Initializing MongoDB...
sed: can't read /opt/bitnami/mongodb/conf/mongodb.conf: No such file or directory

[Screenshot from 2024-03-05 17-41-50]
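One detail stands out in the describe output above: the pod labels report app.kubernetes.io/version=7.0.6 and helm.sh/chart=mongodb-14.12.3, yet the container runs docker.io/bitnami/mongodb:4.4.15-debian-10-r8. The "sed: can't read /opt/bitnami/mongodb/conf/mongodb.conf" failure is consistent with that mismatch: the 4.4-era init scripts expect a mongodb.conf under /opt/bitnami/mongodb/conf, while the 14.x chart mounts that path as an empty directory. A minimal sketch of how to check and realign the two (the values.yaml path comes from this thread; the grep pattern and the chart choices are assumptions):

grep -n -A3 'image:' ./mongodb/values.yaml    # check for a pinned tag such as 4.4.15-debian-10-r8
helm search repo bitnami/mongodb --versions   # list chart versions alongside their app versions
# Option A: drop the image.tag override so chart 14.12.3 runs its default 7.0.x image
helm upgrade mongodb bitnami/mongodb -n mongodb --version 14.12.3 --values ./mongodb/values.yaml
# Option B: keep the 4.4 image and pin an older chart (the fix reported later in this thread)
helm upgrade mongodb bitnami/mongodb -n mongodb --version 14.8.0 --values ./mongodb/values.yaml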

skarkarof (Author) commented:

Can anyone help?

oliviawindsir commented:

Hey, I am also having a problem with my MongoDB pod, but a rather different one: mine is in a Pending state rather than CrashLoopBackOff. I have not found a solution for mine yet, but maybe you should check whether your load balancer services are all running well?

Could you post the output of kubectl get all -A?
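For reference, this is roughly what I would run to triage (a generic sketch, not output from this thread; substitute your own pod name):

kubectl get all -A                          # look for pods stuck in Pending or CrashLoopBackOff
kubectl get svc -A                          # LoadBalancer services should show an EXTERNAL-IP, not <pending>
kubectl describe pod <pod-name> -n mongodb  # the Events section usually names the root cause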

oliviawindsir commented May 27, 2024

I ended up with the same issue and bumped into this ticket in the Bitnami repo here. However, I took another workaround, which was to set up MongoDB Atlas and create a free database to point to. So I skipped setting up MongoDB with the Bitnami Helm chart and just ran the ConfigMap setup to point to my hosted MongoDB. Now my system works.

Update: @skarkarof The MongoDB Atlas approach above won't work because Kerberos Factory is not able to resolve the IP of the Atlas cluster. You will still need to create a MongoDB instance with the Bitnami chart, which results in a single IP that Kerberos Factory can access.

My working solution was to choose a release that is not affected by the issue reported in the Bitnami repo. I used helm install mongodb -n mongodb bitnami/mongodb --version 14.8.0 --values ./mongodb/values.yaml and it works. You will also need to update mongodb.config.yaml to change MONGODB_HOST to mongodb.mongodb.svc.cluster.local (see the sketch below).
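For example, the host change can be made like this (a sketch; it assumes MONGODB_HOST sits on its own line in mongodb.config.yaml as in the factory wiki, so adjust the pattern to your copy):

sed -i 's|\(MONGODB_HOST:[[:space:]]*\).*|\1"mongodb.mongodb.svc.cluster.local"|' mongodb.config.yaml
kubectl apply -f mongodb.config.yaml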

To check whether it was successful, run kubectl logs on your factory pods. You should see the following messages:

Successfully Opened ./config/global.json                                                                                                                                                                    
Successfully Opened ./config/template.json                                                                                                                                                                  
{"level":"info","msg":"Running Kerberos Factory on :80","time":"2024-06-05T17:20:19+02:00"}  
