
bitnami apisix v3.5.2 error: worker_events.sock failed (98: Address already in use) #30454

cgonzalezITA opened this issue Nov 14, 2024 · 1 comment


cgonzalezITA commented Nov 14, 2024

Name and Version

bitnami/apisix v3.5.2

What architecture are you using?

amd64

What steps will reproduce the bug?

In this environment

Cluster nodes:
NAME       STATUS                        ROLES           AGE   VERSION
node1   Ready                         control-plane   31d   v1.28.14
node2   Ready                         <none>          31d   v1.28.14
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/calico.yaml

With this config

Chart.yaml

apiVersion: v2
name: apisix
description: Chart holder to deploy the apisix proxy

type: application
version: 0.0.1
dependencies:
  - name: apisix
    condition: apisix.enabled
    version: 3.5.2
    repository: oci://registry-1.docker.io/bitnamicharts

values.yaml:

apisix:
...
  controlPlane:
    enabled: true
    lifecycleHooks:
      postStart:
        exec:
          command:
            - /bin/sh
            - -c
            - |
              sleep 5;
              rm /usr/local/apisix/logs/worker_events.sock

The hook is in place, as the generated control-plane Deployment contains it:

$ kubectl get deploy apisix-control-plane -n apisix -o yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
    - args:
      - -p
      - /usr/local/apisix
      - -g
      - daemon off;
      command:
      - openresty
      image: docker.io/bitnami/apisix:3.11.0-debian-12-r0
      imagePullPolicy: IfNotPresent
      lifecycle:
        postStart:
          exec:
            command:
            - /bin/sh
            - -c
            - |
              sleep 5;
              rm /usr/local/apisix/logs/worker_events.sock
...

Run

helm -n apisix install -f "./Helms/apisix/values.yaml" apisix "./Helms/apisix/"  --create-namespace
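
(The apisix subchart declared in Chart.yaml is pulled from the OCI registry beforehand with the usual dependency step; a minimal sketch of that command, assuming the umbrella chart lives under ./Helms/apisix/:)

$ helm dependency update "./Helms/apisix/"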

See errors

Status of the pods after helm chart deployment:

$ kubectl get pod  -n apisix
---
NAME                                         READY   STATUS             RESTARTS      AGE
apisix-control-plane-9588f78df-jhkrh         0/1     CrashLoopBackOff   1 (12s ago)   88s
apisix-dashboard-66b87d67d6-qtvkp            1/1     Running            0             88s
apisix-data-plane-5869c9d7b9-6t787           0/1     Init:0/2           1 (15s ago)   88s
apisix-etcd-0                                1/1     Running            0             88s
apisix-ingress-controller-5bb7556955-kgltn   0/1     Init:0/2           1 (15s ago)   88s

Full status of the k8s artifacts deployed in the apisix namespace:

$ kubectl get all -o wide -n apisix
NAME                                             READY   STATUS     RESTARTS        AGE   IP               NODE       NOMINATED NODE   READINESS GATES
pod/apisix-control-plane-77ccf6bfd9-92t2t        0/1     Init:0/2   0               13m   <none>           node2   <none>           <none>
pod/apisix-dashboard-66b87d67d6-xq5p9            1/1     Running    0               13m   182.167.160.32   node1   <none>           <none>
pod/apisix-data-plane-6ff94b9587-2pl9h           0/1     Init:0/2   6 (3m16s ago)   13m   182.167.99.125   node2   <none>           <none>
pod/apisix-etcd-0                                1/1     Running    0               13m   182.167.160.34   node1   <none>           <none>
pod/apisix-ingress-controller-686889c889-hsng9   0/1     Init:0/2   6 (3m8s ago)    13m   182.167.160.33   node1   <none>           <none>

NAME                                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
service/apisix-control-plane        ClusterIP      10.102.194.234   <none>        9180/TCP,9280/TCP            13m   app.kubernetes.io/component=control-plane,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix
service/apisix-dashboard            ClusterIP      10.97.223.33     <none>        80/TCP,443/TCP               13m   app.kubernetes.io/component=dashboard,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix
service/apisix-data-plane           LoadBalancer   10.99.14.203     <pending>     80:31845/TCP,443:31486/TCP   13m   app.kubernetes.io/component=data-plane,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix
service/apisix-etcd                 ClusterIP      10.97.33.161     <none>        2379/TCP,2380/TCP            13m   app.kubernetes.io/component=etcd,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=etcd
service/apisix-etcd-headless        ClusterIP      None             <none>        2379/TCP,2380/TCP            13m   app.kubernetes.io/component=etcd,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=etcd
service/apisix-ingress-controller   ClusterIP      10.100.196.209   <none>        80/TCP,443/TCP               13m   app.kubernetes.io/component=ingress-controller,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                  IMAGES                                                           SELECTOR
deployment.apps/apisix-control-plane        0/1     1            0           13m   apisix                      docker.io/bitnami/apisix:3.11.0-debian-12-r0                     app.kubernetes.io/component=control-plane,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix
deployment.apps/apisix-dashboard            1/1     1            1           13m   apisix-dashboard            docker.io/bitnami/apisix-dashboard:3.0.1-debian-12-r46           app.kubernetes.io/component=dashboard,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix
deployment.apps/apisix-data-plane           0/1     1            0           13m   apisix                      docker.io/bitnami/apisix:3.11.0-debian-12-r0                     app.kubernetes.io/component=data-plane,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix
deployment.apps/apisix-ingress-controller   0/1     1            0           13m   apisix-ingress-controller   docker.io/bitnami/apisix-ingress-controller:1.8.3-debian-12-r0   app.kubernetes.io/component=ingress-controller,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix

NAME                                                   DESIRED   CURRENT   READY   AGE   CONTAINERS                  IMAGES                                                           SELECTOR
replicaset.apps/apisix-control-plane-77ccf6bfd9        1         1         0       13m   apisix                      docker.io/bitnami/apisix:3.11.0-debian-12-r0                     app.kubernetes.io/component=control-plane,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix,pod-template-hash=77ccf6bfd9
replicaset.apps/apisix-dashboard-66b87d67d6            1         1         1       13m   apisix-dashboard            docker.io/bitnami/apisix-dashboard:3.0.1-debian-12-r46           app.kubernetes.io/component=dashboard,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix,pod-template-hash=66b87d67d6
replicaset.apps/apisix-data-plane-6ff94b9587           1         1         0       13m   apisix                      docker.io/bitnami/apisix:3.11.0-debian-12-r0                     app.kubernetes.io/component=data-plane,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix,pod-template-hash=6ff94b9587
replicaset.apps/apisix-ingress-controller-686889c889   1         1         0       13m   apisix-ingress-controller   docker.io/bitnami/apisix-ingress-controller:1.8.3-debian-12-r0   app.kubernetes.io/component=ingress-controller,app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix,app.kubernetes.io/part-of=apisix,pod-template-hash=686889c889

NAME                           READY   AGE   CONTAINERS   IMAGES
statefulset.apps/apisix-etcd   1/1     13m   etcd         docker.io/bitnami/etcd:3.5.16-debian-12-r2

Logs of the crashing control-plane:

$ kubectl logs -n apisix -f pod/apisix-control-plane-9588f78df-jhkrh -c wait-for-etcd
curl: (7) Failed to connect to apisix-etcd port 2379 after 1029 ms: Couldn't connect to server
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    45  100    45    0     0  19247      0 --:--:-- --:--:-- --:--:-- 22500
{"etcdserver":"3.5.16","etcdcluster":"3.5.0"}
Connected to http://apisix-etcd:2379
Connection success

$ kubectl logs -n apisix -f pod/apisix-control-plane-9588f78df-jhkrh -c apisix
2024/11/14 09:09:55 [emerg] 1#1: bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
nginx: [emerg] bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
2024/11/14 09:09:55 [emerg] 1#1: bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
nginx: [emerg] bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
...
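
The error suggests the stale worker_events.sock survives the container restart. To check whether /usr/local/apisix/logs is backed by a volume (which would explain the socket still being there on the next start), something like the following can be used; just a diagnostic sketch, with the pod name taken from above:

$ kubectl get pod apisix-control-plane-9588f78df-jhkrh -n apisix \
    -o jsonpath='{.spec.containers[0].volumeMounts}'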

Are you using any custom parameters or values?

Initially, the error appeared with the most basic controlPlane configuration:

apisix:
...
  controlPlane:
    enabled: true

Later, even after applying the workaround proposed in issue "[bitnami/apisix] failed on restart of container", the error is still appearing (an alternative sketch follows the config below):

apisix:
...
  controlPlane:
    enabled: true
    lifecycleHooks:
      postStart:
        exec:
          command:
            - /bin/sh
            - -c
            - |
              sleep 5;
              rm /usr/local/apisix/logs/worker_events.sock
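
As the alternative sketch mentioned above (not yet verified): remove the stale socket before openresty binds it, instead of racing with it from a postStart hook. This assumes the chart exposes command/args overrides for the control plane, as most Bitnami charts do; the openresty arguments are the ones visible in the generated Deployment shown earlier:

apisix:
...
  controlPlane:
    enabled: true
    # Hypothetical override: clean up the stale socket, then exec the same
    # command/args the chart normally runs for this container.
    command:
      - /bin/sh
      - -c
    args:
      - |
        rm -f /usr/local/apisix/logs/worker_events.sock
        exec openresty -p /usr/local/apisix -g 'daemon off;'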

What is the expected behavior?

Correct deployment of all the pods

What do you see instead?

$ kubectl get pod  -n apisix
---
NAME                                         READY   STATUS             RESTARTS      AGE
apisix-control-plane-9588f78df-jhkrh         0/1     CrashLoopBackOff   1 (12s ago)   88s
apisix-dashboard-66b87d67d6-qtvkp            1/1     Running            0             88s
apisix-data-plane-5869c9d7b9-6t787           0/1     Init:0/2           1 (15s ago)   88s
apisix-etcd-0                                1/1     Running            0             88s
apisix-ingress-controller-5bb7556955-kgltn   0/1     Init:0/2           1 (15s ago)   88s

$ kubectl logs -n apisix -f pod/apisix-control-plane-9588f78df-jhkrh -c apisix
2024/11/14 09:09:55 [emerg] 1#1: bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
nginx: [emerg] bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
2024/11/14 09:09:55 [emerg] 1#1: bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
nginx: [emerg] bind() to unix:/usr/local/apisix/logs/worker_events.sock failed (98: Address already in use)
...

Additional information

The same configuration deployed on a different server, on a minikube v1.25.2 instance, works without any problem:

$ minikube version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

carrodher (Member) commented

Hi, the issue may not be directly related to the Bitnami container image/Helm chart, but rather to how the application is being utilized, configured in your specific environment, or tied to a particular scenario that is not easy to reproduce on our side.

If you think that's not the case and want to contribute a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.

Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.

If you have any questions about the application, customizing its content, or technology and infrastructure usage, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.

With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.
