Automount did not work - OpenShift Version 4.16.17 #29237

Open
n00bsi opened this issue Oct 26, 2024 · 0 comments
n00bsi commented Oct 26, 2024

[provide a description of the issue]

Version

[provide output of the openshift version or oc version command]

$ oc version
Client Version: 4.16.17
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.16.17
Kubernetes Version: v1.29.8+632b078

Cluster Version: 4.16.17

Steps To Reproduce

storage-mcp.yaml

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: storage
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,storage]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/storage: ""
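To verify that the pool actually selects the labeled nodes and starts rolling out configuration, something like the following can be used:

$ oc get mcp storage
$ oc describe mcp storage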

auto-mount-machineconfig.yaml

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: storage
  name: 71-mount-storage-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: var-mnt-longhorn.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target
            [Mount]
            # Example mount point, you can change it to where you like for each device.
            Where=/var/mnt/longhorn
            What=/dev/disk/by-label/longhorn
            Options=rw,relatime,discard
            [Install]
            WantedBy=local-fs.target
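To check whether this MachineConfig is picked up and folded into the rendered config of the storage pool, something like:

$ oc get machineconfig | grep 71-mount-storage-worker
$ oc describe mcp storage | grep -i rendered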

$ oc get nodes
NAME   STATUS   ROLES                  AGE   VERSION
cp1    Ready    control-plane,master   17h   v1.29.8+632b078
cp2    Ready    control-plane,master   17h   v1.29.8+632b078
cp3    Ready    control-plane,master   17h   v1.29.8+632b078
wn1    Ready    storage,worker         17h   v1.29.8+632b078
wn2    Ready    storage,worker         17h   v1.29.8+632b078
wn3    Ready    storage,worker         17h   v1.29.8+632b078

  1. Label the disk with longhorn and add the storage role to the worker nodes:
# ls /dev/disk/by-label/longhorn
/dev/disk/by-label/longhorn
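The filesystem label itself would typically be created when formatting the device, for example (assuming the data disk is /dev/sdb; adjust to the actual device):

# mkfs.ext4 -L longhorn /dev/sdb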

oc label node wn1 node-role.kubernetes.io/storage=
oc label node wn2 node-role.kubernetes.io/storage=
oc label node wn3 node-role.kubernetes.io/storage=

  2. oc apply -f storage-mcp.yaml

  3. oc apply -f auto-mount-machineconfig.yaml

oc annotate node wn1 --overwrite node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
oc label node wn1 --overwrite node.longhorn.io/create-default-disk=config

oc annotate node wn2 --overwrite node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
oc label node wn2 --overwrite node.longhorn.io/create-default-disk=config

oc annotate node wn3 --overwrite node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
oc label node wn3 --overwrite node.longhorn.io/create-default-disk=config
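The annotation and label can be verified afterwards, for example:

$ oc get node wn1 -o yaml | grep longhorn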

  4. Log in to all worker nodes and create the directory /var/mnt/longhorn (or create it remotely; see the example after this list).

  5. Reboot each worker node.
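The directory can also be created without an interactive login, for example (assuming node wn1; repeat for wn2 and wn3):

$ oc debug node/wn1 -- chroot /host mkdir -p /var/mnt/longhorn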

Current Result

The disk is not mounted after the reboot.

Expected Result

The disk is mounted automatically at /var/mnt/longhorn.

Additional Information

Manual mount works:


sh-5.1# mount /dev/disk/by-label/longhorn /var/mnt/longhorn/

sh-5.1# df -h /var/mnt/longhorn/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        118G   24K  112G   1% /var/mnt/longhorn
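For comparison, the state of the generated mount unit after a reboot can be checked directly on the node, for example:

sh-5.1# systemctl status var-mnt-longhorn.mount
sh-5.1# journalctl -u var-mnt-longhorn.mount --no-pager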

$ oc get co

machine-config 4.16.17 True False True 18h Failed to resync 4.16.17 because: error during syncRequiredMachineConfigPools: [context deadline exceeded, failed to update clusteroperator: [client rate limiter Wait returned an error: context deadline exceeded, error MachineConfigPool storage is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)]]
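The reason the pool reports the nodes as degraded/unavailable can usually be narrowed down with, for example:

$ oc describe mcp storage
$ oc describe node wn1 | grep machineconfiguration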

What did I do wrong?
