Some help with Marathon 1.11 and MVP CSI: secrets do not seem to be sent #7261

f1-outsourcing opened this issue Dec 31, 2020 · 1 comment

@f1-outsourcing
I have been looking forward to the Mesos update offering this MVP CSI support, mainly to finally be able to use Ceph. But unfortunately I am still not able to get a simple RBD image attached to a container.

I am able to use csilvm by adding the volume like this [2], but cephcsi keeps failing. It looks like the secrets are not being sent to the driver: it keeps complaining about 'stage secrets cannot be nil or empty' [1], even though the config [3] does contain staging secrets. I have also tried using a secrets plugin, doing something like "username": { "secret": "secretpassword" } (see the sketch after config [3] below). Any hints on what I am doing wrong are very welcome!

PS. I have been using the csc command-line tool with this cephcsi driver, and there the secrets are passed correctly, so I do not think the problem lies with the driver itself.

[1]

I1221 21:54:36.932030   10356 utils.go:132] ID: 14 Req-ID: 0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1 GRPC call: /csi.v1.Node/NodeStageVolume
I1221 21:54:36.932302   10356 utils.go:133] ID: 14 Req-ID: 0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1 GRPC request: {"staging_target_path":"/var/lib/mesos/csi/rbd.csi.ceph.io/default/mounts/0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1/staging","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"clusterID":"ceph","pool":"app"},"volume_id":"0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1"}
E1221 21:54:36.932316   10356 utils.go:136] ID: 14 Req-ID: 0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1 GRPC error: rpc error: code = InvalidArgument desc = stage secrets cannot be nil or empty
I1221 21:54:36.976159   10356 utils.go:132] ID: 15 Req-ID: 0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1 GRPC call: /csi.v1.Node/NodeUnstageVolume
I1221 21:54:36.976308   10356 utils.go:133] ID: 15 Req-ID: 0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1 GRPC request: {"staging_target_path":"/var/lib/mesos/csi/rbd.csi.ceph.io/default/mounts/0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1/staging","volume_id":"0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1"}
I1221 21:54:36.976465   10356 nodeserver.go:666] ID: 15 Req-ID: 0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1 failed to find image metadata: missing stash: open /var/lib/mesos/csi/rbd.csi.ceph.io/default/mounts/0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1/staging/image-meta.json: no such file or directory
I1221 21:54:36.976537   10356 utils.go:138] ID: 15 Req-ID: 0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1 GRPC response: {}
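
Note that the logged GRPC request above contains no secrets field at all. Going by the CSI v1 spec, a NodeStageVolume request that actually carries staging secrets should look roughly like this (just a sketch, reusing the key names from my config [3]; the driver may also redact the actual values when logging):

{
  "staging_target_path": "/var/lib/mesos/csi/rbd.csi.ceph.io/default/mounts/0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1/staging",
  "volume_capability": {"AccessType": {"Block": {}}, "access_mode": {"mode": 1}},
  "volume_context": {"clusterID": "ceph", "pool": "app"},
  "secrets": {"username": "userID", "password": "asdfasdfasdfasdfasdfasdf"},
  "volume_id": "0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1"
}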

[3]

"volumes": [
      {
        "containerPath": "xxx",
        "mode": "rw",
        "external": {
          "provider": "csi",
		  "name": "0001-0004-ceph-0000000000000016-7957e938-405a-11eb-bfd0-0050563001a1",
          "options": { 
            "pluginName": "rbd.csi.ceph.io",
            "capability": {
              "accessType": "block",
              "accessMode": "SINGLE_NODE_WRITER",
              "fsType": ""
            },
			"volumeContext": {
              "clusterID": "ceph",
              "pool": "app" 
            },
		"nodeStageSecret": {
              "username": "userID",
              "password": "asdfasdfasdfasdfasdfasdf"
            }
          }
        }
      }
    ]
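
For completeness, the secrets-plugin variant I mentioned looked roughly like this: the nodeStageSecret values reference a Marathon app-level secret instead of holding the literal password (the source path here is just a placeholder; a sketch of what I tried, not a verified working config):

"nodeStageSecret": {
      "username": { "secret": "secretpassword" }
    },
    ...
"secrets": {
      "secretpassword": { "source": "some/secret/path" }
    }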

[2]

"volumes": [
      {
        "containerPath": "xxx",
        "mode": "rw",
        "external": {
          "provider": "csi",
          "name": "LVtestman1",
          "options": { 
            "pluginName": "lvm.csi.mesosphere.io",
            "capability": {
              "accessType": "mount",
              "accessMode": "SINGLE_NODE_WRITER",
              "fsType": "xfs" 
            }

          }
        }
      }
    ]

@f1-outsourcing (Author)

@timcharper

Hi Tim, do you have any idea what this could be, before support for this dies off completely?
