rpc error: code = InvalidArgument desc = failed to get connection: connecting failed: rados: ret=-1, Operation not permitted #4563
@wangchao732 does the ceph user you have created have the required access as per https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md#cephfs?
Thanks, but I get this error message: Error EINVAL: mds capability parse failed, stopped at 'fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi' of 'allow r fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi'
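For reference, a hedged sketch of the user creation along the lines of the linked capabilities doc: each cap string has to be passed as a single quoted argument, and the fsname= syntax in mds caps is only accepted by newer Ceph releases (the doc itself is authoritative; client.cephfs-user is a placeholder name):
# sketch only; adjust client name and fs name, check docs/capabilities.md for the exact caps
ceph auth get-or-create client.cephfs-user \
  mgr 'allow rw' \
  osd 'allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs' \
  mds 'allow r fsname=cephfs path=/volumes, allow rws fsname=cephfs path=/volumes/csi' \
  mon 'allow r fsname=cephfs'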
Have you created the csi subvolumegroup in the filesystem? If not, please create it and then try to create the user.
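If it does not exist yet, it can be created like this (cephfs being the filesystem name used in this thread, csi the default group name ceph-csi uses):
ceph fs subvolumegroup create cephfs csi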
When I finish executing ceph auth, it seems to be created by default, because I see the directory.
ceph fs subvolume ls cephfs
ceph fs subvolume info cephfs volume1 csi
rbd-ceph-csi has the same issue: failed to provision volume with StorageClass "csi-rbd-sc": rpc error: code = Internal desc = failed to get connection: connecting failed: rados: ret=-1, Operation not permitted
I0418 10:12:53.258063 1 utils.go:198] ID: 30 GRPC call: /csi.v1.Identity/Probe
I0418 10:12:53.258141 1 utils.go:199] ID: 30 GRPC request: {}
I0418 10:12:53.258175 1 utils.go:205] ID: 30 GRPC response: {}
I0418 10:13:46.083975 1 utils.go:198] ID: 31 Req-ID: pvc-051787a2-cdad-4e98-9562-01b43ce55a8e GRPC call: /csi.v1.Controller/CreateVolume
I0418 10:13:46.085127 1 utils.go:199] ID: 31 Req-ID: pvc-051787a2-cdad-4e98-9562-01b43ce55a8e GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"pvc-051787a2-cdad-4e98-9562-01b43ce55a8e","parameters":{"clusterID":"80a8efd7-8ed5-4e53-bc5b-f91c56300e99","csi.storage.k8s.io/pv/name":"pvc-051787a2-cdad-4e98-9562-01b43ce55a8e","csi.storage.k8s.io/pvc/name":"bytebase-pvc","csi.storage.k8s.io/pvc/namespace":"bytebase","imageFeatures":"layering","pool":"k8s-store"},"secrets":"stripped","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["discard"]}},"access_mode":{"mode":1}}]}
I0418 10:13:46.085678 1 rbd_util.go:1315] ID: 31 Req-ID: pvc-051787a2-cdad-4e98-9562-01b43ce55a8e setting disableInUseChecks: false image features: [layering] mounter: rbd
E0418 10:13:46.106773 1 controllerserver.go:232] ID: 31 Req-ID: pvc-051787a2-cdad-4e98-9562-01b43ce55a8e failed to connect to volume : failed to get connection: connecting failed: rados: ret=-1, Operation not permitted
E0418 10:13:46.106850 1 utils.go:203] ID: 31 Req-ID: pvc-051787a2-cdad-4e98-9562-01b43ce55a8e GRPC error: rpc error: code = Internal desc = failed to get connection: connecting failed: rados: ret=-1, Operation not permitted
@wangchao732 can you do rados operations with the above ceph users?
I don't know what's going on; no matter what I try I get the same error. I ran:
ceph osd lspools
rados lspools
rados ls -p k8s-store
Executed in the k8s cluster: rados ls -p k8s-store
Please pass:
rados ls -p k8s-store --keyring /etc/ceph/ceph.client.csi-rbd.keyring --name client.csi-rbd
Can you also check if you are able to do write operations in the pool you are planning to use? https://docs.ceph.com/en/latest/man/8/rados/#examples
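A minimal write/stat/cleanup check with the same credentials might look like this (the object name and input file are arbitrary examples):
rados -p k8s-store --keyring /etc/ceph/ceph.client.csi-rbd.keyring --name client.csi-rbd put test-obj /etc/hosts
rados -p k8s-store --keyring /etc/ceph/ceph.client.csi-rbd.keyring --name client.csi-rbd stat test-obj
rados -p k8s-store --keyring /etc/ceph/ceph.client.csi-rbd.keyring --name client.csi-rbd rm test-obj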
The test was successful.
kind: Secret
csi-rbdplugin log:
I0418 12:04:35.713123 1 utils.go:198] ID: 47 Req-ID: pvc-2c763594-8d4f-407c-bbfd-752dcaecb1c9 GRPC call: /csi.v1.Controller/CreateVolume
I0418 12:04:35.713486 1 utils.go:199] ID: 47 Req-ID: pvc-2c763594-8d4f-407c-bbfd-752dcaecb1c9 GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"pvc-2c763594-8d4f-407c-bbfd-752dcaecb1c9","parameters":{"clusterID":"80a8efd7-8ed5-4e53-bc5b-f91c56300e99","csi.storage.k8s.io/pv/name":"pvc-2c763594-8d4f-407c-bbfd-752dcaecb1c9","csi.storage.k8s.io/pvc/name":"bytebase-pvc","csi.storage.k8s.io/pvc/namespace":"bytebase","imageFeatures":"layering","pool":"k8s-store"},"secrets":"stripped","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4","mount_flags":["discard"]}},"access_mode":{"mode":1}}]}
I0418 12:04:35.713771 1 rbd_util.go:1315] ID: 47 Req-ID: pvc-2c763594-8d4f-407c-bbfd-752dcaecb1c9 setting disableInUseChecks: false image features: [layering] mounter: rbd
E0418 12:04:35.750268 1 controllerserver.go:232] ID: 47 Req-ID: pvc-2c763594-8d4f-407c-bbfd-752dcaecb1c9 failed to connect to volume : failed to get connection: connecting failed: rados: ret=-1, Operation not permitted
E0418 12:04:35.750355 1 utils.go:203] ID: 47 Req-ID: pvc-2c763594-8d4f-407c-bbfd-752dcaecb1c9 GRPC error: rpc error: code = Internal desc = failed to get connection: connecting failed: rados: ret=-1, Operation not permitted
Is there a way you can check the Ceph MON/OSD logs for rejected connection requests? Maybe the problem is not with the credentials, but with the network configuration of the pods/nodes? You can also try the manual commands from within the csi-rbdplugin container of a csi-rbdplugin-provisioner pod.
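Something along these lines, assuming the default Helm naming for the provisioner deployment and its csi-rbdplugin container (names may differ per release; the key and monitor address are the ones from this thread):
kubectl -n ceph-csi-rbd exec -it deploy/ceph-csi-rbd-provisioner -c csi-rbdplugin -- \
  rados lspools --id csi-rbd --key AQCo9SBmRGZgMxAA1baxl65ra6D64RQQ+mEnfg== -m 192.168.13.180:6789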
Oh, ceph-mon.log shows an error, but the entry client.csi-rbd exists.
2024-04-19 17:25:30.090 7f6b31027700 0 cephx server client.csi-rbd : couldn't find entity name: client.csi-rbd
ceph auth list | grep client.csi-rbd
client.csi-rbd
Sorry, I missed it: the adminID should contain only the base64 encoding of admin, without the client. prefix.
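For example (an illustrative sketch; -n avoids a trailing newline sneaking into the encoded value):
echo -n 'admin' | base64     # adminID: base64 of "admin", not "client.admin"
echo -n 'csi-rbd' | base64   # userID for the rbd user, likewise without the prefix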
In the container:
sh-4.4# ceph
Yes, the base64 value has been modified to not include client., but the issue is still being encountered.
You need to provide
Where is it configured?
sh-4.4# rados ls -p=k8s-store --user=csi-rbd --key=AQCo9SBmRGZgMxAA1baxl65ra6D64RQQ+mEnfg== --no-mon-config
cephx server client.csi-rbd: unexpected key: req.key=a128c0dd1c3cd379 expected_key=c3c8ea50fbc36e10
This is the exact problem. Please check with Ceph for it; I am not able to help with debugging this issue, as nothing seems to be wrong with the CSI side.
@wangchao732 I think you don't need to pass the encrypted key; can you pass the key without base64 encoding? It should be the key you get from the Ceph keyring.
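For reference, the plain (non-base64) key for this user can be read directly from the cluster:
ceph auth get-key client.csi-rbd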
It doesn't seem to be the problem; the same error is still reported when the command is executed in the Ceph cluster.
[root@Bj13-Ceph01-Dev ceph]# rados ls -p=k8s-store --user=csi-rbd --key=AQCo9SBmRGZgMxAA1baxl65ra6D64RQQ+mEnfg== --no-mon-config
The keyring connection is configured in ceph.conf and the connection is successful.
sh-4.4# rados ls -p k8s-store --keyring /etc/ceph/keyring --name client.csi-rbd
@Madhu-1 Can you help me take a look?
The above looks good. Can you try to create a PVC now, and also paste the kubectl YAML output of the cephfs and rbd secrets?
The problem remains: failed to get connection: connecting failed: rados: ret=-1, Operation not permitted.
# kubectl get configmap ceph-config -n ceph-csi-rbd -o yaml
# kubectl get configmap ceph-csi-config -n ceph-csi-rbd -o yaml
apiVersion: v1
$ echo QVFBL3RTaG03WGtySHhBQWV4M2xVdkFGMEtkeE1aQkMxZEd1SWc9PQo= | base64 -d
$ echo Y3NpLXJiZAo= | base64 -d
Is this the right key for the csi-rbd user?
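Worth noting (an observation, not confirmed as the cause): both encoded values above end in ...o=, meaning the decoded strings carry a trailing newline, which is what a plain echo without -n produces; a cephx ID or key with a stray newline may fail to match any entity. A quick check:
echo Y3NpLXJiZAo= | base64 -d | od -c     # shows "c s i - r b d \n"
echo -n 'csi-rbd' | base64                # Y3NpLXJiZA== (no trailing newline)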
Kubernetes 1.22.2: to create the secret you must base64-encode the values, otherwise you get an error: error decoding from json: illegal base64 data at input byte 3.
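Letting kubectl do the encoding sidesteps manual base64 and stray newlines; a sketch with assumed secret and namespace names, using the userID/userKey key names from the ceph-csi rbd examples:
kubectl -n ceph-csi-rbd create secret generic csi-rbd-secret \
  --from-literal=userID=csi-rbd \
  --from-literal=userKey="$(ceph auth get-key client.csi-rbd)"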
Yes, I rebuilt it later.
Can you give permission to the mgr, or use the csi-rbd-provisioner user in the secret, and see if it works?
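A rough sketch of such a dedicated provisioner user (the exact caps in docs/capabilities.md are authoritative and newer docs add further mon caps; the pool name is the one from this thread):
ceph auth get-or-create client.csi-rbd-provisioner \
  mon 'profile rbd' \
  mgr 'allow rw' \
  osd 'profile rbd pool=k8s-store'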
That key is expected to work, as it is working on another cluster. I am out of ideas, sorry.
It's okay, I'm also puzzled. My guess is that there may be an exception when getting the userID/userKey to connect to the cluster when calling the Ceph API, but I don't have any proof for the time being.
cephcsi: v3.11.0
csi-provisioner: v4.0.0
kubernetes: v1.22.12
[csi-cephfs-secret]
adminID: client.admin (base64)
adminKey: xxx (base64)
ceph fs ls
name: cephfs, metadata pool: store_metadata, data pools: [store-file ]
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-cephfs-sc
  labels:
    app: ceph-csi-cephfs
    app.kubernetes.io/managed-by: Helm
    chart: ceph-csi-cephfs-3-canary
    heritage: Helm
    release: ceph-csi-cephfs
  annotations:
    kubesphere.io/creator: admin
    meta.helm.sh/release-name: ceph-csi-cephfs
    meta.helm.sh/release-namespace: ceph-csi-cephfs
    storageclass.kubesphere.io/allow-clone: 'true'
    storageclass.kubesphere.io/allow-snapshot: 'true'
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: xxx
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  fsName: cephfs
  pool: store-file
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
ConfigMap
apiVersion: v1
data:
  config.json: '[{"clusterID": "80a8efd7-8ed5-4e53-bc5b-xxxx","monitors":
    ["192.168.13.180:6789","192.168.13.181:6789","192.168.13.182:6789"]}]'
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ceph-csi-cephfs
    meta.helm.sh/release-namespace: ceph-csi-cephfs
  creationTimestamp: "2024-04-17T02:58:01Z"
  labels:
    app: ceph-csi-cephfs
    app.kubernetes.io/managed-by: Helm
    chart: ceph-csi-cephfs-3.11.0
    component: provisioner
    heritage: Helm
    release: ceph-csi-cephfs
  name: ceph-csi-config
  namespace: ceph-csi-cephfs
  resourceVersion: "100022091"
  selfLink: /api/v1/namespaces/ceph-csi-cephfs/configmaps/ceph-csi-config
  uid: 40ae7717-c85a-44eb-b0a1-3652b3d4dfe0
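One more thing these manifests make easy to cross-check: the clusterID in the StorageClass must match an entry in this ConfigMap, and by convention that is the Ceph cluster fsid. For example:
ceph fsid
kubectl -n ceph-csi-cephfs get configmap ceph-csi-config -o jsonpath='{.data.config\.json}'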
Create PVC error:
Name:"bytebase-pvc", UID:"91dc9df3-e611-44e5-8191-b18841edabf1", APIVersion:"v1", ResourceVersion:"100120810", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "csi-cephfs-sc": rpc error: code = InvalidArgument desc = failed to get connection: connecting failed: rados: ret=-1, Operation not permitted