# Unable to Mount volume at Pod #51
I just checked `sudo systemctl show --property=MountFlags docker.service`, which returns `MountFlags=`, i.e. no value has been set for MountFlags. Could my issue be here? And if so, how do I change this?
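If it turns out shared mount propagation is needed, one common way to set it is a systemd drop-in for `docker.service`. This is a sketch under the assumption that your distro runs Docker via systemd; the file name and the `shared` value are illustrative, not taken from this thread:

```ini
# /etc/systemd/system/docker.service.d/mount-flags.conf  (hypothetical path)
# Sets mount propagation for the Docker daemon's mount namespace.
[Service]
MountFlags=shared
```

After creating the file, run `sudo systemctl daemon-reload && sudo systemctl restart docker` for the change to take effect.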
I tried the following:
Seems to have no effect so far.
I don't think the cause here is the Docker MountFlags; that would result in a different error. Even though DNS resolution seems to work in the provisioner pod, the biggest indication here is that you get a DNS error in the mounter (which runs in a different Pod). Can you try configuring the endpoint with the service IP instead of the DNS name, just to see if that works? Then we can really rule out a DNS issue.
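For that experiment, the service's ClusterIP can be looked up with kubectl. A sketch, assuming (as elsewhere in this thread) that the minio service is named `filelake` in `kube-system`:

```shell
# Print the ClusterIP of the (assumed) minio service backing the S3 endpoint.
kubectl -n kube-system get svc filelake -o jsonpath='{.spec.clusterIP}'
```

That IP (plus the service port) then goes into the endpoint configuration in place of the DNS name.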
Hello again, thanks for your quick reply. This is the original URL: This is the IP from the K8s CIDR: Result: works like a charm! But why do I get a DNS resolution error just here? I would expect to be able to resolve the IP behind the DNS name like for any other internal K8s service. Many thanks in advance
Really hard to say; the driver should not be messing with DNS. Can you exec into one of the
Same for me xD. No clue where to look. I tried both rclone and s3fs, with exactly the same issue and behavior. Thanks so far for your support :)
Hi!

```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-s3
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-s3
  template:
    metadata:
      labels:
        app: csi-s3
    spec:
      hostNetwork: true
...
```

It seems that when `hostNetwork` is enabled, we should also include the dnsPolicy `ClusterFirstWithHostNet`, so we can access local cluster services together with external services. (Although I'm not sure why the DaemonSet is configured with `hostNetwork: true` in the first place...) So the DaemonSet definition should be:

```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-s3
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-s3
  template:
    metadata:
      labels:
        app: csi-s3
    spec:
      hostNetwork: true
      dnsPolicy: "ClusterFirstWithHostNet"
...
```

More info: check #76
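The effect of `dnsPolicy` can be verified directly: with `hostNetwork: true` and the default policy, the pod inherits the node's `/etc/resolv.conf`, which typically cannot resolve `*.svc.cluster.local` names, while `ClusterFirstWithHostNet` points it back at the cluster DNS. A sketch, with the label and namespace taken from the DaemonSet above (`nslookup` assumes the image ships it; `getent hosts` is a common fallback):

```shell
# Pick one csi-s3 pod and inspect which resolver it actually uses.
POD=$(kubectl -n kube-system get pods -l app=csi-s3 -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$POD" -- cat /etc/resolv.conf
# Then check whether cluster-internal names resolve from inside it:
kubectl -n kube-system exec "$POD" -- nslookup filelake.kube-system.svc.cluster.local
```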
I had the same problem. I looked at the logs of the

```shell
kubectl -n kube-system set image statefulset/csi-attacher-s3 csi-attacher=quay.io/k8scsi/csi-attacher:canary
```

Next I got a permission error:

```
v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:csi-attacher-sa" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope.
```

I tried to modify the role bindings, but I couldn't find the right combination, so I ended up giving the `csi-attacher-sa` service account cluster-admin privileges as shown below:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-all
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
```
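Binding cluster-admin works, but it grants far more than the attacher needs. A less drastic sketch is to grant only the verbs the error message complains about; the exact rule set the external-attacher needs varies by version, so treat the names and verbs below as illustrative and check the upstream RBAC manifest for your release:

```yaml
# Hypothetical minimal grant for the "cannot list volumeattachments" error.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-volumeattachments
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-volumeattachments
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: csi-attacher-volumeattachments
```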
Hey folks,

maybe someone can give me a hint here. For testing purposes I use minio as the S3 provider; creating and attaching a PVC works fine, but I'm unable to mount the volume in a given Pod:

I'm aware that the error says the host is not resolvable, but the funny thing is that I can reach the URL `filelake.kube-system.svc.cluster.local` from every Pod in my cluster, and DNS resolution seems to work as expected...

Looking at the PersistentVolumeClaim itself also seems fine to me.

What could be the cause of this issue? All logs seem fine, and a bucket also gets provisioned in minio. Everything seems to work except the actual mount on the Pod side.

Thanks in advance :D