
Cannot mount ZFS dataset from encrypted zpool due to read-only state after node-failure chaos #300

Closed
w3aman opened this issue Apr 1, 2021 · 2 comments

@w3aman (Contributor) commented Apr 1, 2021

What steps did you take and what happened:
(k8s 1.20, Ubuntu 20.04, zfs-driver:ci image)
I created an encrypted zpool manually on the node with this command:
sudo zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt zfs-test-pool /dev/sdb
An application is using a dataset on this pool. If we power off the VM for some time (say 5 minutes, longer than the default pod eviction timeout) and then power the node back on, the pod gets stuck in ContainerCreating with the following events:

Events:
  Type     Reason       Age   From               Message
  ----     ------       ----  ----               -------
  Normal   Scheduled    74s   default-scheduler  Successfully assigned default/bb-enc-786ccfb7f7-m922g to lvm-node1
  Warning  FailedMount  42s   kubelet            MountVolume.SetUp failed for volume "pvc-249cd9e1-27c0-43ef-bcfe-049ed0fcd213" : rpc error: code = Internal desc = rpc error: code = Internal desc = dataset: mount failed err : mount: /var/lib/kubelet/pods/99c0286d-35a0-4f01-9e30-e824a336df80/volumes/kubernetes.io~csi/pvc-249cd9e1-27c0-43ef-bcfe-049ed0fcd213/mount: cannot mount zfs-test-pool/pvc-249cd9e1-27c0-43ef-bcfe-049ed0fcd213 read-only.
@almereyda commented Apr 5, 2021

After restarting the node, did you also repeat the manual intervention necessary for encrypted ZFS, and load the key's passphrase into the dataset?

In my understanding, after doing that, the deployment should eventually proceed.

Alternatively, one could use tang and clevis to provide the keys needed for automatic unlocking. Ref.:

@w3aman (Contributor, Author) commented Apr 6, 2021

Yes, that worked. Thanks @almereyda.
When the node was powered off, the keystatus became unavailable:

k8s@node3:~$ zfs get keystatus
NAME                                                    PROPERTY   VALUE        SOURCE
zfs-test-pool                                           keystatus  unavailable  -
zfs-test-pool/pvc-64a095a1-4199-4d26-8719-da5d3374933b  keystatus  unavailable  -

Running the following command reloaded the keys. The zpool at the root was the encryption root here:

zfs load-key -L prompt zfs-test-pool
k8s@node3:~$ zfs get keystatus
NAME                                                    PROPERTY   VALUE        SOURCE
zfs-test-pool                                           keystatus  available    -
zfs-test-pool/pvc-64a095a1-4199-4d26-8719-da5d3374933b  keystatus  available    -
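The recovery described above can be sketched as the following command sequence. This is a minimal sketch assuming the pool name `zfs-test-pool` from this thread and an interactive passphrase prompt (`keylocation=prompt`); adjust names and key locations for your environment. These commands require a running ZFS system, so they are shown for reference rather than as a runnable test:

```shell
# After the node reboots, the passphrase for the encryption root is not
# loaded, so the pool's datasets report keystatus=unavailable and cannot
# be mounted by the CSI driver.

# Check which datasets are missing their keys
zfs get -r keystatus zfs-test-pool

# Load the key for the encryption root (prompts for the passphrase)
sudo zfs load-key -L prompt zfs-test-pool

# Mount any datasets that failed to mount while the key was unavailable
sudo zfs mount -a

# Verify: keystatus should now be "available" for the pool and its children
zfs get -r keystatus zfs-test-pool
```

Once the keys are available, the kubelet's next mount retry for the pending pod should succeed without further intervention.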

@pawanpraka1 pawanpraka1 added this to the v1.6.0 milestone Apr 6, 2021
@w3aman w3aman closed this as completed Apr 6, 2021