[Fix] The zfs binary not found #641

Open
wants to merge 2 commits into master
Conversation


@jmauro jmauro commented Apr 22, 2021

Hello,

I am using a vanilla Debian testing distribution and trying to sync a dataset between servers as a non-root user. When I try, here is the problem:

root@XXXX:~# /data/repository/sanoid/syncoid --debug  --no-privilege-elevation  --sshkey=.ssh/backuser data_pool/data [email protected]:zfs_pool/backup/homeland
DEBUG: SSHCMD: ssh    -i .ssh/backuser
DEBUG: checking availability of lzop on source...
DEBUG: checking availability of lzop on target...
DEBUG: checking availability of lzop on local machine...
DEBUG: checking availability of mbuffer on source...
DEBUG: checking availability of mbuffer on target...
DEBUG: checking availability of pv on local machine...
DEBUG: checking availability of zfs resume feature on source...
DEBUG: checking availability of zfs resume feature on target...
WARN: ZFS resume feature not available on target machine - sync will continue without resume support.
DEBUG: syncing source data_pool/data to target zfs_pool/backup/homeland.
DEBUG: getting current value of syncoid:sync on data_pool/data...
  zfs get -H syncoid:sync 'data_pool/data'
DEBUG: checking to see if zfs_pool/backup/homeland on ssh    -i .ssh/backuser -S /tmp/[email protected] [email protected] is already in zfs receive using ssh    -i .ssh/backuser -S /tmp/[email protected] [email protected] ps -Ao args= ...
DEBUG: checking to see if target filesystem exists using "ssh    -i .ssh/backuser -S /tmp/[email protected] [email protected]  zfs get -H name ''"'"'zfs_pool/backup/homeland'"'"'' 2>&1 |"...
DEBUG: getting list of snapshots on data_pool/data using   zfs get -Hpd 1 -t snapshot guid,creation 'data_pool/data' |...
DEBUG: creating sync snapshot using "  zfs snapshot 'data_pool/data'@syncoid_XXXX_2021-04-22:13:08:41-GMT00:00
"...
DEBUG: target zfs_pool/backup/homeland does not exist.  Finding oldest available snapshot on source data_pool/data ...
DEBUG: getting estimated transfer size from source  using "  zfs send  -nvP 'data_pool/data@syncoid_XXXX_2021-04-20:23:10:22' 2>&1 |"...
DEBUG: sendsize = 90135876864
INFO: Sending oldest full snapshot data_pool/data@syncoid_XXXX_2021-04-20:23:10:22 (~ 83.9 GB) to new target filesystem:
DEBUG:  zfs send  'data_pool/data'@'syncoid_XXXX_2021-04-20:23:10:22' | pv -p -t -e -r -b -s 90135876864 | lzop  | mbuffer  -q -s 128k -m 16M 2>/dev/null | ssh    -i .ssh/backuser -S /tmp/[email protected] [email protected] ' mbuffer  -q -s 128k -m 16M 2>/dev/null | lzop -dfc |  zfs receive   -F '"'"'zfs_pool/backup/homeland'"'"''
DEBUG: checking to see if zfs_pool/backup/homeland on ssh    -i .ssh/backuser -S /tmp/[email protected] [email protected] is already in zfs receive using ssh    -i .ssh/backuser -S /tmp/[email protected] [email protected] ps -Ao args= ...
bash: line 1: zfs: command not found
mbuffer: error: outputThread: error writing to <stdout> at offset 0x218000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
23.8MiB 0:00:00 [40.3MiB/s] [>                                                                                                                                                         ]  0%            CRITICAL ERROR:  zfs send  'data_pool/data'@'syncoid_XXXX_2021-04-20:23:10:22' | pv -p -t -e -r -b -s 90135876864 | lzop  | mbuffer  -q -s 128k -m 16M 2>/dev/null | ssh    -i .ssh/backuser -S /tmp/[email protected] [email protected] ' mbuffer  -q -s 128k -m 16M 2>/dev/null | lzop -dfc |  zfs receive   -F '"'"'zfs_pool/backup/homeland'"'"'' failed: 32512 at /data/repository/sanoid/syncoid line 496.
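The `zfs: command not found` line comes from the remote side of the pipe: a non-interactive ssh session typically gets a minimal PATH that omits `/sbin` and `/usr/sbin`, where Debian installs the zfs utilities. The failure mode can be reproduced locally with a stripped-down PATH (a minimal sketch; the exact PATH the remote sshd hands out is an assumption):

```shell
# Simulate a non-interactive shell whose PATH lacks /sbin and /usr/sbin,
# the way sshd may set it up, and look for zfs there.
env -i PATH=/usr/bin:/bin sh -c 'command -v zfs || echo "zfs: command not found"'
```

On a host where zfs lives only under `/sbin` or `/usr/sbin`, this prints the same "command not found" seen in the log above.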

And here is the environment I am using:

root@hyoga:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux bullseye/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@hyoga:~# dpkg -l sanoid
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-==========================================================
ii  sanoid         2.0.3-4      all          Policy-driven ZFS snapshot management and replication tool

STATE:
I am using sanoid on Debian 'bullseye/sid'. While trying
to sync some ZFS datasets between servers as a regular
user, the zfs binary is not found on the remote server.

FIX:
A new function, 'getbinarypath', is defined to resolve
the full path of the 'zfs' binary on the local host
as well as on the remote host.

Signed-off-by: Jeremy MAURO <[email protected]>
- Create new variables holding the full path to the zfs
  binary for the local and the remote host.
- Update 'getchilddatasets' to use the binary full path
- Update 'getzfsvalue' to use the binary full path
- Update 'getsnaps' to use the binary full path
- Update 'getbookmarks' to use the binary full path
- Update 'getsnapsfallback' to use the binary full path
- Update 'newsyncsnap' to use the binary full path
- Update 'pruneoldsyncsnaps' to use the binary full path
- Update 'resetreceivestate' to use the binary full path
- Update 'setzfsvalue' to use the binary full path
- Update 'targetexists' to use the binary full path

Signed-off-by: Jeremy MAURO <[email protected]>
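The approach described in the commit messages above can be sketched as follows (a minimal illustration, not the actual syncoid implementation; the fallback directories and the argument convention are assumptions):

```shell
# Resolve the full path of a binary, locally or over ssh, so that later
# commands do not depend on the remote session's PATH.
#   $1 = binary name (e.g. zfs)
#   $2 = optional ssh prefix (e.g. "ssh -i .ssh/backuser user@host");
#        empty = look up on the local host
getbinarypath() {
    # Try PATH first, then the sbin directories a minimal PATH may omit.
    $2 command -v "$1" 2>/dev/null \
        || $2 ls "/sbin/$1" "/usr/sbin/$1" 2>/dev/null | head -n1
}

zfsbin=$(getbinarypath zfs "")
echo "local zfs: ${zfsbin:-not found}"
# remotezfs=$(getbinarypath zfs "ssh -i .ssh/backuser [email protected]")
```

Each helper that previously invoked a bare `zfs` would then use the resolved path instead, on both ends of the pipeline.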