znapzend doesn't ship snapshots to remote DST, if a clone is present on the DST dataset #116
Comments
to be honest, we haven't paid any attention to clones, as this has not been a requirement yet. hope to find some time to have a look at it. in the meantime you can try znapzend with `--features=oracleMode`.
You meant to pass --features=oracleMode in the startup script? I'll give it a spin…
Bummer, this didn't work, unfortunately. Seems that I will have to work around this issue.
hmm, you can always do a `--noaction` run with `--debug`; the output will show you all the invoked zfs commands. you can execute them manually to see where it fails. do you run znapzend with log level debug? what does the log tell you about it?
I think that the default debug level is set to debug, no? Anyway, I have just fired up znapzend from the terminal:

    root@nfsvmpool01:/opt/VSM/InstallAgent# znapzend --debug --noaction --runonce=sasTank
    zfs snapshot sasTank/nfsvmpool01sas@2014-12-16-220422
    [Tue Dec 16 22:04:22 2014] [debug] snapshot worker for sasTank/nfsvmpool01sas done (2851)
    zfs list -H -o name -t snapshot -s creation -d 1 sasTank/nfsvmpool01sas
    ssh -o Compression=yes -o CompressionLevel=1 -o Cipher=arcfour -o batchMode=yes -o ConnectTimeout=30 [email protected] zfs list -H -o name -t snapshot -s creation -d 1 sasTank/nfsvmpool01sas
    ssh -o Compression=yes -o CompressionLevel=1 -o Cipher=arcfour -o batchMode=yes -o ConnectTimeout=30 [email protected] zfs list -H -o name -t snapshot -s creation -d 1 sasTank/nfsvmpool01sas
    [Tue Dec 16 22:04:23 2014] [debug] cleaning up snapshots on [email protected]:sasTank/nfsvmpool01sas
    zfs list -H -o name -t snapshot -s creation -d 1 sasTank/nfsvmpool01sas
    [Tue Dec 16 22:04:23 2014] [debug] cleaning up snapshots on sasTank/nfsvmpool01sas
    zfs destroy sasTank/nfsvmpool01sas@2014-12-15-200000
    [Tue Dec 16 22:04:23 2014] [info] done with backupset sasTank/nfsvmpool01sas in 1 seconds

I can't actually see which zfs command causes this issue, as none beyond these gets output to the terminal. The zfs list commands do not cause any issues on either side.
did you run the commands above manually? alternatively you can redo a run w/o noaction and see where it fails. w/o noaction you'll get error messages from zfs if something goes wrong...
Well… it seems that the receiver reports that the snapshot the sender wants to ship actually already exists, which is not the case, however:

    root@nfsvmpool01:/opt/VSM/InstallAgent# zfs list -r sasTank

On the destination:
zfs recv will fail if snapshot(s) exist on the destination which do not exist on the source. i don't know if this only happens when recv is used with the `-F` flag; you could try the send/recv manually without it.
if that does not work either, there is nothing znapzend can do about it, since this is a limitation of zfs.
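For anyone trying to reproduce this outside of znapzend, a minimal local sketch of the situation might look like the following; the pool/dataset names `tank`/`backup` are made up, default mountpoints are assumed, and the exact error wording depends on the zfs version in use:

```sh
# seed source and destination with a common snapshot
zfs create tank/data
zfs snapshot tank/data@base
zfs send tank/data@base | zfs recv backup/data

# simulate local activity on the destination (e.g. a file-based backup
# touching the mounted dataset)
touch /backup/data/marker

# a plain incremental receive now refuses the stream because the
# destination head was modified since the common snapshot
zfs snapshot tank/data@next
zfs send -i @base tank/data@next | zfs recv backup/data
# -> error along the lines of "destination has been modified since most recent snapshot"

# recv -F forces a rollback of the destination before receiving,
# discarding whatever was changed there in the meantime
zfs send -i @base tank/data@next | zfs recv -F backup/data
```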
I dug a bit into my own backup setup, where I do have interwoven snapshots: some of them are created locally and the updates are actually shipped via ssh from a remote source. While I was looking at my old code, I remembered having the same issue. zfs recv -F would roll back the destination dataset, which I didn't want, and just sending incremental snaps along with locally created ones led to the error that the destination dataset had been modified and thus the incremental snapshot could not be applied. I will give that a try in the afternoon, but I am quite confident that this will do it for me. All I don't know now is how to get the -F out of ZFS.pm in a way that I can test this using znapzend. Maybe you can help me out with that and I'll take it for a ride…
this should do the trick: hadfl@14cbaeb
Thanks, I'll give it a try and report back.
I just tried without -F and on a write-protected dataset… this way, znapzend keeps on shipping its snapshots while other snapshots may exist at the same time. If the destination dataset is not set to readonly, then ZFS would report that the destination dataset has been modified since the last (z|s)napshot. I'd suggest keeping -F when transferring the initial snapshot to the destination, but omitting it, or making it configurable, on subsequent zfs send/recvs. That way, you are able to pull off regular file-based backups from a snapshot. However, creating a clone does break this again, but maybe I can get away with a snapshot that doesn't change its name. Will keep you posted.
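A rough sketch of that scheme, with a placeholder backup host and destination dataset, a hypothetical later snapshot name, and the source dataset taken from the log above:

```sh
# initial transfer: -F is acceptable here, it (re)creates the destination
# as an exact copy of the source snapshot
zfs send sasTank/nfsvmpool01sas@2014-12-16-220422 | \
  ssh backuphost zfs recv -F backupTank/nfsvmpool01sas

# keep the destination head from being modified between receives, so
# that plain (non -F) incremental receives keep succeeding
ssh backuphost zfs set readonly=on backupTank/nfsvmpool01sas

# subsequent runs: incremental send without -F, leaving snapshots
# created locally on the destination untouched
zfs send -i @2014-12-16-220422 sasTank/nfsvmpool01sas@2014-12-17-220000 | \
  ssh backuphost zfs recv backupTank/nfsvmpool01sas
```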
This works when performed with a little caution: one must make sure that the "off-side" snapshot doesn't get destroyed prior to znapzend shipping the next one. If the "off-side" snapshot gets destroyed, the former snapshot gets altered, and if znapzend relies on that one, zfs recv will yield the "dataset has been modified" error and one would have to roll back the destination to the latest znapzend snapshot. However, this will work for me, as I am able to make sure that there is always at least one "znapshot" between the refreshes of my "backup snapshots", and without -F znapzend will happily ship all its incremental snaps to the destination.
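A hedged sketch of that ordering rule, meant to run on the destination host; the dataset name and the `backup-current` snapshot name are assumptions for illustration. It only rotates the "off-side" snapshot when a newer (znapzend) snapshot has arrived since the last rotation:

```sh
#!/bin/sh
# rotate the "off-side" backup snapshot only if a newer snapshot exists,
# so the chain znapzend relies on is never disturbed right before a receive
DS=backupTank/nfsvmpool01sas
OFFSIDE=backup-current

# newest snapshot that is not the off-side one, and newest snapshot overall
newest_znap=$(zfs list -H -o name -t snapshot -s creation -d 1 "$DS" \
  | grep -v "@${OFFSIDE}\$" | tail -1)
newest_any=$(zfs list -H -o name -t snapshot -s creation -d 1 "$DS" | tail -1)

if [ "$newest_znap" = "$newest_any" ]; then
  # a znapzend snapshot is the newest one, safe to refresh the off-side snapshot
  zfs destroy "${DS}@${OFFSIDE}" 2>/dev/null
  zfs snapshot "${DS}@${OFFSIDE}"
else
  echo "off-side snapshot is still the newest one, skipping refresh" >&2
fi
```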
I have verified that the setup I worked out works when I run znapzend manually from the terminal, but when I waited for the next scheduled run, I noticed that the two datasets in question were not updated. I then re-checked with two manually run znapzend invocations, which both worked as expected. I have now restarted znapzend via svcadm, but I am wondering how that could be. Is a daemonized znapzend not picking up the changes I made to ZFS.pm?
ZFS.pm only gets reloaded when you restart znapzend
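For reference, on an illumos/SMF system that restart would look roughly like this; the short service name and the log path depend on how znapzend was packaged and imported, so treat them as assumptions:

```sh
# restart the daemonized znapzend so the patched ZFS.pm is loaded
svcadm restart znapzend        # or the full FMRI, e.g. svc:/application/znapzend:default

# check that the service and its process came back up
svcs -p znapzend

# follow the service log (assumed default SMF log location)
tail -f /var/svc/log/application-znapzend:default.log
```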
Indeed… ;) But now that that has been done, it's working as expected:

    admin@nfsvmpool02:/export/home/admin$ zfs list -r sasTank

As you can see, znapzend is now happily sending its snapshots without being messed up by the locally generated snapshot.
i think the `-F` was mostly there to be on the safe side; now that we know it can work without it, we could omit the `-F`. the fix should also cover proper handling of foreign snapshots on the source side, as znapzend always uses the most recent snapshots for its send/recv operations. there will be a fix, i just can't promise when that will happen...
I think the -F should be sent when initially trying to ship a snapshot, especially if you are trying to recreate exactly the same dataset structure. It's the same with my own scripts, where initial snapshots had to be sent using -F, as otherwise ZFS would not create them on an existing zpool. znapzend is now humming away just nicely, and it is giving me the advantage of using mbuffer, which was something I backed off from when I created my own solution.
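For comparison, doing the same initial transfer by hand with mbuffer in the path would look roughly like the following; the port, buffer sizes, backup host and destination dataset are arbitrary placeholders, and znapzend wires this up itself when mbuffer is configured for the destination:

```sh
# on the receiving host: listen on a port and feed the stream into zfs recv
mbuffer -s 128k -m 1G -I 9090 | zfs recv -F backupTank/nfsvmpool01sas

# on the sending host: initial full send, buffered over the network
zfs send sasTank/nfsvmpool01sas@2014-12-16-220422 | \
  mbuffer -s 128k -m 1G -O backuphost:9090
```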
Any progress with this bug? One of my backup servers is located on a network link which sometimes fails, and unfortunately it seems to prevent znapzend from working properly :(
I have a laptop which can't always talk to the backup server. I wish znapzend could somehow automatically do something other than just failing here.
It seems that znapzend tries to clear out all other snapshots on the destination which don't "belong" to its backup plan. The issue is that I am using a clone to back up files from the NFS export, and those file-based backups take considerably longer than the interval is set to. I only found that out through trial and error, though.
If a clone is present on the destination, the removal of the underlying snapshot is prevented, and trying to remove it will result in an error from zfs destroy. However, znapzend then bails out and doesn't even attempt to continue with its actions according to its backup plan.
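The destroy failure is easy to reproduce by hand; a minimal sketch (clone name made up, other names borrowed from the log above) of what znapzend runs into during its cleanup:

```sh
# a clone on the destination pins the snapshot it was created from
zfs clone backupTank/nfsvmpool01sas@2014-12-15-200000 backupTank/nfscopy

# znapzend's cleanup then tries to expire that snapshot and fails
zfs destroy backupTank/nfsvmpool01sas@2014-12-15-200000
# -> error along the lines of "cannot destroy: snapshot has dependent clones"

# finding the offender: a clone's origin property names the pinned snapshot
zfs list -t filesystem -o name,origin -r backupTank
```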
What is the reasoning behind this? I don't see where an existing clone would interfere with znapzend's backup in any way.
Having the clone on the source also doesn't do any good, since znapzend seems not capable of determining that the dataset it's about to chew on is actually a clone and thus starts to perform a complete zfs send/recv to the destination.
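Checking whether a dataset is a clone is a one-liner, so detecting this on the source side seems feasible; a sketch, using the source dataset name from the log above:

```sh
# a clone reports its parent snapshot in the origin property,
# ordinary datasets report "-"
origin=$(zfs get -H -o value origin sasTank/nfsvmpool01sas)
if [ "$origin" != "-" ]; then
  echo "sasTank/nfsvmpool01sas is a clone of $origin" >&2
fi
```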
-Stephan