hangs on sftp transfer #14
Comments
Here is my sample backup_test.rb:

```ruby
safe do
  local :path => "/backup/:kind/:id"

  sftp do
    host "192.168.50.239"
    user "foo"
    password "bar123"
  end

  keep do
    local 20
    sftp 20
  end

  pgdump do
    options "-i -x -O" # -i => ignore version, -x => do not dump privileges (grant/revoke), -O => skip restoration of object ownership
    user "rails_app"
    password "railsr0x"
    database :rails_app
  end
end
```
Hmm, not sure really, since I never used the sftp part :) It was contributed by a user. Let's start by running it with the -v switch to make it print what it does...
Well, the -v helped... getting somewhere. It's getting stuck in a begin/rescue loop in lib/astrails/safe/sftp.rb, lines 26-37. On line 37 there is the retry, so it keeps retrying and never stops. So it looks like it's failing on line 27, sftp.upload! @backup.path, full_path. Wonder why it's doing that.
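For readers following along, the loop being described has roughly the shape sketched below. This is a paraphrase of the reported behavior, not the actual lib/astrails/safe/sftp.rb source, and it assumes sftp, @backup, and full_path are in scope as they are in that file.

```ruby
# Hypothetical paraphrase of the reported retry loop (not the real source):
# an unconditional retry after a rescued failure loops forever when the
# underlying cause of the failed upload never goes away.
begin
  sftp.upload! @backup.path, full_path
rescue Net::SFTP::StatusException
  # try to create the missing remote directories, then retry the upload
  dirs = File.dirname(full_path).split("/")
  dirs.each_index do |i|
    dir = dirs[0..i].join("/")
    begin
      sftp.mkdir!(dir) unless dir.empty?
    rescue Net::SFTP::StatusException
      # "already exists" and "permission denied" are both swallowed here
    end
  end
  retry # if every mkdir! silently failed, the upload fails again, forever
end
```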
vitaly, are you able to recreate this problem?
vitaly... AHA! I think I figured it out. It is a bug introduced by the person who added the Net::SFTP functionality to astrails-safe. Can you point me to how I could patch it? Should I fork your repository here on GitHub, make my change, commit it, and then you can merge it into master? I'm very excited; this will be my first gem contribution if it happens. Please advise.
Hey, thanks for tracking it down. Yes, you just fork it on GitHub and then submit a pull request. I'll take care of the rest.
I contributed the original piece of this code, so this has obviously piqued my interest. I see a potential infinite loop when calling retry after the rescue, and I'm curious what the conditions were when this happened. Hopefully you can include them in this thread. The obvious fix would be to only retry a set number of times, but I'll check out the patch when it comes around.
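A minimal sketch of the bounded-retry idea mentioned above, with a hypothetical attempt counter and cap (this is not the eventual patch, and it assumes the same sftp, @backup, and full_path scope as in sftp.rb):

```ruby
MAX_ATTEMPTS = 3  # hypothetical cap, not taken from any actual patch
attempts = 0
begin
  sftp.upload! @backup.path, full_path
rescue Net::SFTP::StatusException
  attempts += 1
  raise if attempts >= MAX_ATTEMPTS  # give up instead of looping forever
  # (re)create any missing remote directories here before trying again
  retry
end
```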
@adam12 and @vitaly: the problem is with sftp.upload! @backup.path, full_path. net/sftp actually does NOT let you upload files anywhere outside the current sftp user's home directory. What astrails-safe is trying to do here is upload into the backup directory, which lives under the root directory by default (i.e. '/backup/pgdump'). Essentially, if you change the local path in your astrails-safe file (i.e. backup_test.rb) to something like local :path => "backup/:kind/:id", you should not encounter the same problem again. I haven't tried this out yet, but I think that should do it.

However, a couple of things can be done here. First, I agree with @adam12 that the number of retries in the rescue statement needs to be limited to a finite amount so we don't have this infinite rescue loop going. Second, we could parse the local path in lib/astrails/safe/sftp.rb so that it sftps into the HOME directory of the sftp user instead of the root directory. What do you guys think?

Oh, and I spent a couple of days looking into the 'net/sftp' gem/lib: basically it hasn't been updated in a while, it was written by 'jamis', and he is no longer the maintainer. I tried for a long while to figure out whether there is a way to change directory in the last version of 'net/sftp' before you upload, but could not find one. So what do you guys think would be a better way to go now? By the way, I would like to make this change and contribute it :)
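To illustrate the home-directory point with plain net-sftp, reusing the host and credentials from the sample config above (the dump file name is made up): a remote path without a leading '/' is resolved relative to the SFTP user's login directory, while '/backup/...' is an absolute path that the user may not have permission to write to.

```ruby
require 'net/sftp'

Net::SFTP.start("192.168.50.239", "foo", :password => "bar123") do |sftp|
  # absolute remote path: requires write access to /backup on the server
  # sftp.upload!("/backup/pgdump/dump.sql", "/backup/pgdump/dump.sql")

  # relative remote path: resolves under the "foo" user's home directory
  # (the remote backup/pgdump directories must already exist)
  sftp.upload!("/backup/pgdump/dump.sql", "backup/pgdump/dump.sql")
end
```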
Just to let you guys know, I just tried this for the local path: local :path => "backup/:kind/:id". This works. Of course, this would store backups in the HOME directory of both the machine making the backups and the remote machine. I don't mind having the remote machine store backups in the HOME directory of the remote user, but on the machine making the backups it would make sense to keep them in the root directory.

I guess this goes back to my earlier suggestion: parse the full_path in line 27 to remove the root directory location '/' from the path and ensure that the backup path for the remote directory is NOT the root directory. Let me know what you guys think. Again, I've already forked the project; once I hear from you guys I will make the change in my fork and let you know. And I hope you incorporate the change... it would be my first gem contribution :)
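A one-line sketch of the path-parsing band-aid being proposed here, applied to the upload call quoted earlier (hypothetical variable name; as the next comment points out, this confines remote backups to $HOME):

```ruby
# strip any leading "/" so the remote target resolves under the SFTP
# user's home directory instead of the filesystem root (band-aid only)
remote_path = full_path.sub(%r{\A/+}, "")
sftp.upload! @backup.path, remote_path
```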
I think removing the leading '/' from a path is only going to be a band-aid. From what I can gather after running a test with your configuration, the infinite loop is caused by a permissions problem. safe rescues the failed upload (due to the path not existing) and then attempts to walk the path recursively, creating each folder. What's happening is that the first folder it attempts to create fails (probably because 'foo' UID != 0). Currently we rescue from mkdir! (as it will raise an exception if the folder already existed) and move on to the next part of the path. Of course, this repeats because the original mkdir! failed, and we just keep repeating the cycle forever.

Removing the leading '/' will ensure that the upload succeeds (since this will default to $HOME), but it will prohibit anybody from storing remote backups anywhere outside of their $HOME. I've pushed an update out to my fork, but I encourage you to make a fix in your way and have Vitaly pull your changes.
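One way the swallowed mkdir! failure could be avoided is sketched below with a hypothetical helper; this is not adam12's actual fix. It assumes Net::SFTP::StatusException#code can be used to tell "already exists" (many SFTPv3 servers report FX_FAILURE, code 4) apart from "permission denied" (code 3).

```ruby
require 'net/sftp'

# Hypothetical helper: create each component of +dir+ on the server, but
# only swallow the error when the directory (probably) already exists, so
# a permission failure surfaces instead of feeding an endless retry loop.
def mkdir_p!(sftp, dir)
  prefix = dir.start_with?("/") ? "/" : ""
  parts = dir.split("/").reject { |p| p.empty? }
  parts.each_index do |i|
    path = prefix + parts[0..i].join("/")
    begin
      sftp.mkdir!(path)
    rescue Net::SFTP::StatusException => e
      # code 4 (FX_FAILURE) is how many SFTPv3 servers say "already exists";
      # anything else (e.g. code 3, permission denied) is re-raised
      raise unless e.code == 4
    end
  end
end
```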
@adam12, does the patch on your branch now allow you to put backups in any remote directory?
Any directory that the user has permissions to write to, yes.
Hey guys, I see you're having fun :). Sorry for not participating, but I'm in the middle of some RL stuff. I see you're managing pretty well anyway :)
@adam12, ah, if your patch works then I think this should resolve the issue. Just a quick note: what machine did you try to sftp to? I was trying to sftp to an Ubuntu (Debian-based) machine with a sudoer user who does have access to read/write any file on the system. I'm assuming this is not an issue, but just thought I'd throw that out there.

@adam12, honestly, since your patch allows a user to put backups pretty much anywhere (provided they have access to that destination), that is exactly what I was looking for. Like you said, my approach was a band-aid which I thought would solve the problem.

@vitaly, can you merge @adam12's patch into the main astrails-safe branch and release a new gem? I would love to start using this for my backup solution as soon as the patch is applied.

On another note, I got excited that I would finally be contributing code to a gem... It doesn't make sense in this case since @adam12's patch is sufficient and does the job well. Well, at least I was able to bring attention to the problem and help debug it some... hope that helped :)
I'll be unavailable over the next week due to vacation, but I'll check in on this issue when I return.
The fix is committed here: adam12@e17039450f519b19bbcd15476e9428bb7cfa9599

@vitaly, let me know if you want/require a pull request.
I'm able to create backups with astrails-safe just fine when I run the backup locally. However, when I try to back up with sftp, it seems to hang on the sftp part, and I have to kill the astrails-safe process manually. I know that it is hanging on the sftp part because when I kill the process, I see that the local copy of the backup was created. Additionally, I've used the same configuration specified in my astrails-safe backup settings to sftp to the other machine from a shell, so I know it's not a configuration issue. No clue how to debug the sftp process.