Read timed out during download sync #720
The current version of b2sdk has HTTP timeouts set to 15 minutes (since b2_copy_file can sometimes take a long time). I think you should try again :)
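For context, a 15-minute read timeout in plain Python (this is a generic sketch, not the b2sdk internals; the URL and function name are placeholders) would look like:

```python
import urllib.request

# A 15-minute (900 s) read timeout, as described above.
TIMEOUT_SECONDS = 15 * 60

def fetch(url):
    # urlopen raises a timeout error (a subclass of OSError) if no data
    # arrives within the window, matching the "Read timed out" error here.
    with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
        return resp.read()
```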
Current version means what? The head of the main branch? We're using 2.5.0 right now.
This setting is in b2-sdk-python, actually. Which version of b2-sdk-python do you have? Even before, the timeout was, I think, 10 minutes. Is this issue persistent, or did it happen once? Sometimes the server can stall. By the way, is this the full stack trace? This exception should be caught by the downloader, and the download should continue from the point where it broke off. Maybe we are not catching it properly?
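The catch-and-resume behavior described above can be sketched as follows. This is a minimal illustration, not the b2-sdk-python implementation; `FlakySource` and `download_with_resume` are hypothetical names, and the simulated source stands in for a remote file whose read times out once.

```python
import socket


class FlakySource:
    """Simulated remote file whose first read raises a timeout."""

    def __init__(self, data):
        self.data = data
        self.failed = False

    def read_from(self, offset):
        if not self.failed:
            self.failed = True
            raise socket.timeout("Read timed out")
        return self.data[offset:]


def download_with_resume(source, max_retries=3):
    """Retry after a read timeout, resuming from the bytes already received."""
    buf = bytearray()
    retries = 0
    while True:
        try:
            buf += source.read_from(len(buf))
            return bytes(buf)
        except socket.timeout:
            retries += 1
            if retries > max_retries:
                raise
            # Loop again, continuing from offset len(buf) instead of restarting.


print(download_with_resume(FlakySource(b"example payload")))
```

The key point is that the retry resumes from the current offset rather than discarding the partial download, which is what the reporter below observed happening.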
We downloaded the pre-built Linux release with It happens when syncing with more than one thread. The affected host is at Hetzner (an EX62 with hardware RAID 10). Sorry, I guess this is the rest of the stack trace (it looked similar, which is why I truncated it):
I think I know what is causing it, but I need to investigate. Please use one thread for now; in your case it will not actually be a single thread, because your large files create their own threads.
I already did that, and I can confirm that 1 thread works for my scenario. 👍
The way threads are configurable for sync from cloud to local is not ideal. We will change it in the future, but for now you should use 1 or 2.
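The thread-count limit discussed above can be illustrated with a bounded worker pool; this is a generic sketch of capping download concurrency (the way a low `--threads` value does), not the b2 sync implementation, and `sync_files` is a hypothetical name.

```python
from concurrent.futures import ThreadPoolExecutor


def sync_files(files, workers=1):
    """Download the given files with at most `workers` concurrent threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so results line up with `files`.
        return list(pool.map(lambda name: f"downloaded {name}", files))


print(sync_files(["a.bin", "b.bin"]))  # → ['downloaded a.bin', 'downloaded b.bin']
```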
Trying to download around 7 TB divided into 13 files, at about 1 Gbps; however, I have had to restart it 3 times already, because after a while it crashes with the following exception and ALL download progress is lost:
I just ran

b2 sync --threads 4 b2://mybucket/.../ ./
Is it possible to increase the timeout, or just retry? ;-)