Describe the bug
When handling large database dumps (hundreds of megabytes to gigabytes), the kando push command's memory usage grows linearly with the amount of data pushed, and the process is eventually killed by the OOM killer once it hits its memory limit or the host runs out of available memory.
To Reproduce
Use the MongoDB example blueprint (v1) against a MongoDB database with at least 1 GB of collection data.
Monitor the execution of
mongodump | kando location push -
The kando process starts with approximately 60 MB of RSS memory.
When the dump reaches 100 MiB, the RSS memory usage grows to 280 MB.
When the dump reaches 300 MiB, the RSS memory usage grows to 870 MB.
When the dump reaches 500 MiB, the RSS memory usage grows to 1400 MB.
and so on. The pipe is monitored with the pv tool.
By comparison, when MinIO's mc client is used instead of kando, its RSS reaches about 600 MB and stays there, regardless of the dump size.
Expected behavior
Memory usage should remain roughly constant throughout the upload, or there should be an option to cap the maximum memory usage.
Thanks for opening this issue 👍. The team will review it shortly.
If this is a bug report, make sure to include clear instructions on how to reproduce the problem, with minimal reproducible examples where possible. If this is a security report, please review our security policy as outlined in SECURITY.md.
If you haven't already, please take a moment to review our project's Code of Conduct document.
Thank you for raising this issue!
We believe this is happening due to the underlying stow package we use to interact with S3. One possible solution is to switch to the minio-go library. We'll add this to the roadmap and address it soon.
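For reference, a minimal sketch of what a streaming upload with minio-go could look like, assuming placeholder endpoint, credentials, bucket, object name, and part size (none of these reflect Kanister's actual configuration). Passing an unknown object size makes the client fall back to a multipart streaming upload, so only one part is buffered in memory at a time:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Placeholder endpoint and credentials for illustration only.
	client, err := minio.New("s3.example.com", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Passing an unknown size (-1) makes minio-go perform a multipart
	// streaming upload, reading stdin part by part instead of buffering
	// the whole dump in memory. PartSize bounds the per-part buffer.
	info, err := client.PutObject(context.Background(), "backups", "mongodump.archive",
		os.Stdin, -1, minio.PutObjectOptions{PartSize: 64 * 1024 * 1024})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded %d bytes", info.Size)
}
```

Piping mongodump output into a program like this should show RSS plateau around a single part size rather than growing with the dump.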