Stream blob data in chunks to files to not occupy too much memory #551
Conversation
* development: Blobs directory configuration (#549)
Codecov Report
Patch coverage:
Additional details and impacted files
@@ Coverage Diff @@
## development #551 +/- ##
===============================================
+ Coverage 92.50% 92.54% +0.04%
===============================================
Files 105 105
Lines 17932 17988 +56
===============================================
+ Hits 16588 16647 +59
+ Misses 1344 1341 -3
☔ View full report in Codecov by Sentry.
Really good, looking forward to exploring optimal configurations for managing memory use with this in place.
* development:
  - Make sure `/tmp` directory does not run out of scope before application ends (#557)
  - Integrate `Bytes` value (#554)
  - Stream blob data in chunks to files to not occupy too much memory (#551)
  - Blobs directory configuration (#549)
  - Use correct MAX_BLOB_PIECE_LENGTH from p2panda_rs
  - Build a byte buffer over paginated pieces when assembling blobs (#547)
  - HTTP routes to serve files with correct content type and etag headers (#544)
  - Task for automatic garbage collection of unused documents and views (#500)
  - Refactor tmp blob dir creation after rebase
  - Fix after rebase
  - "blob" materializer task (#493)
  - Add static file server to `http` service (#483)
  - Enable deletion of dangling `document_views` and related `document_view_fields` from db (#491)
  - BlobStore for retrieving raw blob data from the db (#484)
Establishes a stream that reads binary blob data from the database (via pagination) and writes it to a file on the file system. This makes it possible to handle very large blobs, even in parallel, without occupying too much memory.
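Conceptually, the write path looks like the sketch below. This is not the actual aquadoggo implementation: `next_blob_pieces`, `Cursor` and the page layout are hypothetical stand-ins for the paginated blob-piece query, shown only to illustrate how streaming keeps memory bounded.

```rust
use std::path::Path;

use tokio::fs::File;
use tokio::io::AsyncWriteExt;

/// Hypothetical pagination cursor for the blob-piece query.
struct Cursor(u64);

/// Hypothetical stand-in for the paginated database query: returns one
/// page of raw blob pieces plus the cursor for the next page, or `None`
/// once all pieces have been read.
async fn next_blob_pieces(_cursor: &Cursor) -> anyhow::Result<Option<(Vec<Vec<u8>>, Cursor)>> {
    // ... database query elided in this sketch ...
    Ok(None)
}

/// Streams a blob from the database into a file, one page at a time.
async fn stream_blob_to_file(path: &Path) -> anyhow::Result<()> {
    let mut file = File::create(path).await?;
    let mut cursor = Cursor(0);

    // Only one page of pieces is held in memory per iteration; each piece
    // is written to disk immediately instead of being assembled into one
    // large in-memory buffer first.
    while let Some((pieces, next)) = next_blob_pieces(&cursor).await? {
        for piece in pieces {
            file.write_all(&piece).await?;
        }
        cursor = next;
    }

    // Make sure all buffered bytes reach the file before finishing.
    file.flush().await?;
    Ok(())
}
```

Since each page is dropped after it is written, peak memory use is proportional to the page size rather than to the full blob, which is what allows several large blobs to be materialized in parallel.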
Closes: #548
📋 Checklist
* CHANGELOG.md