AFAICT, via random-access-file, hypercore opens a file descriptor for every instance and doesn't close that FD until .close() is called. We definitely don't want to leak those.
Any time hypercore-archiver's get() method is called (e.g. in archiver-api's archiveProgress() method), a new hypercore instance is created. Thus another FD is opened and never cleaned up.
Two thoughts on that:
- We need to be sure to clean up all hypercores by .close()ing them (see the sketch after this list).
- Should we be pooling hypercore instances instead of creating them with .get() every time?
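A minimal sketch of the close-after-use pattern. It assumes an archiver.get(key, cb) accessor that yields a single hypercore feed (hypercore-archiver's real get() may yield more than one feed), and withFeed is a hypothetical helper, not part of any existing API:

```js
// Hypothetical helper: borrow a feed, run some work against it,
// then always close it so the random-access-file FD is released.
function withFeed (archiver, key, work, cb) {
  archiver.get(key, (err, feed) => {
    if (err) return cb(err)
    work(feed, (workErr, result) => {
      // Close even when the work failed -- this is the step that frees the FD.
      feed.close(closeErr => cb(workErr || closeErr, result))
    })
  })
}
```

Something like archiveProgress() could then run its reads inside withFeed(), so the FD is released as soon as the progress check finishes.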
> Should we be pooling hypercore instances instead of creating them with .get() every time?
Ya, we could definitely optimize get() to not create them every time. Seems like an LRU cache would be the most straightforward way to implement that.
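A minimal sketch of that caching idea, using the lru-cache npm module (v4-style dispose(key, value) signature) and the same hypothetical single-feed archiver.get() as above; the cap of 64 open feeds is an arbitrary assumption:

```js
const LRU = require('lru-cache')

// Cache open hypercore instances keyed by archive key; close feeds
// on eviction so their file descriptors are released.
const feeds = new LRU({
  max: 64, // assumed cap on simultaneously open feeds
  dispose: (key, feed) => feed.close()
})

function getFeed (archiver, key, cb) {
  const cached = feeds.get(key)
  if (cached) return cb(null, cached)
  archiver.get(key, (err, feed) => {
    if (err) return cb(err)
    feeds.set(key, feed)
    cb(null, feed)
  })
}
```

One wrinkle: a feed evicted while a caller is still using it would be closed out from under them, so a real pool would likely also need reference counting on top of the LRU.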
garbados pushed a commit to garbados/hypercloud that referenced this issue on Aug 14, 2017.