This repository has been archived by the owner on Jan 6, 2022. It is now read-only.

Need to close() and possibly pool hypercore instances #18

Open
pfrazee opened this issue Dec 30, 2016 · 3 comments

pfrazee (Collaborator) commented Dec 30, 2016

AFAICT, via random-access-file, hypercore opens a file descriptor for every instance and doesn't close the FD until .close() is called. We definitely don't want to leak those.

Any time hypercore-archiver's get() method is called (e.g. in archiver-api's archiveProgress() method), a hypercore instance is created. Thus another FD is created and never cleaned up.

Two thoughts on that:

  1. We need to be sure to clean up all hypercores by .close()ing them
  2. Should we be pooling hypercore instances instead of creating them with .get() every time?
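For thought (1), a minimal sketch of a close-after-use wrapper. This is not existing hypercore-archiver code: `withArchive`, `getArchive`, and `fn` are hypothetical names, and the only assumption about the feed is that it exposes a Node-style `close(cb)` like hypercore does.

```javascript
// Hypothetical helper: run a one-off operation against a hypercore,
// then always .close() it so the underlying FD is released, even if
// the operation reports an error.
function withArchive (getArchive, key, fn, cb) {
  const archive = getArchive(key)
  fn(archive, function (err, result) {
    // release the file descriptor before reporting back
    archive.close(function () {
      cb(err, result)
    })
  })
}
```

Something like this would keep call sites such as archiveProgress() from leaking an FD per call, at the cost of re-opening the file on every operation.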
pfrazee (Collaborator, Author) commented Dec 30, 2016

Alternatively, we could pool random-access-file instances, but in either case we'll still need to .close() the hypercores.

@pfrazee pfrazee added the fix label Dec 31, 2016
@pfrazee pfrazee modified the milestone: v1 Dec 31, 2016
pfrazee (Collaborator, Author) commented Jan 4, 2017

Archiver-server has a solution we can consider: an LRU cache.

joehand (Collaborator) commented Jan 4, 2017

> Should we be pooling hypercore instances instead of creating them with .get() every time?

Ya, we could definitely optimize the get to not create them every time. Seems like an LRU cache would be the most straightforward way to implement that.
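A minimal sketch of the LRU idea, not the archiver-server implementation: a pool that caches feed instances by key and .close()s the least-recently-used one on eviction, so at most `maxSize` FDs stay open. `FeedPool` and `createFeed` are hypothetical names; `createFeed` stands in for whatever actually instantiates the hypercore.

```javascript
// Hypothetical LRU pool for hypercore instances. Relies on the fact
// that a JS Map iterates in insertion order, so the first entry is
// always the least-recently-used one.
class FeedPool {
  constructor (createFeed, maxSize) {
    this.createFeed = createFeed
    this.maxSize = maxSize
    this.feeds = new Map()
  }

  get (key) {
    let feed = this.feeds.get(key)
    if (feed) {
      // refresh recency: re-insert the key at the end of the Map
      this.feeds.delete(key)
    } else {
      feed = this.createFeed(key)
    }
    this.feeds.set(key, feed)
    if (this.feeds.size > this.maxSize) {
      // evict the least-recently-used feed and release its FD
      const [oldKey, oldFeed] = this.feeds.entries().next().value
      this.feeds.delete(oldKey)
      oldFeed.close()
    }
    return feed
  }
}
```

Repeated get()s for the same archive would then reuse one instance instead of opening a new FD each time, and eviction gives a natural place to call .close().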

garbados pushed a commit to garbados/hypercloud that referenced this issue Aug 14, 2017