
Refactor: Should we do all mmap via Mmapper? #1239

Open
wks opened this issue Nov 28, 2024 · 1 comment

Comments

@wks
Collaborator

wks commented Nov 28, 2024

Currently we have many places that call mmap or its wrappers, including memory::{dzmmap,dzmmap_noreplace,mmap_noreserve}.

  • MapState: The MapState::transition_to_* functions call those functions directly, as if a MapState could transition its own state without the help of Mmapper or an MmapSupport instance.
  • LockFreeImmortalSpace: It skips the Mmapper and calls dzmmap_noreplace directly.
  • RawMemoryFreeList: It calls dzmmap_noreplace to map its memory.

Among them, the MapState::transition_to_* functions are called by Mmapper itself as utilities, while the other use cases bypass Mmapper deliberately. Specifically, LockFreeImmortalSpace allocates the entire space eagerly so that we don't need to get chunks from VMMap and mmap on demand. RawMemoryFreeList is also eagerly allocated because the freelist is small even for the 2TB space extent in Map64, and unused portions of the freelist are backed by all-zero pages which will not be brought into physical memory until used.
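To make the trade-off concrete, here is a minimal sketch (not MMTk's actual API; all names are hypothetical) of a mapper that tracks a per-chunk MapState. Going through the mapper lets it refuse to overwrite an already-mapped chunk; a space that calls mmap directly never updates this table, so the mapper loses that guarantee:

```rust
// Hypothetical sketch: a mapper tracking per-chunk map states, so it can
// detect attempts to re-map a chunk. Not the real mmtk-core Mmapper.
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Debug)]
enum MapState {
    Unmapped,
    Quarantined,
    Mapped,
}

struct ToyMmapper {
    // chunk index -> current state
    states: HashMap<usize, MapState>,
}

impl ToyMmapper {
    fn new() -> Self {
        Self { states: HashMap::new() }
    }

    /// Map a chunk, refusing to overwrite one that is already mapped.
    fn ensure_mapped(&mut self, chunk: usize) -> Result<(), String> {
        match self.states.get(&chunk).copied().unwrap_or(MapState::Unmapped) {
            MapState::Mapped => Err(format!("chunk {chunk} already mapped")),
            _ => {
                // A real implementation would call dzmmap_noreplace here.
                self.states.insert(chunk, MapState::Mapped);
                Ok(())
            }
        }
    }
}

fn main() {
    let mut mapper = ToyMmapper::new();
    // First mapping through the mapper succeeds.
    assert!(mapper.ensure_mapped(3).is_ok());
    // A second attempt through the mapper is caught.
    assert!(mapper.ensure_mapped(3).is_err());
    // But a space calling mmap directly never updates `states`, so the
    // mapper could later hand out and overwrite that range unknowingly.
}
```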

I can think of two reasons for doing the mmap of LockFreeImmortalSpace and RawMemoryFreeList via Mmapper. But there are alternatives, too.

  1. Mmapper implementations maintain a list of MapState entries, and are aware of whether a given chunk is mapped, quarantined, etc. If one part of the system bypasses Mmapper, another part of MMTk that uses Mmapper may erroneously overwrite a memory range already mapped by another space, metadata, freelist, or any other entity.
    • An alternative is partitioning the memory into ranges that are managed by Mmapper, and ranges that are managed directly by mmap calls or their wrappers. In other words, instead of letting Mmapper handle every mmap, we tell Mmapper not to manage certain memory ranges.
  2. We have discussed introducing an object (per MMTk instance) for storing states or configurations related to mmap. One use case is enabling/disabling mmap annotation using Options. If all mmap goes through Mmapper, we can just put the state in Mmapper, or let Mmapper be the only object that references that state object, simplifying the system design.
    • An alternative is using an Arc<MmapSupport> (assuming MmapSupport is an object and dzmmap* and mmap_noreserve are its instance methods) to share the MmapSupport instance between the Mmapper and some spaces and/or the Map64 which creates RawMemoryFreeList. Of course, the precondition for this alternative is telling Mmapper not to manage some memory ranges.
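The Arc-sharing alternative in point 2 could look roughly like this. This is a sketch under the stated assumption that MmapSupport is a concrete object holding mmap configuration; neither MmapSupport nor these method bodies exist in mmtk-core today:

```rust
use std::sync::Arc;

// Hypothetical: per-MMTk-instance mmap state/configuration.
struct MmapSupport {
    // e.g. driven by Options to enable/disable mmap annotation.
    annotate_mappings: bool,
}

impl MmapSupport {
    fn dzmmap_noreplace(&self, start: usize, bytes: usize) -> Result<(), ()> {
        // A real implementation would issue the mmap syscall here,
        // annotating the mapping if `annotate_mappings` is set.
        let _ = (start, bytes, self.annotate_mappings);
        Ok(())
    }
}

// The Mmapper holds one reference to the shared support object...
struct Mmapper {
    support: Arc<MmapSupport>,
}

// ...and a space that eagerly maps its own memory holds another, so both
// observe the same configuration without the space going through Mmapper.
struct LockFreeImmortalSpace {
    support: Arc<MmapSupport>,
}

fn main() {
    let support = Arc::new(MmapSupport { annotate_mappings: true });
    let mmapper = Mmapper { support: Arc::clone(&support) };
    let space = LockFreeImmortalSpace { support: Arc::clone(&support) };
    // Both parties share the exact same MmapSupport instance.
    assert!(Arc::ptr_eq(&mmapper.support, &space.support));
    // The space can map eagerly via the shared object, bypassing Mmapper.
    assert!(space.support.dzmmap_noreplace(0x1000_0000, 4096).is_ok());
}
```

As noted above, this design only works if Mmapper is told not to manage the ranges the space maps on its own.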
@qinsoon
Member

qinsoon commented Nov 28, 2024

LockFreeImmortalSpace should definitely use Mmapper. It is debatable for RawMemoryFreeList, but I am in favor of it using Mmapper as well.
