
LRU ("garbage collection") for interned values #597

Open
nikomatsakis opened this issue Oct 16, 2024 · 6 comments
Labels
good first issue (Good for newcomers)
Milestone
Salsa 3.0

Comments

@nikomatsakis
Member

As described in this hackmd, we would like to support LRU-style collection for interned values so that we can cap maximum memory consumption.

@nikomatsakis nikomatsakis added the good first issue label Oct 16, 2024
@nikomatsakis nikomatsakis added this to the Salsa 3.0 milestone Oct 16, 2024
@nikomatsakis
Member Author

nikomatsakis commented Oct 16, 2024

Mentoring instructions

Here are some of the steps required:

  • Remove the reset_at field and reset method. These were used to record when the intern table was completely emptied. I don't think it's useful and nobody uses it, so let's just remove that capability.
    • reset_at was also used in maybe_changed_after; for now, we can just make this method return false, since interned data never changes. We'll revisit this later on.
  • Today a call to intern records a dependency index on the entire intern ingredient
    • This concept (a dependency on the entire table) will be removed entirely when we are done. That can be a nice follow-up refactoring, since we can delete the distinction between a DatabaseKeyIndex (a dependency on a specific id) and DependencyIndex (might depend on the ingredient as a whole).
    • We want to modify this to record a read of the specific interned id:
      • So we have to move that call to report_tracked_read down to the places where the fn returns, e.g., here, here, and here. We will report a tracked read of DependencyIndex { ingredient_index: self.ingredient, key_index: Some(id) } instead of the for_table method (which I believe should be dead code).
  • Optional: refactor to remove DependencyIndex
    • every use of it can be replaced with DatabaseKeyIndex
    • the various methods in Ingredient that take an Option<Id> can be changed to take an Id, and various unwrap calls can be removed from the impls
      • e.g. maybe_changed_after
  • Modify the interned Value struct to record the revision when data was interned:
    • The Value struct is the data that we store in the table for each interned value.
    • It should have two new fields:
      • first_interned_at: Revision, the revision in which the value was first interned (which never changes)
      • last_interned_at: AtomicRevision, the revision in which it was most recently interned (which is updated each time we intern)
    • We will use the first_interned_at field later on to detect whether an intern id has been re-used.
    • We will use the last_interned_at field later on to detect whether it's safe to discard an interned value -- we can NEVER discard values that have already been interned in the current revision, only values that were interned in previous revisions.
    • We will have to initialize/update those new fields each time we intern (a minimal sketch of this appears after this list):
      • When interning, if a value is already present (here or here) we should update last_interned_at to the current revision and then return the id.
        • We'll have to fetch the Value struct from the table by a call to db.zalsa().table().get::<Value<C>>(id), like we do here.
      • If we are interning for the first time (here), we set the two revision fields to the current revision.
    • In maybe_changed_after, we have to do the following (see the second sketch after this list):
      • Assert the input is Some (if you didn't do the previous refactoring)
      • Look up the Value from the table with code like this
        • Check whether first_interned_at is greater than the revision:
          • if so, this slot has been reused, so return true (it did change)
          • if not, update last_interned_at to the current revision, accessible from db.zalsa().current_revision().
            • This is maybe surprising! The idea is that when we are checking dependencies, if we assert that the slot is valid in this revision, we have effectively "re-interned it", because we have told the caller that they are allowed to re-use this same id and expect the same results.
    • When reading from interned values here, we can assert that the value has been interned at least once in this revision by checking last_interned_at. If this assertion fails, it represents a logic error.
  • Add some function for manually clearing old values to enable testing
    • actually, we could add back reset; that should be valid to do if it is invoked with an &mut self and triggers a new revision. That could be useful for testing.
    • in general you should NOT allow a value to be cleared if last_interned_at equals the new revision; but if you take an &mut self you can make it trigger a new revision.
  • Scenarios to test (the final sketch after this list mirrors the first scenario):
    • A tracked function F reads an input I1, interns a value I2, and returns it.
      • Invoke F in revision R1.
      • Modify I1.
      • Clear I2.
      • Invoke F in revision R2.
      • Log should show that I2 is "re-interned".
    • A tracked function F1 invokes F2; F2 interns a value I2 and returns it. F1 reads the interned data along with an input I1 and returns something.
      • Invoke F1 in revision R1, which will invoke F2 and intern I2.
      • Modify input I1.
      • Invoke F1 in revision R2. F2 did not read from I1 and so its result should be valid to re-use. Check that no re-interning occurred and no panics occur in F1 (e.g., because the last_interned_at field has not been properly updated).
    • As above but:
      • Invoke F1 in revision R1, which will invoke F2 and intern I2.
      • Modify input I1 and clear I2.
      • Invoke F1 in revision R2. The interned value will have to be recomputed, so F2 should re-execute, even though no inputs have changed.
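
Below is a minimal, self-contained sketch of the two new revision fields and the intern-time updates described above. The Interner, Value, and Id types are simplified stand-ins for salsa's internal table machinery, not its real API, and AtomicU64 stands in for AtomicRevision:

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::atomic::{AtomicU64, Ordering};

type Revision = u64;
type Id = u32;

struct Value<T> {
    data: T,
    first_interned_at: Revision, // set once when the slot is allocated; never changes
    last_interned_at: AtomicU64, // bumped every time the value is (re-)interned
}

struct Interner<T> {
    map: HashMap<T, Id>,
    table: Vec<Value<T>>,
}

impl<T: Clone + Eq + Hash> Interner<T> {
    fn intern(&mut self, data: T, current: Revision) -> Id {
        if let Some(&id) = self.map.get(&data) {
            // Already present: record the re-intern in the current revision
            // (protecting the slot from collection) and return the id.
            self.table[id as usize]
                .last_interned_at
                .store(current, Ordering::Release);
            return id;
        }
        // Interning for the first time: both revision fields start out
        // at the current revision.
        let id = self.table.len() as Id;
        self.table.push(Value {
            data: data.clone(),
            first_interned_at: current,
            last_interned_at: AtomicU64::new(current),
        });
        self.map.insert(data, id);
        id
    }
}
```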
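
Continuing the same toy model, the maybe_changed_after logic described above might look like this, where revision is the point after which the caller is asking about changes:

```rust
impl<T: Clone + Eq + Hash> Interner<T> {
    fn maybe_changed_after(&self, id: Id, revision: Revision, current: Revision) -> bool {
        let value = &self.table[id as usize];
        if value.first_interned_at > revision {
            // The slot was cleared and reused after the caller last saw it,
            // so this id now names different data: it did change.
            return true;
        }
        // Declaring the slot valid in this revision effectively re-interns it:
        // the caller may keep using the id and expect the same results, so the
        // value must not be collected before the next revision.
        value.last_interned_at.store(current, Ordering::Release);
        false
    }
}
```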
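
Finally, a usage sketch mirroring the first test scenario, again on the toy model rather than a real salsa database:

```rust
fn main() {
    let mut interner = Interner { map: HashMap::new(), table: Vec::new() };

    // Revision R1: tracked function F reads input I1 and interns I2.
    let i2 = interner.intern("I2", 1);

    // Revision R2, with I1 modified but I2's slot not cleared: the dependency
    // check validates the old id and bumps last_interned_at to R2.
    assert!(!interner.maybe_changed_after(i2, 1, 2));

    // Had I2's slot been cleared and reused in R2, first_interned_at would
    // exceed R1 and the check would report a change, forcing a re-intern.
}
```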

@MichaReiser
Contributor

@ibraheemdev are you working on this or parts of it?

@ShoyuVanilla
Contributor

This seems interesting. I'd like to work on this unless someone else is already working on it.

@ShoyuVanilla
Contributor

ShoyuVanilla commented Oct 16, 2024

BTW, I've read some recent papers on cache eviction algorithms that are claimed to be more efficient than LRU

Though GC might not happen very often in salsa's real-world use cases, and the efficiency varies by usage pattern, I think it might be worthwhile to implement multiple eviction policies and expose them as runtime options, since their implementations are quite simple (a rough sketch below)
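
Something like the following, entirely hypothetical shape (none of these names exist in salsa today):

```rust
use std::collections::VecDeque;

// Hypothetical: a policy the intern table could consult when trimming memory.
// Only slots whose last_interned_at is from a previous revision would actually
// be eligible for eviction.
trait EvictionPolicy {
    fn on_access(&mut self, id: u32);         // value was interned or validated
    fn next_victim(&mut self) -> Option<u32>; // next slot to consider evicting
}

struct Lru {
    order: VecDeque<u32>, // least-recently-used ids at the front
}

impl EvictionPolicy for Lru {
    fn on_access(&mut self, id: u32) {
        // O(n) for clarity; a real implementation would use a linked hash map.
        self.order.retain(|&x| x != id);
        self.order.push_back(id);
    }

    fn next_victim(&mut self) -> Option<u32> {
        self.order.pop_front()
    }
}
```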

@ibraheemdev
Contributor

I have been working on the first part of this, as described in the hackmd.

@xiyuzhai

xiyuzhai commented Nov 26, 2024

I'm wondering how necessary this is. If we don't need to throw away any interned values, we can simply store them in a static pool, without even needing a salsa db.

I'm thinking that immortal interned values are inherently independent of salsa's incremental computation capabilities.

Maybe a separate crate for these things. There are definitely scenarios where we don't care about the memory consumed by a certain class of interned values. And salsa's GC'ed interned values might deserve a different name.

Separating these things should make salsa's design much easier, I think.

With that being said, it probably has to do with my own implementation of things. For my own programming language, I split types into ethereal types and fluffy types. Fluffy types are those that change rapidly, for instance, those with some local lifetimes. But ethereal types are those like 'Vec' that are just stable. For my own case, I store fluffy types in a local arena tied to a specific region of code, which is GC'ed automatically with the tracked function. And anything related to ethereal types is immortal, interned. However, if one uses TypeId for both cases, then maybe one could worry about memory consumption issues.
