
# Propose a simple trigger prioritization algo for flex event #1112

Open · wants to merge 8 commits into `main` · Changes from 3 commits
10 changes: 10 additions & 0 deletions — flexible_event_config.md

@@ -9,6 +9,7 @@ _Note: This document describes possible new functionality in the Attribution Rep
- [Goals](#goals)
- [Phase 2: Full Flexible Event-Level](#phase-2-full-flexible-event-level)
- [API changes](#api-changes)
- [Trigger prioritization](#trigger-prioritization)
- [Trigger-data modulus matching example](#trigger-data-modulus-matching-example)
- [Configurations that are equivalent to the current version](#configurations-that-are-equivalent-to-the-current-version)
- [Equivalent event sources](#equivalent-event-sources)
@@ -169,6 +170,15 @@ When the `event_report_window` for a spec completes, we will map its summary val
```
  "trigger_summary_bucket": [<bucket start>, <bucket end>]
}
```
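As an illustration of the mapping described above, here is a minimal sketch of how a final summary value could be assigned to its `trigger_summary_bucket`. This is a non-normative sketch: the explainer only specifies the reported field, and the `summary_to_bucket` helper and its `bucket_starts` representation are assumptions for illustration.

```python
def summary_to_bucket(summary_value, bucket_starts):
    """Map a final summary value to its [bucket start, bucket end] pair.

    `bucket_starts` is a sorted list of bucket start values, e.g. [1, 5, 10]
    meaning buckets [1, 4], [5, 9], and [10, inf). Hypothetical helper.
    """
    bucket = None
    for i, start in enumerate(bucket_starts):
        if summary_value >= start:
            # The bucket ends just before the next bucket's start,
            # or is unbounded for the last bucket.
            end = bucket_starts[i + 1] - 1 if i + 1 < len(bucket_starts) else float("inf")
            bucket = [start, end]
    return bucket  # None if summary_value falls below the first bucket


summary_to_bucket(7, [1, 5, 10])  # -> [5, 9]
```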
### Trigger prioritization

Given that triggering attribution can affect a source's state without producing a report, we will need a new algorithm for doing trigger prioritization. Here is a sketch of how it could work:

1. Always maintain a sorted list of triggers, sorted in order of priority (descending) and trigger time (ascending)
**Collaborator:** Always maintain a sorted list of triggers that are not fully consumed in a sent report, sorted in order of priority (descending), then trigger time (ascending).

**Collaborator:** What does "fully consumed" mean here?

**@giladbarkan-github** (Nov 15, 2023): Trigger value was fully applied in the creation of a report whose report time is earlier than the trigger time (for the "count" operator, the value is 1).

**Collaborator:** (Current trigger time, so report time less than current window start).

**Collaborator:** In the context of this PR, I don't understand how a trigger could be partially consumed. Not that this PR goes into the details of summary buckets, but in step 2.1 is the trigger's entire value not added to the spec's current value, resulting in N reports depending on the summary buckets?

**Collaborator:** @yanzhangnyc I think you're describing an implementation, not an explainer-level algorithm description, particularly with the idea that part of a trigger's value is left over for another report in your third item.

**Collaborator:** Why isn't deleting all triggers associated with the spec + window sufficient?

Let's say we consumed 10 of a trigger's value of 12. If we erase this trigger, we have no way of knowing the priority of the remaining value of 2 as it applies to the procedure of generating reports in the next window.

**Collaborator:**

> Why isn't deleting all triggers associated with the spec + window sufficient?
>
> Let's say we consumed 10 of a trigger's value of 12. If we erase this trigger, we have no way of knowing the priority of the remaining value of 2 as it applies to the procedure of generating reports in the next window.

That is an implementation detail not relevant to the explainer-level description here and is covered by "update the source's state based on all of the triggers that were successfully applied."

**Collaborator:** Ok, then if nothing else, this shows how hard it is for me to understand the current proposal.

**Collaborator:** What does "update the source's state based on all of the triggers that were successfully applied" mean?

2. Whenever a spec's window's end time is hit (breaking ties arbitrarily):
**Collaborator:** When a new trigger "matches" this source, we will perform the following process. Not at the end of a reporting window.

**Collaborator:** Why does this matter?

**Collaborator (author):** I intentionally described this per reporting window to minimize the amount of state / accounting. I think the end result should be the same?

**Collaborator:** To me it's a lot easier to understand this algorithm when it's described as happening at the end of each report window, not when a trigger is handled.

**@yanzhangnyc** (Nov 15, 2023): @csharrison @giladbarkan-github, this depends on how we understand "end of trigger window". For example, at the end of the trigger window for trigger_data 1, are 1) all triggers processed, or 2) only triggers containing trigger_data 1 processed? If 2), the trigger data with the earlier reporting window will have the highest priority. If 1), it might be even more complicated than handling it at trigger time. Let me think about any other complications for this option.

**Collaborator:** @yanzhangnyc The PR says "iterate through triggers in order". I take that to mean all triggers associated with this source.

**Collaborator:**

> @yanzhangnyc The PR says "iterate through triggers in order". I take that to mean all triggers associated with this source.

If so, we need to emphasize "at the end window of any trigger data" instead of simply mentioning "trigger window end".

**Collaborator (author):** I updated the wording. PTAL

csharrison marked this conversation as resolved.
   1. Iterate through triggers in order, "applying" them to generate a list of "speculative" reports. Stop when privacy limits are hit.
   2. Flush all of the speculative reports that are scheduled to be emitted in the current window, and update the source's state based on all of the triggers that were successfully applied.
   3. Erase all of the triggers associated with the current spec and window.
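The per-window flush sketched in the steps above could be rendered roughly as follows. This is a non-normative sketch: the `Trigger` and `Spec` structures, and the `apply_trigger` / `privacy_limit_hit` callbacks standing in for report generation and privacy accounting, are assumptions, not part of the explainer.

```python
from dataclasses import dataclass, field


@dataclass
class Trigger:
    priority: int
    trigger_time: int
    value: int = 1  # 1 for the "count" operator


@dataclass
class Spec:
    pending_triggers: list = field(default_factory=list)  # not yet erased


def on_window_end(spec, window_end, apply_trigger, privacy_limit_hit):
    """Sketch of the per-window flush: sort, apply, flush, erase."""
    # 1. Sort by priority (descending), then trigger time (ascending).
    spec.pending_triggers.sort(key=lambda t: (-t.priority, t.trigger_time))

    # 2.1. Apply triggers in order to build "speculative" reports,
    # stopping once privacy limits are hit.
    speculative, applied = [], []
    for trigger in spec.pending_triggers:
        if privacy_limit_hit(speculative):
            break
        speculative.extend(apply_trigger(trigger))
        applied.append(trigger)

    # 2.2. Flush the speculative reports scheduled for the current
    # window; the source's state would be updated from `applied` here.
    flushed = [r for r in speculative if r["report_time"] <= window_end]

    # 2.3. Erase all triggers associated with this spec and window.
    spec.pending_triggers.clear()
    return flushed
```

For example, with two pending triggers of priorities 1 and 2, the priority-2 trigger is applied first regardless of its later position, and all triggers are erased after the flush.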

### Trigger-data modulus matching example
