Add cache-based duplicate log detection to SafeEthClient #180
Conversation
The implementation looks fine (client.go), but the tests seem quite complicated to follow. I'm approving now so this can merge ASAP, as requested.
```diff
@@ -187,7 +200,7 @@ func (c *SafeEthClient) SubscribeFilterLogs(ctx context.Context, q ethereum.FilterQuery
 		rangeStartBlock = 0
 	}

-	fromBlock := max(lastBlock, rangeStartBlock) + 1
+	fromBlock := max(lastBlock, rangeStartBlock+1)
```
Is this a bugfix?
More or less; this is related to this part of the description:

> It also makes our log filtering start from the last block, since we can now detect duplicate logs - so if, e.g., there were 2 log emissions on `lastBlock` and we'd only gotten 1 before filtering logs, we don't lose the second one.
So when the log range start is based on the last block, filtering now runs from `lastBlock` -> end instead of `lastBlock + 1` -> end.
In practical terms this doesn't happen to us, but it is a fix overall, now that we can detect duplicates.
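To make the off-by-one concrete, here is a minimal sketch of the two `fromBlock` expressions side by side. The function names are hypothetical (the real code computes this inline in `SubscribeFilterLogs`); the point is only the arithmetic difference when `rangeStartBlock` is below `lastBlock`.

```go
package main

import "fmt"

// oldFromBlock mirrors the pre-change expression: max(lastBlock, rangeStartBlock) + 1.
// It always skips past lastBlock, even though logs in lastBlock may not all
// have been delivered yet.
func oldFromBlock(lastBlock, rangeStartBlock uint64) uint64 {
	return maxU64(lastBlock, rangeStartBlock) + 1
}

// newFromBlock mirrors the post-change expression: max(lastBlock, rangeStartBlock+1).
// When lastBlock dominates, filtering restarts at lastBlock itself; duplicate
// logs re-fetched from that block are filtered out by the cache instead.
func newFromBlock(lastBlock, rangeStartBlock uint64) uint64 {
	return maxU64(lastBlock, rangeStartBlock+1)
}

func maxU64(a, b uint64) uint64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Example: last seen block is 100, range start is 0.
	fmt.Println(oldFromBlock(100, 0)) // 101: lastBlock is skipped entirely
	fmt.Println(newFromBlock(100, 0)) // 100: lastBlock is re-filtered
}
```

So the change only matters when the last seen block is ahead of the range start, which is exactly the "second log in `lastBlock`" scenario described above.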
Currently, we use simple block-number logic to skip possibly duplicate blocks. This is enough for our uses, since in a real scenario the log rate for these events should be at most 1 per block, but it's not a full solution. This PR introduces an LRU cache of log hashes (defined by ourselves) that is used to detect whether a log has already been relayed or not. It also makes our log filtering start from the last block, since we can now detect duplicate logs - so if, e.g., there were 2 log emissions on `lastBlock` and we'd only gotten 1 before filtering logs, we don't lose the second one.
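The description's cache-based dedup can be sketched as follows. This is an illustrative stand-alone version, not the actual `client.go` code: the `Log` struct, the fields hashed, and the cache capacity are all assumptions (the real client would hash fields of go-ethereum's `types.Log`, and may use a library LRU rather than this hand-rolled one).

```go
package main

import (
	"container/list"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Log is a minimal stand-in for an Ethereum log entry (illustrative only).
type Log struct {
	BlockHash string
	TxHash    string
	Index     uint64
}

// hashLog derives a self-defined identity hash for a log from fields that
// together uniquely identify it.
func hashLog(l Log) [32]byte {
	h := sha256.New()
	h.Write([]byte(l.BlockHash))
	h.Write([]byte(l.TxHash))
	var idx [8]byte
	binary.BigEndian.PutUint64(idx[:], l.Index)
	h.Write(idx[:])
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// logLRU is a tiny fixed-capacity LRU set of log hashes.
type logLRU struct {
	capacity int
	order    *list.List                  // front = most recently seen
	seen     map[[32]byte]*list.Element // hash -> position in order
}

func newLogLRU(capacity int) *logLRU {
	return &logLRU{capacity: capacity, order: list.New(), seen: map[[32]byte]*list.Element{}}
}

// SeenBefore reports whether the hash was already cached, and records it,
// evicting the least recently seen hash when over capacity.
func (c *logLRU) SeenBefore(h [32]byte) bool {
	if el, ok := c.seen[h]; ok {
		c.order.MoveToFront(el)
		return true
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.seen, oldest.Value.([32]byte))
	}
	c.seen[h] = c.order.PushFront(h)
	return false
}

func main() {
	cache := newLogLRU(1024)
	log1 := Log{BlockHash: "0xabc", TxHash: "0xdef", Index: 0}
	fmt.Println(cache.SeenBefore(hashLog(log1))) // first delivery: relay it
	fmt.Println(cache.SeenBefore(hashLog(log1))) // duplicate: drop it
}
```

With a set like this in place, the subscription loop can safely re-filter from `lastBlock` itself: any logs it has already relayed come back as cache hits and are dropped, while a second log from the same block passes through.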