Reduce wait period to avoid flakiness #8701
Conversation
Signed-off-by: Gabriel Fukushima <[email protected]>
- firstTekuNode.waitForEpochAtOrAbove(2);
+ firstTekuNode.waitForEpochAtOrAbove(1);
if we're changing this anyway, i'd change this back to 2, and i'd change the genesis wait time to 60 seconds to let both nodes be online close to genesis...
but i'd also be starting node 2 prior to waiting for epoch 2 i think...
This wouldn't work because, in the current logic, we would have to wait until epoch 2 to start the second node, which would only drag the test out to a longer wait. What we could do instead is change the slashing to be in a block at epoch 4 or 5.
the BN will come up slow, so if the second BN starts before genesis and BOTH BN wait for the 2 epochs, then the second VC starts, by then the world order is a lot less ugly i think you'll find.
I still don't think we should increase the genesis wait from 10 to 60 seconds and make this longer than it needs to be; the test usually succeeds in ~50 seconds, so making it longer for no reason doesn't sound like a better solution. I'm OK with starting beacon2 earlier.
Signed-off-by: Gabriel Fukushima <[email protected]>
Signed-off-by: Paul Harris <[email protected]>
  final TekuBeaconNode secondBeaconNode =
      createTekuBeaconNode(
          TekuNodeConfigBuilder.createBeaconNode()
              .withGenesisTime(genesisTime)
              .withNetwork(network)
              .withRealNetwork()
-             .withAltairEpoch(altairEpoch)
Is the removal of withAltairEpoch intentional?
I think so; although this was added by Paul's commit, we never actually reached the epoch where Altair was set.
yes it was pointless to have altair at epoch 100 when we're finishing in a few epochs...
LGTM
* reduce wait period to avoid flakiness

Signed-off-by: Gabriel Fukushima <[email protected]>

* Reduce parallelism for ATs

Signed-off-by: Gabriel Fukushima <[email protected]>

---------

Signed-off-by: Gabriel Fukushima <[email protected]>
PR Description
This PR reduces the wait to post the slashing event and start the second node from the second epoch to the first.
It also reduces parallelism for the ATs, since the last run shows some of them peaking near 100% CPU usage.
Fixed Issue(s)
Fixes #8665
Documentation
Add the doc-change-required label to this PR if documentation updates are required.
Changelog