Often we need to compare different patches of neo-go, or even add some C#/mixed setups on top of that.
There are some problems:
1. Running benchmarks on a local machine can interfere with other running programs, which affects the results.
2. Also because of system load, multiple benchmark runs give more meaningful results. The mean/variance of block times/TPS within a single run differs from the same metrics computed over multiple runs: the former reflects how the node processes transactions, while the latter helps draw more valid conclusions. On my machine, ±100 TPS (with a mean of 1600) across multiple runs happens constantly, and a 10% variation is something we should take into account.
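To make the single-run vs. cross-run distinction concrete, here is a minimal sketch of the aggregation; the TPS numbers are made up to mirror the ~1600 ± 100 observation above:

```python
import statistics

def summarize(runs):
    """Summarize TPS across benchmark runs.

    runs: list of per-run TPS sample lists.
    Returns (per-run means, cross-run mean, cross-run sample stdev).
    """
    means = [statistics.mean(r) for r in runs]
    return means, statistics.mean(means), statistics.stdev(means)

# Hypothetical samples from three runs, in the spirit of ~1600 TPS ± 100.
runs = [[1550, 1600, 1650], [1450, 1500, 1550], [1650, 1700, 1750]]
means, overall, spread = summarize(runs)
print(means, overall, spread)
```

Within each run the spread is small, but across runs the per-run means themselves vary by about 100 TPS, which is the variation a single run cannot reveal.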
Here is how we can make benchmark more flexible:
1. Allow providing revisions to compare, something like `REVISIONS=master,patch1,patch2 make start.GoFourNodes10wrk`. The interface is open for discussion. Note that C# and mixed setups should also be allowed.
2. Emit plots for the results from (1). We already have scripts for this; they can be extended.
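A driver for (1) could be as simple as the sketch below. The `result.log` file name and the checkout-then-make flow are assumptions for illustration, not the actual benchmark layout:

```python
import os
import subprocess

def parse_revisions(spec):
    """Split a REVISIONS=rev1,rev2,... spec into a clean list of names."""
    return [r.strip() for r in spec.split(",") if r.strip()]

def run_benchmarks(spec, target="start.GoFourNodes10wrk"):
    """For each revision: check it out, run the benchmark target, and
    keep the result under a per-revision name.  The output file name
    here is hypothetical."""
    for rev in parse_revisions(spec):
        subprocess.run(["git", "checkout", rev], check=True)
        subprocess.run(["make", target], check=True)
        os.rename("result.log", f"result_{rev}.log")  # hypothetical output file
```

C#/mixed setups would fit the same loop by mapping each entry to a different make target instead of a git revision.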
I think doing all of this in a single command can also make results more reproducible and easier to share.
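Feeding the existing plot scripts could start from a merged per-revision data set; a sketch, with the column names being assumptions:

```python
import csv
import io

def merge_results(results):
    """Merge per-revision TPS samples into one CSV that plotting
    scripts could consume.  Column names are assumptions.

    results: dict mapping revision name -> list of (second, tps) samples.
    Returns the CSV text.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["revision", "second", "tps"])
    for rev, samples in results.items():
        for second, tps in samples:
            writer.writerow([rev, second, tps])
    return buf.getvalue()
```

A single file keyed by revision keeps one run's output self-describing, which also helps with reproducing and sharing results.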