It is an accepted measurement method to repeat the experiments at least 3 times and then take the averages of the values. Would it be possible to implement such a config option? This could be really informative for evaluating memory consumption and runtime cost.
Yeah, I think this makes sense. One question is how to handle such a measurement. We do not want to simply average every statistic across runs. Some statistics, like the number of bugs found or the number of nodes in the exploded graph, are supposed to be the same across runs. Maybe we can also validate how deterministic the analyzer is.
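To make this concrete, here is a minimal sketch of one way the aggregation could work, assuming each run produces a flat dict of statistics. The key names (`runtime_s`, `peak_rss_kib`, `bugs_found`, `exploded_graph_nodes`) and the split between averaged and invariant statistics are illustrative, not the harness's actual schema; checking the invariant statistics for discrepancies is sketched further down the thread.

```python
from statistics import mean

# Illustrative split; the real statistic names depend on the harness.
AVERAGED_KEYS = {"runtime_s", "peak_rss_kib"}            # noisy: average these
INVARIANT_KEYS = {"bugs_found", "exploded_graph_nodes"}  # expected identical

def aggregate_runs(runs):
    """Average the noisy statistics over the repeated runs; invariant
    statistics are taken from the first run and validated separately."""
    aggregated = {key: mean(run[key] for run in runs) for key in AVERAGED_KEYS}
    aggregated.update({key: runs[0][key] for key in INVARIANT_KEYS})
    return aggregated
```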
> Yeah, I think this makes sense. One question is how to handle such a measurement. ... We do not want to simply average every statistic across runs.
I think we should be able to specify how many times we want to measure the projects (or maybe a fine-grained per-project option would be useful too).
IMO, for most statistics we need at least 3 values (min, average, max). But for some other values (e.g. runtime, peak resident memory usage) it could be useful to see percentiles as well. Maybe it would be a nice option to display these values as candlesticks in the charts.
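For the percentile/candlestick idea, a hedged sketch using only the standard library: `statistics.quantiles` (Python 3.8+) yields the quartile cut points, which together with min and max are enough to draw a box or candlestick glyph per statistic.

```python
from statistics import mean, quantiles

def summarize(samples):
    """Five-number summary plus the mean for a list of per-run samples,
    e.g. runtimes or peak RSS values from the repeated runs."""
    q1, median, q3 = quantiles(samples, n=4)  # quartile cut points
    return {"min": min(samples), "q1": q1, "median": median,
            "q3": q3, "max": max(samples), "mean": mean(samples)}

# e.g. summarize([412.0, 398.5, 430.2]) for three measured runtimes in seconds
```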
> Some statistics, like the number of bugs found or the number of nodes in the exploded graph, are supposed to be the same across runs.
Yes, that's right; we should report any discrepancies.
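One possible shape for that check, continuing the hypothetical dict-of-statistics layout from the sketch above; an empty result would also double as evidence that the analyzer behaved deterministically on that project.

```python
def report_discrepancies(runs, invariant_keys=("bugs_found", "exploded_graph_nodes")):
    """Collect every supposedly invariant statistic whose value varies
    across the repeated runs, so the report can flag non-determinism."""
    discrepancies = {}
    for key in invariant_keys:
        values = [run[key] for run in runs]
        if len(set(values)) != 1:
            discrepancies[key] = values
    return discrepancies  # empty dict means no discrepancy was observed
```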