Additional thought: add test runs on "interesting" hardware, such as Apple M1 or Raspberry Pi.
-
As a data point informing an answer to question 3 above: I just ran the same (aarch64) Docker image on two different AWS VM types, t4g.large and t4g.nano. The runtime difference for the large set of performance tests was 9 seconds. Compared to the complete runtime of 6h26min, that is less than 0.04% --> moving profiling runs to "nano" (virtual) machine instances is reasonable and possible without a negative impact on accuracy. Add/edit: No single measurement deviates by more than 2%, except for some McEliece tests (keygen) -- which is expected.
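For concreteness, the arithmetic behind the <0.04% claim (nothing assumed beyond the figures quoted above):

```python
# Back-of-the-envelope check of the figures quoted above.
total_runtime_s = 6 * 3600 + 26 * 60   # complete runtime: 6h26min = 23160 s
delta_s = 9                            # observed t4g.large vs t4g.nano difference

relative_delta = delta_s / total_runtime_s
print(f"{relative_delta:.4%}")         # 0.0389% -> indeed below 0.04%
```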
-
Data points pertaining to question 5 above:
[comparison of two benchmark result sets]
--> This speaks very much for using only OQS common code (as is already the default for SHA3) for profiling. Anything speaking against that, @dstebila @christianpaquin @bhess ?
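If helpful, a minimal sketch of what a common-code-only profiling build could look like, assuming liboqs's OQS_USE_OPENSSL CMake option is the relevant switch (the wrapper function and paths below are hypothetical, not our current CI code):

```python
import subprocess

def configure_liboqs_common_only(src_dir: str, build_dir: str) -> None:
    """Configure a liboqs build that uses only OQS common code
    (no OpenSSL-backed primitives). Hypothetical helper; assumes
    OQS_USE_OPENSSL is the CMake switch controlling this."""
    subprocess.run(
        ["cmake", "-S", src_dir, "-B", build_dir,
         "-DOQS_USE_OPENSSL=OFF",       # assumption: forces OQS common code
         "-DCMAKE_BUILD_TYPE=Release"],
        check=True,
    )

# e.g. configure_liboqs_common_only("liboqs", "liboqs/build")
```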
-
We currently run nightly profiling jobs for the x86-64 and arm64 platforms and publish the results at https://openquantumsafe.org/benchmarking.
Now, it is clearly neither eco-/CO2-friendly nor economically sensible to run profiling more often than necessary, particularly when the frequency of new code merges is low.
This discussion is therefore about how to improve things.
Some initial suggestions:

- Profile using only liboqs "common" code (as already done for SHA3)?