Specify multiple fio tests to be run sequentially #68
Comments
Sometimes the results differ based on the size of the volume being used. Are you running fio against the Persistent Volume or against local storage?
Hi @bathina2, thanks for replying 😄 I just ran both benchmark tools on a lab-dedicated cluster to eliminate any side-workload pollution, using the same size PVC (100GB). There is a difference between the kubestr results and the fio CLI results with the same parameters (table data source in the "3 runs summary" section right under).
Maybe it comes from #21? Is the Go fio lib using the same default parameters as the fio CLI? Is there a bottleneck due to the Go implementation? (I'm not a Go dev, so I can't investigate that way.)
3 runs summary
Here is the summary of 3 different runs of both:
Details
Benchmark setup:
Here is the detailed raw result with ksb (PVC 100Gi):
And a detailed run of kubestr on the very same cluster:
@AlexisDucastel I ran kubestr using your image. I do suspect it depends on how the fio command is being invoked. In your script you are calling the fio command multiple times with different configurations. However, kubestr is calling it using an fio file with multiple jobs; I may have assumed these jobs are serialized. Can you try with this fio file and see if you have similar results?
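(The fio file referenced in that comment is not preserved in this thread. As an illustration only, here is a minimal sketch of the difference being described: jobs in a single fio file start concurrently unless a job sets `stonewall`, which makes it wait for the previous jobs to finish and so serializes the runs. The ioengine, size, and `/dataset` path below are placeholder assumptions, not taken from kubestr's actual job file.)

```sh
# Illustration only: the job file actually attached to this comment is not
# preserved here. Paths and sizes (/dataset, 1G) are placeholder assumptions.
cat > serialized-jobs.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=64
size=1G
directory=/dataset

[read_iops]
rw=randread

[write_iops]
; stonewall makes this job wait until the previous job has finished,
; so the two jobs run one after the other instead of concurrently
stonewall
rw=randwrite
EOF

fio serialized-jobs.fio
```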
@AlexisDucastel Yep, that was it. I ran it with a single job.
This may be a low-priority fix. It can already be done with the existing CLI. However, if this were to be supported, we would need to find a way to better collect the results.
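A sketch of what "can already be done with the existing CLI" might look like, assuming kubestr accepts a custom job file via a `-f`/`--fiofile` flag (check `kubestr fio --help` on your version for the exact flag name); the storage class and job file names below are placeholders:

```sh
# Hypothetical workaround: run one kubestr invocation per fio job file so the
# tests execute sequentially; merging the per-run results is left to the
# caller, which is the "better collect the results" gap mentioned above.
for job in read_iops.fio write_iops.fio read_bw.fio write_bw.fio; do
  kubestr fio -s my-storageclass -f "$job" > "results-${job%.fio}.txt"
done
```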
Hi there,
First of all, thanks for your work on this tool. I've written my own script for benchmarking storage on Kubernetes: ksb.
My results are very different from kubestr's, so I tried to track down where the difference between our tests comes from.
With the kubestr CLI, I got between 734 and 849 IOPS in 3 different runs; here is the raw result of the best one:
As we can see in the kubestr output, it used fio-3.20 with these parameters:
My read IOPS benchmark uses exactly the same parameters, but adds the `--time_based --ramp_time=2s --runtime=15s` parameters. Results of 3 different runs are between 10.6k and 12.9k IOPS. Here is the raw result of one run:
Both docker images are based on Alpine (`ghcr.io/kastenhq/kubestr:latest` and `infrabuilder/iobench`), so the difference does not come from the base OS. That said, my image ships fio-3.25 while yours contains fio-3.20. To eliminate the image-difference hypothesis, I started a pod mounting a PVC from the exact same storageClass on `/root`, using your image (`ghcr.io/kastenhq/kubestr:latest`) and overriding the entrypoint with `/bin/sh` so I could exec commands in it. Here is the result of a manually launched fio command:
Here the result is very similar to the one I got in my benchmark: 10.8k IOPS (I've run it multiple times; results are between 9729 and 10.8k IOPS).
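For reference, a sketch of that manual reproduction, assuming a throwaway pod that mounts a PVC from the same StorageClass and an fio invocation in the spirit of the flags described above; the pod name, PVC name, size, and ioengine are placeholders, not the exact setup from this report:

```sh
# Hypothetical debug pod: same kubestr image, entrypoint overridden with a
# long sleep so we can exec into it; the PVC name "my-pvc" is a placeholder.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fio-debug
spec:
  containers:
  - name: fio
    image: ghcr.io/kastenhq/kubestr:latest
    command: ["/bin/sh", "-c", "sleep 86400"]
    volumeMounts:
    - name: data
      mountPath: /root
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
EOF

# Once the pod is Running, launch a single read-IOPS job by hand. This is the
# general shape of a 4k random-read test plus the --time_based/--ramp_time/
# --runtime flags mentioned above, not the exact parameter set from the report.
kubectl exec -it fio-debug -- fio --name=read_iops --directory=/root \
  --rw=randread --bs=4k --iodepth=64 --direct=1 --ioengine=libaio \
  --size=1G --time_based --ramp_time=2s --runtime=15s
```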
So to sum up:
In my benchmark and in your image with the manual command, the stddev is very high (about 5k) because this is a cluster in use rather than an isolated lab dedicated to the benchmark, but even with this deviation there is still a large gap between the kubestr results and the fio results. I may test in an isolated lab, as I do for CNI benchmarks, but for now I lack the time :)
Can you explain why the kubestr and fio command results differ so much, even with the same image?
Thanks.