
Specify multiple fio tests to be run sequentially #68

Open
AlexisDucastel opened this issue Apr 21, 2021 · 6 comments
Labels: enhancement (New feature or request), low priority

Comments


AlexisDucastel commented Apr 21, 2021

Hi there,

First of all, thanks for your work on this tool. I've written my own script for benchmarking storage on Kubernetes: ksb.

My results are very different from kubestr's, so I tried to find where the difference between our tests comes from.

With the kubestr CLI, I got between 734 and 849 IOPS across 3 different runs; here is the raw result of the best one:

> ./kubestr fio -s rook-ceph-block
PVC created kubestr-fio-pvc-g8gwx
Pod created kubestr-fio-pod-7tspk
Running FIO test (default-fio) on StorageClass (rook-ceph-block) with a PVC of Size (100Gi)
Elapsed time- 1m23.539474031s
FIO test results:

FIO version - fio-3.20
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=849.953735 BW(KiB/s)=3416
  iops: min=600 max=1053 avg=854.266663
  bw(KiB/s): min=2400 max=4215 avg=3418.100098

As we can see in the kubestr output, it used fio-3.20 with the following parameters (a roughly equivalent CLI invocation is sketched after the list):

  • ioengine=libaio
  • verify=0
  • direct=1
  • gtod_reduce=1
  • blocksize=4K
  • filesize=2G
  • iodepth=64
  • rw=randread
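
For reference, a single fio invocation with roughly these options would look like the sketch below; the target filename is an assumption, not something kubestr prints:

  fio --ioengine=libaio --verify=0 --direct=1 --gtod_reduce=1 \
      --name=read_iops --bs=4K --size=2G --iodepth=64 --rw=randread \
      --filename=/dataset/fiotest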

My Read IOPS benchmark uses exactly the same parameters, but adds --time_based --ramp_time=2s --runtime=15s. The results of 3 different runs range between 10.6k and 12.9k IOPS. Here is the raw result of one run:

read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
read_iops: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=0): [f(1)][100.0%][r=33.2MiB/s][r=8486 IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=29: Wed Apr 21 10:46:22 2021
  read: IOPS=12.9k, BW=50.4MiB/s (52.9MB/s)(757MiB/15012msec)
   bw (  KiB/s): min=21904, max=98400, per=100.00%, avg=52198.45, stdev=19676.11, samples=29
   iops        : min= 5476, max=24600, avg=13049.48, stdev=4919.07, samples=29
  cpu          : usr=2.58%, sys=6.54%, ctx=203273, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=193816,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=50.4MiB/s (52.9MB/s), 50.4MiB/s-50.4MiB/s (52.9MB/s-52.9MB/s), io=757MiB (794MB), run=15012-15012msec

Disk stats (read/write):
  rbd6: ios=212839/23, merge=0/3, ticks=769574/599, in_queue=567176, util=96.57%

Both Docker images are based on Alpine (ghcr.io/kastenhq/kubestr:latest and infrabuilder/iobench), so the gap does not come from a base OS difference. That said, my image ships fio-3.25 while yours contains fio-3.20. To rule out the image-difference hypothesis, I started a pod using your image (ghcr.io/kastenhq/kubestr:latest), mounting a PVC from the exact same StorageClass on /root and overriding the entrypoint with /bin/sh so I could exec commands in it. Here is the result of a manually launched fio command:

# fio --randrepeat=0 --verify=0 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=read_iops --filename=/root/fiotest --bs=4K --iodepth=64 --size=2G  --readwrite=randread
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.20
Starting 1 process
read_iops: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=60.2MiB/s][r=15.4k IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=35: Wed Apr 21 10:25:15 2021
  read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(2048MiB/48355msec)
   bw (  KiB/s): min=12665, max=99328, per=99.89%, avg=43319.69, stdev=21317.01, samples=96
   iops        : min= 3166, max=24832, avg=10829.68, stdev=5329.23, samples=96
  cpu          : usr=1.70%, sys=5.80%, ctx=555542, majf=0, minf=71
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=524288,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=2048MiB (2147MB), run=48355-48355msec

Disk stats (read/write):
  rbd5: ios=519717/145, merge=0/27, ticks=2344134/3451, in_queue=1808204, util=95.04%

Here the result is very similar to the one I got with my benchmark: 10.8k IOPS (I ran it multiple times; results range between 9729 and 10.8k IOPS).
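
For completeness, the throwaway pod described above can be created with something like the following sketch; the pod and PVC names are hypothetical, and the PVC must already exist on the same StorageClass:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: fio-manual                  # hypothetical name
spec:
  containers:
  - name: kubestr
    image: ghcr.io/kastenhq/kubestr:latest
    command: ["/bin/sh", "-c", "tail -f /dev/null"]   # override the entrypoint so the pod stays up
    volumeMounts:
    - name: data
      mountPath: /root              # PVC mounted where the fio test file is written
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: manual-test-pvc    # hypothetical PVC from the same StorageClass
EOF

Then kubectl exec -it fio-manual -- /bin/sh gives the shell in which the fio command above was run.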

So, to sum up:

  • kubestr default: 734 to 849 IOPS (stddev unknown)
  • ksb default: 10.6k to 12.9k IOPS (stddev 4919)
  • kubestr image with manual fio command: 9729 to 10.8k IOPS (stddev 5329)

In both my benchmark and the manual run inside your image, the stddev is very high (about 5k) because this is a cluster in use rather than an isolated lab dedicated to benchmarking, but even with this deviation there is still a large gap between the kubestr results and the raw fio results. I may test in an isolated lab, as I do for CNI benchmarks, but for now I lack the time :)

Can you explain why the kubestr and fio command results differ so much, even with the same image?

Thanks.

bathina2 (Contributor) commented:

Sometimes the results differ based on the size of the volume being used.
By default, kubestr creates a PV of size 100Gi and runs I/O from the pod to the PV.

Are you running fio against the Persistent Volume or against local storage?

AlexisDucastel (Author) commented:

Hi @bathina2, thanks for replying 😄

I just ran both benchmark tools on a lab-dedicated cluster to eliminate any side-workload pollution, using the same PVC size (100Gi).

There is still a difference between the kubestr results and the fio CLI results with the same parameters.

  test                      ksb            kubestr
  random_read_iops mean     ~37k IOPS      ~7k IOPS
  random_write_iops mean    7,728 IOPS     2,965 IOPS
  sequential_read_bw mean   2,395 MB/s     893 MB/s
  sequential_write_bw mean  1,252 MB/s     372 MB/s

(Table data source: the "3 runs summary" section right below.)

Maybe it comes from #21? Is the Go fio lib using the same default parameters as the fio CLI? Is there a bottleneck due to the Go implementation? (I'm not a Go dev, so I can't investigate that way.)

3 runs summary

Here is the summary of 3 different runs of both:

== ksb ====

  # RUN 1
  Random Read/Write IOPS: 40.1k/7725. BW: 1549MiB/s/ 761MiB/s
  Average Latency (usec) Read/Write: 297.39/5849.94
  Sequential Read/Write: 2301MiB/s / 1263MiB/s 
  Mixed Random Read/Write IOPS: 16.3k/5423

  # RUN 2
  Random Read/Write IOPS: 36.9k/7723. BW: 1598MiB/s/ 753MiB/s
  Average Latency (usec) Read/Write: 304.13/5728.55
  Sequential Read/Write: 2492MiB/s / 1264MiB/s
  Mixed Random Read/Write IOPS: 16.3k/5428

  # RUN 3
  Random Read/Write IOPS: 34.0k/7737. BW: 1601MiB/s/ 750MiB/s
  Average Latency (usec) Read/Write: 297.36/5898.81
  Sequential Read/Write: 2393MiB/s / 1231MiB/s
  Mixed Random Read/Write IOPS: 16.2k/5438

== kubestr ====

  # RUN 1
  read_iops : IOPS=7262.390137 BW(KiB/s)=29066
  write_iops : IOPS=3011.251709 BW(KiB/s)=12061
  read_bw : IOPS=6880.636719 BW(KiB/s)=881258
  write_bw : IOPS=2900.812744 BW(KiB/s)=371841

  # RUN 2
  read_iops : IOPS=6978.349121 BW(KiB/s)=27930
  write_iops : IOPS=2889.495361 BW(KiB/s)=11574
  read_bw : IOPS=6923.855957 BW(KiB/s)=886790
  write_bw : IOPS=2860.881836 BW(KiB/s)=366729

  # RUN 3
  read_iops : IOPS=7422.167480 BW(KiB/s)=29705
  write_iops : IOPS=2996.003906 BW(KiB/s)=12000
  read_bw : IOPS=7122.394531 BW(KiB/s)=912203
  write_bw : IOPS=2956.779541 BW(KiB/s)=379004

Details

Benchmark setup:

  • Cluster: 3-node Kubernetes cluster on k3s 1.20.6, Calico
  • Nodes (each): dual AMD EPYC 7262, 128 GB RAM, 2×1 TB NVMe Samsung 980 PRO, Intel XL710 40 Gbit/s NIC
  • OS: Ubuntu 20.04 with HWE kernel (5.8.0-50-generic)
  • Network: dedicated Supermicro 40 Gbit/s switch, nodes connected via passive copper QSFP+ DAC, length < 1 m
  • Rook 1.6, Ceph v15.2.10-20210318, replicapool replicated RF3, NVMe only, RWO storage class rook-ceph-block

Here is the detailed raw result with ksb (PVC 100Gi):

root@a11:/home/ubuntu# SIZE=100G ./ksb.sh rook-ceph-block
Working dir: /root

Testing Read IOPS...
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
read_iops: Laying out IO file (1 file / 2048MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=147MiB/s][r=37.7k IOPS][eta 00m:00s]
read_iops: (groupid=0, jobs=1): err= 0: pid=46: Tue Apr 27 10:46:00 2021
  read: IOPS=38.2k, BW=149MiB/s (157MB/s)(2240MiB/15003msec)
   bw (  KiB/s): min=122328, max=172168, per=100.00%, avg=152990.97, stdev=11808.14, samples=30
   iops        : min=30582, max=43042, avg=38247.67, stdev=2951.99, samples=30
  cpu          : usr=6.82%, sys=23.66%, ctx=312142, majf=0, minf=162
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=573456,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=2240MiB (2349MB), run=15003-15003msec

Disk stats (read/write):
  rbd0: ios=645294/141, merge=0/6, ticks=792122/597, in_queue=792718, util=99.51%


Testing Write IOPS...
write_iops: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=30.4MiB/s][w=7782 IOPS][eta 00m:00s]
write_iops: (groupid=0, jobs=1): err= 0: pid=86: Tue Apr 27 10:46:18 2021
  write: IOPS=7208, BW=28.2MiB/s (29.5MB/s)(423MiB/15015msec); 0 zone resets
   bw (  KiB/s): min=26328, max=31863, per=100.00%, avg=28867.60, stdev=1705.24, samples=30
   iops        : min= 6582, max= 7965, avg=7216.73, stdev=426.28, samples=30
  cpu          : usr=2.35%, sys=7.70%, ctx=89241, majf=0, minf=153
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,108240,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=28.2MiB/s (29.5MB/s), 28.2MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=423MiB (444MB), run=15015-15015msec

Disk stats (read/write):
  rbd0: ios=0/123851, merge=0/25, ticks=0/1033589, in_queue=1033588, util=99.46%


Testing Read Bandwidth...
read_bw: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=1605MiB/s][r=12.8k IOPS][eta 00m:00s]
read_bw: (groupid=0, jobs=1): err= 0: pid=126: Tue Apr 27 10:46:35 2021
  read: IOPS=12.9k, BW=1619MiB/s (1698MB/s)(23.7GiB/15008msec)
   bw (  MiB/s): min= 1504, max= 1744, per=100.00%, avg=1620.75, stdev=58.17, samples=30
   iops        : min=12038, max=13959, avg=12965.77, stdev=465.34, samples=30
  cpu          : usr=3.60%, sys=20.02%, ctx=172811, majf=0, minf=4175
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=194346,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=1619MiB/s (1698MB/s), 1619MiB/s-1619MiB/s (1698MB/s-1698MB/s), io=23.7GiB (25.5GB), run=15008-15008msec

Disk stats (read/write):
  rbd0: ios=221157/144, merge=0/8, ticks=979852/1116, in_queue=980967, util=99.54%


Testing Write Bandwidth...
write_bw: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=768MiB/s][w=6146 IOPS][eta 00m:00s]
write_bw: (groupid=0, jobs=1): err= 0: pid=166: Tue Apr 27 10:46:53 2021
  write: IOPS=6054, BW=757MiB/s (794MB/s)(11.1GiB/15016msec); 0 zone resets
   bw (  KiB/s): min=702976, max=813568, per=100.00%, avg=776050.23, stdev=23294.38, samples=30
   iops        : min= 5492, max= 6356, avg=6062.70, stdev=182.05, samples=30
  cpu          : usr=4.98%, sys=9.30%, ctx=54072, majf=0, minf=2221
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,90918,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=757MiB/s (794MB/s), 757MiB/s-757MiB/s (794MB/s-794MB/s), io=11.1GiB (11.9GB), run=15016-15016msec

Disk stats (read/write):
  rbd0: ios=0/102094, merge=0/8, ticks=0/1021047, in_queue=1021048, util=99.49%


Testing Read Latency...
read_latency: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=50.9MiB/s][r=13.0k IOPS][eta 00m:00s]
read_latency: (groupid=0, jobs=1): err= 0: pid=206: Tue Apr 27 10:47:11 2021
  read: IOPS=12.9k, BW=50.5MiB/s (53.0MB/s)(758MiB/15001msec)
    slat (nsec): min=1780, max=324313, avg=8248.55, stdev=9018.16
    clat (usec): min=73, max=12560, avg=299.40, stdev=121.16
     lat (usec): min=182, max=12567, avg=307.81, stdev=120.98
    clat percentiles (usec):
     |  1.00th=[  210],  5.00th=[  233], 10.00th=[  245], 20.00th=[  260],
     | 30.00th=[  269], 40.00th=[  281], 50.00th=[  289], 60.00th=[  302],
     | 70.00th=[  314], 80.00th=[  330], 90.00th=[  355], 95.00th=[  383],
     | 99.00th=[  469], 99.50th=[  586], 99.90th=[ 1012], 99.95th=[ 1270],
     | 99.99th=[ 5735]
   bw (  KiB/s): min=48745, max=53555, per=99.90%, avg=51709.52, stdev=1168.71, samples=29
   iops        : min=12186, max=13388, avg=12927.31, stdev=292.16, samples=29
  lat (usec)   : 100=0.01%, 250=13.68%, 500=85.58%, 750=0.35%, 1000=0.27%
  lat (msec)   : 2=0.08%, 4=0.02%, 10=0.01%, 20=0.01%
  cpu          : usr=3.82%, sys=13.11%, ctx=168210, majf=0, minf=113
  IO depths    : 1=0.0%, 2=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=194119,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=50.5MiB/s (53.0MB/s), 50.5MiB/s-50.5MiB/s (53.0MB/s-53.0MB/s), io=758MiB (795MB), run=15001-15001msec

Disk stats (read/write):
  rbd0: ios=220153/146, merge=0/7, ticks=65215/533, in_queue=65748, util=99.51%


Testing Write Latency...
write_latency: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=2790KiB/s][w=697 IOPS][eta 00m:00s]
write_latency: (groupid=0, jobs=1): err= 0: pid=246: Tue Apr 27 10:47:28 2021
  write: IOPS=661, BW=2648KiB/s (2712kB/s)(38.8MiB/15006msec); 0 zone resets
    slat (nsec): min=1920, max=3907.7k, avg=36237.70, stdev=63117.02
    clat (usec): min=4316, max=17046, avg=6003.29, stdev=975.97
     lat (usec): min=4334, max=17054, avg=6039.90, stdev=977.57
    clat percentiles (usec):
     |  1.00th=[ 4752],  5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5473],
     | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5735],
     | 70.00th=[ 5866], 80.00th=[ 6390], 90.00th=[ 7308], 95.00th=[ 8455],
     | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[12125], 99.95th=[12911],
     | 99.99th=[17171]
   bw (  KiB/s): min= 2512, max= 2845, per=100.00%, avg=2649.33, stdev=112.86, samples=30
   iops        : min=  628, max=  711, avg=662.27, stdev=28.22, samples=30
  lat (msec)   : 10=99.58%, 20=0.45%
  cpu          : usr=0.37%, sys=1.35%, ctx=11898, majf=0, minf=58
  IO depths    : 1=0.0%, 2=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,9931,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=2648KiB/s (2712kB/s), 2648KiB/s-2648KiB/s (2712kB/s-2712kB/s), io=38.8MiB (40.7MB), run=15006-15006msec

Disk stats (read/write):
  rbd0: ios=0/11361, merge=0/6, ticks=0/67718, in_queue=67717, util=99.49%


Testing Read Sequential Speed...
read_seq: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
...
fio-3.25
Starting 4 threads
read_seq: Laying out IO file (1 file / 3548MiB)
Jobs: 4 (f=4): [R(4)][100.0%][r=2035MiB/s][r=2035 IOPS][eta 00m:00s]
read_seq: (groupid=0, jobs=1): err= 0: pid=286: Tue Apr 27 10:47:51 2021
  read: IOPS=575, BW=576MiB/s (604MB/s)(8689MiB/15085msec)
   bw (  KiB/s): min=382976, max=808960, per=24.95%, avg=590270.97, stdev=106351.44, samples=30
   iops        : min=  374, max=  790, avg=576.30, stdev=103.81, samples=30
  cpu          : usr=0.22%, sys=3.79%, ctx=8762, majf=0, minf=4101
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=8674,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
read_seq: (groupid=0, jobs=1): err= 0: pid=287: Tue Apr 27 10:47:51 2021
  read: IOPS=579, BW=581MiB/s (609MB/s)(8769MiB/15101msec)
   bw (  KiB/s): min=362496, max=776192, per=25.12%, avg=594193.93, stdev=111274.39, samples=30
   iops        : min=  354, max=  758, avg=580.07, stdev=108.68, samples=30
  cpu          : usr=0.19%, sys=3.72%, ctx=8817, majf=0, minf=4103
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=8754,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
read_seq: (groupid=0, jobs=1): err= 0: pid=288: Tue Apr 27 10:47:51 2021
  read: IOPS=581, BW=582MiB/s (611MB/s)(8798MiB/15104msec)
   bw (  KiB/s): min=411648, max=897277, per=25.24%, avg=596932.17, stdev=117668.12, samples=30
   iops        : min=  402, max=  876, avg=582.93, stdev=114.89, samples=30
  cpu          : usr=0.21%, sys=3.85%, ctx=8898, majf=0, minf=4106
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=8783,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
read_seq: (groupid=0, jobs=1): err= 0: pid=289: Tue Apr 27 10:47:51 2021
  read: IOPS=573, BW=574MiB/s (602MB/s)(8633MiB/15036msec)
   bw (  KiB/s): min=389120, max=743424, per=24.87%, avg=588279.20, stdev=92137.85, samples=30
   iops        : min=  380, max=  726, avg=574.30, stdev=90.00, samples=30
  cpu          : usr=0.26%, sys=3.63%, ctx=8998, majf=0, minf=4101
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=8618,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=2310MiB/s (2422MB/s), 574MiB/s-582MiB/s (602MB/s-611MB/s), io=34.1GiB (36.6GB), run=15036-15104msec

Disk stats (read/write):
  rbd0: ios=37598/2, merge=0/1, ticks=1039918/95, in_queue=1040013, util=95.50%


Testing Write Sequential Speed...
write_seq: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
...
fio-3.25
Starting 4 threads
Jobs: 4 (f=4): [W(4)][100.0%][w=1188MiB/s][w=1188 IOPS][eta 00m:00s]
write_seq: (groupid=0, jobs=1): err= 0: pid=329: Tue Apr 27 10:48:09 2021
  write: IOPS=289, BW=291MiB/s (305MB/s)(4432MiB/15249msec); 0 zone resets
   bw (  KiB/s): min=133386, max=385024, per=25.58%, avg=298442.87, stdev=61328.54, samples=30
   iops        : min=  130, max=  376, avg=291.33, stdev=59.93, samples=30
  cpu          : usr=1.97%, sys=1.99%, ctx=3453, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,4417,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
write_seq: (groupid=0, jobs=1): err= 0: pid=330: Tue Apr 27 10:48:09 2021
  write: IOPS=286, BW=287MiB/s (301MB/s)(4374MiB/15231msec); 0 zone resets
   bw (  KiB/s): min=104448, max=376832, per=25.25%, avg=294606.67, stdev=66400.06, samples=30
   iops        : min=  102, max=  368, avg=287.60, stdev=64.80, samples=30
  cpu          : usr=2.02%, sys=1.90%, ctx=3238, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,4359,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
write_seq: (groupid=0, jobs=1): err= 0: pid=331: Tue Apr 27 10:48:09 2021
  write: IOPS=281, BW=282MiB/s (296MB/s)(4297MiB/15234msec); 0 zone resets
   bw (  KiB/s): min=116736, max=366592, per=24.91%, avg=290595.97, stdev=59872.16, samples=30
   iops        : min=  114, max=  358, avg=283.70, stdev=58.52, samples=30
  cpu          : usr=1.84%, sys=2.07%, ctx=3097, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,4282,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
write_seq: (groupid=0, jobs=1): err= 0: pid=332: Tue Apr 27 10:48:09 2021
  write: IOPS=279, BW=281MiB/s (294MB/s)(4269MiB/15218msec); 0 zone resets
   bw (  KiB/s): min=106496, max=357066, per=24.69%, avg=288026.73, stdev=57796.18, samples=30
   iops        : min=  104, max=  348, avg=281.20, stdev=56.42, samples=30
  cpu          : usr=1.85%, sys=1.95%, ctx=3080, majf=0, minf=0
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,4254,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=1139MiB/s (1195MB/s), 281MiB/s-291MiB/s (294MB/s-305MB/s), io=16.0GiB (18.2GB), run=15218-15249msec

Disk stats (read/write):
  rbd0: ios=0/19434, merge=0/3, ticks=0/968627, in_queue=968627, util=96.12%


Testing Read/Write Mixed...
rw_mix: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=60.1MiB/s,w=20.1MiB/s][r=15.4k,w=5142 IOPS][eta 00m:00s]
rw_mix: (groupid=0, jobs=1): err= 0: pid=372: Tue Apr 27 10:48:26 2021
  read: IOPS=16.5k, BW=64.3MiB/s (67.5MB/s)(966MiB/15009msec)
   bw (  KiB/s): min=61098, max=70044, per=100.00%, avg=65950.90, stdev=2498.48, samples=30
   iops        : min=15274, max=17511, avg=16487.70, stdev=624.55, samples=30
  write: IOPS=5520, BW=21.6MiB/s (22.6MB/s)(324MiB/15009msec); 0 zone resets
   bw (  KiB/s): min=20576, max=23270, per=100.00%, avg=22109.20, stdev=737.84, samples=30
   iops        : min= 5144, max= 5817, avg=5527.13, stdev=184.48, samples=30
  cpu          : usr=5.64%, sys=17.62%, ctx=223689, majf=0, minf=158
  IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=247198,82860,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=64.3MiB/s (67.5MB/s), 64.3MiB/s-64.3MiB/s (67.5MB/s-67.5MB/s), io=966MiB (1013MB), run=15009-15009msec
  WRITE: bw=21.6MiB/s (22.6MB/s), 21.6MiB/s-21.6MiB/s (22.6MB/s-22.6MB/s), io=324MiB (340MB), run=15009-15009msec

Disk stats (read/write):
  rbd0: ios=281213/94227, merge=0/3, ticks=271927/781055, in_queue=1052982, util=99.50%


All tests complete.

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 38.2k/7208. BW: 1619MiB/s/ 757MiB/s
Average Latency (usec) Read/Write: 307.81/6039.90
Sequential Read/Write: 2310MiB/s / 1139MiB/s
Mixed Random Read/Write IOPS: 16.5k/5520

And a detailed run of kubestr on the very same cluster:

root@a11:/home/ubuntu# ./kubestr fio -s rook-ceph-block
PVC created kubestr-fio-pvc-6g57t
Pod created kubestr-fio-pod-8vg7d
Running FIO test (default-fio) on StorageClass (rook-ceph-block) with a PVC of Size (100Gi)
Elapsed time- 23.788700232s
FIO test results:

FIO version - fio-3.20
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=7593.140137 BW(KiB/s)=30389
  iops: min=5487 max=8832 avg=7575.275879
  bw(KiB/s): min=21948 max=35328 avg=30301.517578

JobName: write_iops
  blocksize=4K filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=3047.327393 BW(KiB/s)=12206
  iops: min=2748 max=3367 avg=3042.758545
  bw(KiB/s): min=10992 max=13469 avg=12171.517578

JobName: read_bw
  blocksize=128K filesize=2G iodepth=64 rw=randread
read:
  IOPS=7096.563477 BW(KiB/s)=908897
  iops: min=6133 max=7996 avg=7098.965332
  bw(KiB/s): min=785117 max=1023488 avg=908686.375000

JobName: write_bw
  blocksize=128k filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=2983.352295 BW(KiB/s)=382406
  iops: min=2626 max=3291 avg=2976.689697
  bw(KiB/s): min=336223 max=421301 avg=381031.781250

Disk stats (read/write):
  rbd0: ios=250119/102559 merge=0/1067 ticks=2022985/1744046 in_queue=3767031, util=99.583794%
  -  OK


bathina2 commented May 4, 2021

@AlexisDucastel I ran kubestr with your image, using ./kubestr fio -i infrabuilder/iobench -s do-block-storage, and got the same results. So you are right: it has nothing to do with the image.

I do suspect it depends on how the fio command is being invoked. In your script you call the fio command multiple times with different configurations. However, kubestr invokes it once, using an fio job file with multiple jobs:
  [global]
  randrepeat=0
  verify=0
  ioengine=libaio
  direct=1
  gtod_reduce=1

  [job1]
  name=read_iops
  bs=4K
  iodepth=64
  size=2G
  readwrite=randread
  time_based
  ramp_time=2s
  runtime=15s

  [job2]
  name=write_iops
  bs=4K
  iodepth=64
  size=2G
  readwrite=randwrite
  time_based
  ramp_time=2s
  runtime=15s

  [job3]
  name=read_bw
  bs=128K
  iodepth=64
  size=2G
  readwrite=randread
  time_based
  ramp_time=2s
  runtime=15s

  [job4]
  name=write_bw
  bs=128k
  iodepth=64
  size=2G
  readwrite=randwrite
  time_based
  ramp_time=2s
  runtime=15s

I may have wrongly assumed these jobs are serialized. Can you try with this fio file and see if you get similar results?
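
For anyone following along, a custom job file like the one above can be handed to kubestr with the -f flag (the path here is arbitrary):

  ./kubestr fio -s rook-ceph-block -f /tmp/multijob.fio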


bathina2 commented May 4, 2021

@AlexisDucastel Yep, that was it. I ran it with a single job, ./kubestr fio -s do-block-storage -f /tmp/fiojob, where /tmp/fiojob is:

  [global]
  randrepeat=0
  verify=0
  ioengine=libaio
  direct=1
  gtod_reduce=1

  [job1]
  name=read_iops
  bs=4K
  iodepth=64
  size=2G
  readwrite=randread
  time_based
  ramp_time=2s
  runtime=15s

and I see the results you are seeing.

bathina2 changed the title from "Benchmark results differs with kubestr and fio with same parameters" to "Specify multiple fio tests to be run sequentially" on Oct 18, 2021
bathina2 (Contributor) commented:

This may be a low-priority fix. It can already be done with the existing CLI. However, if this were to be supported, we would need to find a way to better collect the results.
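
A minimal sketch of that existing workaround, assuming one job file per test (the file names below are hypothetical):

  for job in read_iops write_iops read_bw write_bw; do
    ./kubestr fio -s rook-ceph-block -f "/tmp/${job}.fio"
  done

Since each invocation creates its own PVC and pod, the jobs can never run concurrently; the trade-off is collecting the results from four separate outputs.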

bathina2 added the enhancement (New feature or request) and low priority labels on Oct 18, 2021
jason-matthew commented:

Adding wait_for_previous to your fio [job] definitions is an option too:
https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-stonewall
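
As a sketch of that suggestion, the multi-job file quoted earlier could be serialized by gating each job on the previous one (wait_for_previous and stonewall are equivalent per the linked docs). Only the first two jobs are shown here; the bandwidth jobs would follow the same pattern:

  [global]
  randrepeat=0
  verify=0
  ioengine=libaio
  direct=1
  gtod_reduce=1

  [job1]
  name=read_iops
  bs=4K
  iodepth=64
  size=2G
  readwrite=randread
  time_based
  ramp_time=2s
  runtime=15s

  [job2]
  ; wait until job1 has exited before starting this job
  wait_for_previous
  name=write_iops
  bs=4K
  iodepth=64
  size=2G
  readwrite=randwrite
  time_based
  ramp_time=2s
  runtime=15s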
