This suite consists of Loki benchmark tests for multiple scenarios. Each scenario asserts recorded measurements against a selected profile from the config directory:
Write benchmarks:
- High Volume Writes: Measure `CPU`, `MEM` and `QPS`, `p99`, `p50` and `avg` request duration for all 2xx write requests to all Loki distributor and ingester pods.
Read benchmarks:
- High Volume Reads: Measure `QPS`, `p99`, `p50` and `avg` request duration for all 2xx read requests to all Loki query-frontend, querier and ingester pods.
- High Volume Aggregate: Measure `QPS`, `p99`, `p50` and `avg` request duration for all 2xx read requests to all Loki query-frontend, querier and ingester pods.
- Dashboard queries: Measure `QPS`, `p99`, `p50` and `avg` request duration for all 2xx read requests to all Loki query-frontend, querier and ingester pods.
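The read-path numbers above are typically derived from Loki's request-duration histogram. A hedged PromQL sketch of how such values can be computed (the metric name `loki_request_duration_seconds` matches Loki's standard instrumentation, but the exact label set this suite queries is an assumption):

```
# QPS of successful (2xx) read requests, per pod (labels assumed)
sum(rate(loki_request_duration_seconds_count{status_code=~"2.."}[1m])) by (pod)

# p99 request duration over the same window
histogram_quantile(0.99,
  sum(rate(loki_request_duration_seconds_bucket{status_code=~"2.."}[1m])) by (le))
```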
- Software: `gnuplot`
  Note: Install on a Linux environment, e.g. on Fedora using: `sudo dnf install gnuplot`
- Required software: `kubectl`
- Repositories:
  - Observatorium
  - Optional: Cadvisor

Note: Clone the git repositories into sibling directories to the loki-benchmarks one.
Note: Cadvisor is only required if measuring CPU and memory of the container. In addition, change the value of the `enableCadvisorMetrics` key in the configuration to `true`. It is `false` by default.
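For example, enabling Cadvisor metrics could look like the following configuration fragment (only the `enableCadvisorMetrics` key comes from this document; the surrounding file structure is illustrative):

```yaml
# Benchmark configuration fragment (sketch).
# Only enableCadvisorMetrics is documented here; it defaults to false.
enableCadvisorMetrics: true
```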
- Configure the parameters (`config/loki-parameters`) and deploy Loki & configure Prometheus: `make deploy-obs-loki`
- Run the benchmarks: `make bench-dev`
- Required software: `oc`, `aws`
- Cluster size: `m4.16xlarge`
- Configure benchmark parameters: `config/loki-parameters`
- Create an S3 bucket: `make deploy-s3-bucket`
- Deploy Prometheus: `make deploy-ocp-prometheus`
- Download the Loki observatorium template locally: `make download-obs-loki-template`
- Deploy Loki: `make deploy-ocp-loki`
- Run the benchmarks: `make ocp-run-benchmarks`

Note: For additional details and all-in-one commands use: `make help`
Upon benchmark execution completion, results are available in the `reports/<date+time>` folder.
Uninstall using: `make ocp-all-cleanup`.
- Declare a new scenario with expected measurement values for each profile in the config directory.
- Extend the golang `Scenarios` struct in `internal/config/config.go` with the new scenario.
- Add a new `_test.go` file in the benchmarks directory.
- When using `cluster-logging-load-client` as logger, the `command` configuration parameter is either `generate` or `query`, and all other `args` configuration parameters are described in https://github.com/ViaQ/cluster-logging-load-client
- Overriding `url` and `tenant` requires that the logger implementation provides such named CLI flags.
$ make bench-dev
Example output:
Running Suite: Benchmarks Suite
===============================
Random Seed: 1597237201
Will run 1 of 1 specs
• [MEASUREMENT]
Scenario: High Volume Writes
/home/username/dev/loki-benchmarks/benchmarks/high_volume_writes_test.go:18
should result in measurements of p99, p50 and avg for all successful write requests to the distributor
/home/username/dev/loki-benchmarks/benchmarks/high_volume_writes_test.go:32
Ran 10 samples:
All distributor 2xx Writes p99:
Smallest: 0.087
Largest: 0.096
Average: 0.092 ± 0.003
All distributor 2xx Writes p50:
Smallest: 0.003
Largest: 0.003
Average: 0.003 ± 0.000
All distributor 2xx Writes avg:
Smallest: 0.370
Largest: 0.594
Average: 0.498 ± 0.085
------------------------------
On each run a new time-based report directory is created under the reports directory. Each report includes:
- A summary `README.md` with all benchmark measurements.
- A CSV file for each specific measurement.
- A GNUPlot file for each specific measurement to transform the data into a PNG graph.
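For reference, such a GNUPlot file is typically a small script along these lines (a hedged sketch using the file names from the example report below; the actual generated scripts may differ):

```gnuplot
# Sketch: plot a measurement CSV into a PNG
set datafile separator ','
set terminal png
set output 'All-distributor-2xx-Writes-p99.gnuplot.png'
plot 'All-distributor-2xx-Writes-p99.csv' using 1:2 with linespoints title 'p99'
```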
Example output:
reports
└── 2020-08-12-10-33-31
    ├── All-distributor-2xx-Writes-avg.csv
    ├── All-distributor-2xx-Writes-avg.gnuplot
    ├── All-distributor-2xx-Writes-avg.gnuplot.png
    ├── All-distributor-2xx-Writes-p50.csv
    ├── All-distributor-2xx-Writes-p50.gnuplot
    ├── All-distributor-2xx-Writes-p50.gnuplot.png
    ├── All-distributor-2xx-Writes-p99.csv
    ├── All-distributor-2xx-Writes-p99.gnuplot
    ├── All-distributor-2xx-Writes-p99.gnuplot.png
    ├── junit.xml
    └── README.md
During benchmark execution, use `hack/ocp-deploy-grafana.sh` to deploy Grafana and connect to Loki as a datasource:
- Use a web browser to access the Grafana UI. The URL, username and password are printed by the script.
- In the UI, under Settings -> Data sources, hit `Save & test` to verify that the Loki data source is connected and that there are no errors.
- In the Explore tab, change the data source to `Loki` and use the `{client="promtail"}` query to visualize log lines.
- Use additional queries such as `rate({client="promtail"}[1m])` to verify the behaviour of Loki and the benchmark.
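The script wires Loki in as a Grafana data source; done by hand, a minimal provisioning file would look roughly like this (the file format is Grafana's standard data-source provisioning, but the service name and port in `url` are assumptions for your cluster):

```yaml
# Grafana data-source provisioning sketch; point url at your
# Loki query-frontend service (name and port here are assumptions).
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://observatorium-loki-query-frontend:3100
```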