Add workflow for longitudinal benchmarking (IntersectMBO#5205)
* New benchmark workflow

* alert threshold

* upd

* Addressed some comments

* WIP

* Applied requested changes
zeme-wana authored and v0d1ch committed Dec 6, 2024
1 parent 1f73baf commit 8ead840
Showing 2 changed files with 78 additions and 0 deletions.
55 changes: 55 additions & 0 deletions .github/workflows/longitudinal-benchmark.yml
@@ -0,0 +1,55 @@
# Longitudinal Benchmarks
#
# This workflow will run the benchmarks defined in the environment variable BENCHMARKS.
# It will collect and aggregate the benchmark output, format it, and feed it to github-action-benchmark.
#
# The benchmark charts are live at https://input-output-hk.github.io/plutus/dev/bench
# The benchmark data is available at https://input-output-hk.github.io/plutus/dev/bench/data.js

name: Longitudinal Benchmarks

on:
  push:
    branches:
      - master

permissions:
  # Deployments permission to deploy GitHub pages website
  deployments: write
  # Contents permission to update benchmark contents in gh-pages branch
  contents: write

jobs:
  longitudinal-benchmarks:
    name: Performance regression check
    runs-on: [self-hosted, plutus-benchmark]
    steps:
      - uses: actions/[email protected]

      - name: Run benchmarks
        env:
          BENCHMARKS: "validation validation-decode"
        run: |
          for bench in $BENCHMARKS; do
            2>&1 cabal run "$bench" | tee "$bench-output.txt"
          done
          python ./scripts/format-benchmark-output.py

      - name: Store benchmark result
        uses: benchmark-action/[email protected]
        with:
          name: Plutus Benchmarks
          tool: 'customSmallerIsBetter'
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Push and deploy GitHub pages branch automatically
          auto-push: true
          # Enable alert commit comment
          comment-on-alert: true
          # Mention @input-output-hk/plutus-core in the commit comment
          alert-comment-cc-users: '@input-output-hk/plutus-core'
          # Alert threshold: the ratio of the current result to the previous one,
          # expressed as a percentage. For example, if we now get 110 ns/iter and
          # previously got 100 ns/iter, the ratio is 110%, which exceeds this 105%
          # threshold and triggers an alert.
          alert-threshold: '105%'
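
For reference, the 'customSmallerIsBetter' tool of github-action-benchmark consumes a JSON array of entries with "name", "unit", and "value" fields; that is the shape scripts/format-benchmark-output.py (the second file in this commit) writes to output.json. A minimal sketch of such a file, with illustrative benchmark names and timings rather than real results:

  [
    {"name": "validation-crowdfunding/1", "unit": "ms", "value": 1.179},
    {"name": "validation-decode-crowdfunding/1", "unit": "ms", "value": 0.532}
  ]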
23 changes: 23 additions & 0 deletions scripts/format-benchmark-output.py
@@ -0,0 +1,23 @@
import json
import os

result = []

# Parse the captured output for each benchmark listed in BENCHMARKS.
for benchmark in os.getenv("BENCHMARKS").split():
    with open(f"{benchmark}-output.txt", "r") as file:
        name = ""
        for line in file.readlines():
            # "benchmarking <case>" introduces the next benchmark case.
            if line.startswith("benchmarking"):
                name = line.split()[1]
            # "mean <value> <unit> ..." carries the statistic we chart.
            elif line.startswith("mean"):
                parts = line.split()
                mean = parts[1]
                unit = parts[2]
                result.append({
                    "name": f"{benchmark}-{name}",
                    "unit": unit,
                    "value": float(mean)
                })

# Emit the aggregated results in the format expected by github-action-benchmark.
with open("output.json", "w") as file:
    json.dump(result, file)
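
For context, the script assumes criterion-style output in each "$bench-output.txt" file: a "benchmarking <case>" header followed, a few lines later, by a "mean <value> <unit> ..." line. A minimal sketch of the mapping, using an illustrative (not real) excerpt of validation-output.txt:

  benchmarking crowdfunding/1
  time                 1.166 ms   (1.158 ms .. 1.175 ms)
  mean                 1.179 ms   (1.171 ms .. 1.188 ms)

This excerpt would yield the output.json entry {"name": "validation-crowdfunding/1", "unit": "ms", "value": 1.179}.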
