Gamma optimizations and benchmarks #69
Conversation
benchmark/benchmark_runner.cljc (outdated)
```clojure
(->> (ns-publics (find-ns (symbol namespace)))
     (keep (fn [[sym var]] (when (:benchmark (meta var)) sym)))))

(defn run-benchmarks
```
I think you can do this with our normal test runner, without adding this whole namespace: https://github.com/cognitect-labs/test-runner. See the include and exclude options: https://github.com/cognitect-labs/test-runner?tab=readme-ov-file#invoke-with-clojure--m-clojuremain
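For reference, a rough sketch of what that could look like in `deps.edn`; the alias name and `:extra-paths` here are assumptions, though the git coordinates match the ones that appear later in this PR:

```clojure
;; hypothetical deps.edn alias: pull in cognitect test-runner and select
;; only tests tagged with ^:benchmark metadata via the include option
{:aliases
 {:benchmark
  {:extra-paths ["benchmark"]
   :extra-deps  {io.github.cognitect-labs/test-runner
                 {:git/tag "v0.5.1" :git/sha "dfb30dd"}
                 criterium/criterium {:mvn/version "0.4.6"}}
   :main-opts   ["-m" "cognitect.test-runner" "-i" ":benchmark"]}}}
```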
Yeah, good call. I swapped in test runner—much better.
```clojure
(:require [gen.distribution.math.gamma :as g]
          [criterium.core :as crit]))

(defn ^:benchmark lanczos-approximation-computation-time
```
you should be able to use `deftest` and then the test selectors I mentioned above
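A hedged sketch of that shape; the namespace and argument value are made up, and the point is just a plain `deftest` carrying the metadata tag:

```clojure
(ns gamma-benchmark  ; hypothetical namespace
  (:require [clojure.test :refer [deftest]]
            [criterium.core :as crit]
            [gen.distribution.math.gamma :as g]))

;; a normal clojure.test deftest, tagged so the runner's selectors
;; (:includes / -i) can opt it in without running it by default
(deftest ^:benchmark lanczos-approximation-computation-time
  (crit/quick-bench (g/lanczos-approximation 9.0)))
```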
Yep, done.
```clojure
    (crit/quick-bench
     (dotimes [_ n]
       (g/inv-gamma-1pm1 x))))]
  (mapv f x)))
```
for side effects, use `run!` instead of `mapv`
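The difference in a nutshell, since the return value is thrown away anyway:

```clojure
;; mapv realizes every effect but also builds a vector of nils:
(mapv println [1 2 3])  ;=> [nil nil nil]

;; run! reduces purely for side effects and returns nil:
(run! println [1 2 3])  ;=> nil
```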
Nice, done.
```diff
 (Math/log (* 2 Math/PI)))

-(def ^:no-doc sqrt-2pi
+(def ^:no-doc ^:const sqrt-2pi
```
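Roughly why `^:const` matters here: the compiler inlines the var's value at each use site, so hot code skips the var dereference on every call. A sketch with an assumed def body and a made-up caller:

```clojure
;; body assumed for illustration; only the metadata is the point
(def ^:no-doc ^:const sqrt-2pi
  (Math/sqrt (* 2.0 Math/PI)))

;; hypothetical caller: sqrt-2pi compiles to a literal double here,
;; instead of a var lookup on every invocation
(defn gaussian-norm ^double [^double sigma]
  (/ 1.0 (* sigma sqrt-2pi)))
```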
wild, I have some annotations I need to add in Emmy!
So good!
@sritchie I addressed the comments, and benchmarks now ride on the test runner, boom. I'm assuming that when a benchmark test runs, we want it to fail if it diverges from a baseline by more than some threshold. To define a baseline, it seems reasonable to have something like this:

```clojure
(def ^:no-doc ^:const lanczos-approximation-baseline
  [{:n 15
    :x 9
    :mean 3.127368933269932E-6
    :variance 1.5780434242573877E-14
    :lower-q 2.984515105404977E-6
    :upper-q 3.320172412426692E-6}
   {:n 15
    :x 100
    :mean 2.0160584455226908E-7
    :variance 7.928671640576774E-18
    :lower-q 1.9907849968797706E-7
    :upper-q 2.0489675953499433E-7}
   {:n 15
    :x 170
    :mean 2.040524636004644E-7
    :variance 5.795520133553684E-18
    :lower-q 2.0120967339244923E-7
    :upper-q 2.0719692107157572E-7}])
```

and then the actual benchmark test just maps over entries in the baseline and runs:

```clojure
(deftest ^:benchmark lanczos-approximation-benchmark
  (let [f (fn [b]
            (let [{:keys [n x mean variance lower-q upper-q]} b
                  stats (crit/quick-benchmark
                         (dotimes [_ n]
                           (g/lanczos-approximation x)) {})]
              (testing "against lanczos approximation baseline"
                (with-comparator (within 1e-3)
                  (is (ish? mean (first (:mean stats))) x)
                  (is (ish? variance (first (:variance stats))) x)
                  (is (ish? lower-q (first (:lower-q stats))) x)
                  (is (ish? upper-q (first (:upper-q stats))) x)))))]
    (run! f lanczos-approximation-baseline)))
```

This is implemented right now, so you can run `bb benchmark:clj` to see it work.
```clojure
       (g/log-gamma x))))]
    (run! f x)))

(deftest ^:benchmark gamma-benchmark
```
does this metadata do anything? how are we selecting these tests? Looks like now the way we select is by adding the `benchmark` path to `:extra-paths`... which is fine, but I don't see what the meta adds.
yep, since `:includes` (in `bb.edn`) is the collection of test metadata keywords to include, only functions with the `:benchmark` metadata will get run by test-runner.
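For anyone wiring this up, a hedged sketch of the `bb.edn` side; the task name comes from this thread, but the task body is assumed rather than copied from the repo:

```clojure
{:tasks
 {benchmark:clj
  ;; shells out to the alias whose :exec-args carry :includes/:patterns
  {:doc  "Run ^:benchmark-tagged tests via cognitect test-runner"
   :task (shell "clojure -X:benchmark")}}}
```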
```clojure
x [0.1 1.5 2.5 8.0 10.0]
f (fn [x]
    (println "log-gamma: " x)
    (crit/quick-bench
```
bummer that criterium can't take a label!
```clojure
        {:git/tag "v0.5.1" :git/sha "dfb30dd"}
        criterium/criterium {:mvn/version "0.4.6"}}
:main-opts ["-m" "cognitect.test-runner"]
:exec-args {:includes [:benchmark] :dirs ["benchmark"] :patterns [".*-benchmark$"]}
```
okay I see here that we specify the test name pattern... but what does the metadata do?
yeah good q! so from test-runner, `:patterns` is the set of regexes for test namespaces to run and `:includes` is the collection of test metadata keywords to include. so, this says "find all namespaces whose names end in `-benchmark`, and in those run any function with the `:benchmark` metadata."
one comment...
@sritchie yep, I responded. lmk if that explanation about `:patterns` and `:includes` makes sense.
💥
Turns out that pre-computing constants and type-hinting functions yields a big performance gain:

- `(gamma 2.5)`: 77.40% speedup (execution mean 4.889986 µs → 1.104972 µs)
- `(log-gamma 10)`: 63.92% speedup (execution mean 4.845715 µs → 1.748155 µs)
- `(log-gamma-1p -0.05)`: 76.86% speedup (execution mean 4.981068 µs → 1.152443 µs)
- `(inv-gamma-1pm1 -0.05)`: 77.85% speedup (execution mean 4.582663 µs → 1.015241 µs)
- `(lanczos-approximation 170)`: 73.15% speedup (execution mean 4.276010 µs → 1.148002 µs)

Did this for `math.gamma` and `math.utils`. Also experimenting with a way to define and run benchmarks like tests: benchmarks are defined in the `/benchmark` directory, then functions to benchmark are tagged with `^:benchmark` and profiled with Criterium. Just run `bb benchmark:clj` to compute benchmarks! See `gamma_benchmark.cljc` for an example.
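As a rough illustration of the two tricks together; this is a toy function for the sketch, not the library's actual code:

```clojure
;; pre-computed constant, inlined at use sites via ^:const
(def ^:const log-2pi (Math/log (* 2.0 Math/PI)))

;; primitive type hints (^double) keep the JVM on unboxed doubles and
;; avoid reflection; toy Stirling-style approximation for illustration
(defn log-gamma-sketch ^double [^double x]
  (+ (* (- x 0.5) (Math/log x))
     (- x)
     (* 0.5 log-2pi)))
```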