Hi there 👋 I'm looking to add another Scala GraphQL library to the benchmarks. When I first looked at the current results, I was a bit surprised that Sangria (Scala) was performing so badly. After looking at the benchmarking script, it seems that the servers are not warmed up before being benchmarked.
The JVM has notoriously poor cold-start performance (see the related post here), so any GraphQL library running on the JVM (Java, Kotlin, Scala, etc.) will appear to perform very poorly if the server is not warmed up before request latencies are measured.
A very simple way to warm up the servers is to execute this line twice, discarding the results of the first invocation. Ideally, the first invocation should use a smaller number of connections to avoid overwhelming the server while it warms up. I believe this might also help resolve #217.
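Something along these lines would do it (just a sketch, assuming the runner is a bash script driving wrk; the URL, connection counts, durations, and script name below are placeholders, not the repo's actual values):

```bash
#!/usr/bin/env bash
# Sketch only: warm the server with a lighter load first, discard that
# output, then run the measured pass. All values below are illustrative.

GRAPHQL_URL="http://localhost:8000/graphql"   # placeholder endpoint
BENCH_SCRIPT="bench.lua"                      # placeholder for the existing wrk Lua script

# Warm-up pass: fewer connections, shorter duration, results thrown away.
wrk -t2 -c10 -d15s -s "$BENCH_SCRIPT" "$GRAPHQL_URL" > /dev/null

# Measured pass: the full load whose results are actually recorded.
wrk -t8 -c100 -d30s -s "$BENCH_SCRIPT" "$GRAPHQL_URL"
```

The exact warm-up numbers matter less than giving the JIT a chance to compile the hot paths before anything is recorded.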
I'm happy to create a PR if you agree with this approach.
> it doesn't really test GraphQL capabilities. Instead it tries to benchmark the webserver.
static-rust is a literal webserver benchmark, and the difference between it and async-graphql/juniper is over 2x, so it's far from "only" benchmarking webservers. It could certainly be improved, though (#190); note that this was a side project I hacked together in a weekend 😉