postsd much slower than postmean #134
Comments
@william-denault Could you please share your benchmarking results?
[microbenchmark output posted here; only the header row survives: expr, min, lq, mean, median, uq, max, neval]
Yes, indeed it is about 2–3x slower. I don't see that as being a big problem, do you? Were there other settings where the difference in runtime was much larger?
I was profiling fsusie, which calls these two functions heavily at every iteration (2^S × number of covariates calls per loop), so I thought this could be a place to gain speed. (I will have a look at the counterparts of postsd and postmean in ebnm, as Matthew suggested to me.)
Okay, let us know what you find out.
I have run some benchmark comparisons, and ebnm is actually about 1.5x slower than ash when computing posterior quantities. I see the same pattern (the posterior sd computation being slower than the posterior mean). Furthermore, it seems that running ash with outputlevel = 0 is almost as fast as ebnm with prior_family = "normal_scale_mixture". To be fair, ebnm uses more mixture components than ash.
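A minimal sketch of the kind of comparison described above; the exact benchmark code was not posted, so the data, sample size, and number of repetitions are assumptions. It relies on ebnm's `prior_family` argument and ash's `outputlevel` argument:

```r
library(ashr)
library(ebnm)
library(microbenchmark)

set.seed(1)
Bhat <- rnorm(10000)
Shat <- runif(10000)

microbenchmark(
  # outputlevel = 0 asks ash for minimal output (essentially just the
  # fitted prior g), skipping the posterior summaries.
  ash  = ash(Bhat, Shat, mixcompdist = "normal", outputlevel = 0),
  # ebnm computes posterior means and sds by default.
  ebnm = ebnm(Bhat, Shat, prior_family = "normal_scale_mixture"),
  times = 10
)
```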
That's helpful to know, thanks William!
Hello,
I was profiling some code and noticed that postsd is much slower than postmean (for a normal mixture prior), which seems strange given how these quantities are computed.
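To spell out why: with observations $\hat\beta_j \sim N(\beta_j, s_j^2)$ and a zero-centered normal mixture prior $\sum_k \pi_k\, N(0, \tau_k^2)$, the posterior is again a normal mixture with per-component variances $v_{jk} = (1/s_j^2 + 1/\tau_k^2)^{-1}$, means $\mu_{jk} = v_{jk}\hat\beta_j/s_j^2$, and weights $w_{jk} \propto \pi_k\, N(\hat\beta_j;\, 0,\, s_j^2 + \tau_k^2)$, so

$$
\mathrm{postmean}_j = \sum_k w_{jk}\,\mu_{jk},
\qquad
\mathrm{postsd}_j = \sqrt{\sum_k w_{jk}\left(v_{jk} + \mu_{jk}^2\right) - \mathrm{postmean}_j^2},
$$

i.e., the sd requires only one additional weighted sum beyond the mean.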
Please find a minimal example below:
```r
library(ashr)
library(microbenchmark)

# Fit an adaptive shrinkage model with a normal mixture prior.
Bhat <- rnorm(10000)
Shat <- runif(10000)
out <- ash(Bhat, Shat, mixcompdist = "normal")

# Fresh data on which to evaluate the posterior quantities.
Bhat <- rnorm(10000)
Shat <- runif(10000)
m <- set_data(Bhat, Shat)

microbenchmark(
  postmean = postmean(get_fitted_g(out), m),
  postsd   = postsd(get_fitted_g(out), m)
)
```
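For reference, here is a minimal sketch of computing both quantities directly. This is not ashr's internal implementation; it assumes the fitted g is a zero-centered normal mixture (a normalmix object with fields `pi` and `sd`, which is what `ash(..., mixcompdist = "normal")` returns). It illustrates that postsd needs only one extra weighted sum on top of postmean:

```r
library(ashr)

post_moments <- function(g, b, s) {
  n  <- length(b)
  K  <- length(g$pi)
  S2 <- matrix(s^2, n, K)                   # observation variances, n x K
  T2 <- matrix(g$sd^2, n, K, byrow = TRUE)  # prior component variances
  B  <- matrix(b, n, K)

  V  <- 1 / (1 / S2 + 1 / T2)  # per-component posterior variance v_jk
                               # (the point-mass component tau_k = 0 gives V = 0)
  MU <- V * B / S2             # per-component posterior mean mu_jk

  # Posterior component weights w_jk proportional to pi_k * N(b_j; 0, s_j^2 + tau_k^2).
  W <- matrix(g$pi, n, K, byrow = TRUE) * dnorm(B, 0, sqrt(S2 + T2))
  W <- W / rowSums(W)

  pm  <- rowSums(W * MU)          # posterior mean
  pm2 <- rowSums(W * (V + MU^2))  # posterior second moment
  list(mean = pm, sd = sqrt(pmax(pm2 - pm^2, 0)))
}

# Continuing from the example above; these should match postmean()/postsd()
# up to numerical error:
mom <- post_moments(get_fitted_g(out), Bhat, Shat)
# range(mom$mean - postmean(get_fitted_g(out), m))
# range(mom$sd  - postsd(get_fitted_g(out), m))
```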