
postsd much slower than postmean #134

Open
william-denault opened this issue Aug 25, 2022 · 7 comments

Comments

@william-denault

Hello,

I was profiling some code, and I noticed that postsd was much slower than postmean (for normal mixture), which seems strange, given how these quantities are computed.
Please find a minimal example below:

```r
library(ashr)
library(microbenchmark)

Bhat <- rnorm(10000)
Shat <- runif(10000)
out <- ash(Bhat, Shat, mixcompdist = "normal")

Bhat <- rnorm(10000)
Shat <- runif(10000)
m <- set_data(Bhat, Shat)

microbenchmark(
  postmean = postmean(get_fitted_g(out), m),
  postsd   = postsd(get_fitted_g(out), m)
)
```

@pcarbo
Collaborator

pcarbo commented Aug 25, 2022

@william-denault Could you please share your benchmarking results?

@william-denault
Author

```
     expr      min       lq     mean   median       uq      max neval
 postmean 102.7708 113.6098 134.5917 117.8156 130.2786 294.1863   100
   postsd 225.2152 238.8804 277.1865 252.2738 325.9447 420.9524   100
```

@pcarbo
Collaborator

pcarbo commented Aug 25, 2022

Yes, indeed it is about 2–3x slower. I don't see that as being a big problem, do you? Were there other settings where the difference in runtime was much larger?

@william-denault
Author

william-denault commented Aug 25, 2022

I was profiling fsusie, which calls these two functions many times in every loop (2^S × number of covariates), so I thought this could be a place to gain speed. (I will have a look at the counterparts of postsd and postmean in ebnm, as Matthew suggested to me.)

@pcarbo
Collaborator

pcarbo commented Aug 25, 2022

Okay, let us know what you find out.

@william-denault
Author

I have run some benchmark comparisons, and ebnm is actually about 1.5x slower than ash when computing posterior quantities. I see the same pattern (the posterior SD computation being slower than the posterior mean). Furthermore, it seems that running ash with outputlevel = 0 is almost as fast as ebnm with prior_family = "normal_scale_mixture". To be fair, ebnm uses more mixture components than ash.
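A minimal sketch of the comparison described above, assuming the ebnm package's `ebnm(x, s, prior_family = ...)` interface; exact argument names may differ across ebnm versions, and the timings will of course vary by machine:

```r
# Hedged sketch: compare ash with outputlevel = 0 against ebnm with a
# normal scale mixture prior on the same simulated data. Assumes the
# ashr and ebnm packages are installed.
library(ashr)
library(ebnm)
library(microbenchmark)

set.seed(1)
Bhat <- rnorm(10000)
Shat <- runif(10000)

microbenchmark(
  # outputlevel = 0 skips computing posterior summaries in ash
  ash_fast = ash(Bhat, Shat, mixcompdist = "normal", outputlevel = 0),
  ebnm_nsm = ebnm(Bhat, Shat, prior_family = "normal_scale_mixture"),
  times = 10
)
```

Note that the two fits are not perfectly comparable, since (as mentioned above) ebnm's default grid may use more mixture components than ash's.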

@pcarbo
Collaborator

pcarbo commented Aug 29, 2022

That's helpful to know, thanks William!
