
[Bug] I-BNN tutorial benchmarks need updating #2637

Open
slishak-PX opened this issue Nov 21, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@slishak-PX
Contributor

🐛 Bug

In the documentation for BoTorch 0.11.3, I-BNN is shown to outperform standard GPs with Matern and RBF kernels:
https://botorch.org/v/0.11.3/tutorials/ibnn_bo

In the 0.12.0 documentation, the Matern kernel's performance increases substantially and dominates the others, contradicting the text:
https://botorch.org/v/0.12.0/tutorials/ibnn_bo

I initially assumed this was due to the dimension-dependent LogNormal prior, but it doesn't look like this is used in the tutorial.
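For context on the prior mentioned above: the dimension-dependent lengthscale prior that BoTorch adopted as a model default around 0.12 is (per Hvarfner et al., 2024) approximately LogNormal(loc = √2 + ln(d)/2, scale = √3), whose median lengthscale grows like √d. A minimal stdlib-only sketch of that scaling (illustrative; the exact constants are an assumption, check the BoTorch source for the shipped values):

```python
import math

# Hedged sketch: median of LogNormal(loc, scale) is exp(loc), so with
# loc = sqrt(2) + log(d) / 2 the median lengthscale grows like sqrt(d).
for d in (2, 20, 200):
    loc = math.sqrt(2) + math.log(d) / 2
    median = math.exp(loc)  # median lengthscale implied by the prior
    print(d, round(median, 2))
```

Longer lengthscales in high dimension make the default GP smoother, which is one reason benchmark rankings can shift between releases even when the tutorial code is unchanged.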

@slishak-PX slishak-PX added the bug Something isn't working label Nov 21, 2024
@saitcakmak
Contributor

Thanks for flagging. Looks like it was the switch to LogEI (#2499) that helped the Matern kernel. I generally dislike such comparisons in tutorials: we need tutorials to be runnable with little compute so that CI finishes in a reasonable time, and comparing multiple methods on a single replication just produces unreliable results.
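For readers wondering why #2499 could change the rankings so much: the LogEI family computes the log of EI with numerically stable expansions, so the acquisition stays informative far from the incumbent, where plain EI (and its gradients) underflow to exactly zero. A minimal stdlib-only illustration (not BoTorch's actual implementation; the tail formula below is the standard asymptotic z·Φ(z) + φ(z) ≈ φ(z)/z² for z ≪ 0):

```python
import math

def naive_ei(mean, best, sigma):
    # analytic EI for maximization: sigma * (z * Phi(z) + phi(z))
    z = (mean - best) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return sigma * (z * Phi + phi)

def log_ei_tail(mean, best, sigma):
    # asymptotic expansion for z << 0; finite (and differentiable)
    # exactly where naive EI has already underflowed to zero
    z = (mean - best) / sigma
    return (math.log(sigma) - 0.5 * z * z
            - 0.5 * math.log(2 * math.pi) - 2.0 * math.log(-z))

# a point 40 posterior standard deviations below the incumbent:
print(naive_ei(0.0, 40.0, 1.0))     # 0.0 -- underflows, gradient vanishes
print(log_ei_tail(0.0, 40.0, 1.0))  # ≈ -808.3 -- still informative
```

With plain EI the optimizer sees a flat zero landscape over most of the domain, so which kernel "wins" a single-replication benchmark can hinge on this numerical artifact rather than on the model.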

@Balandat
Contributor

Interesting. Impressive improvement from LogEI, but I agree with @saitcakmak that these performance comparisons aren't the best fit for the tutorials. If we do include them, we should set them up in a way that reliably produces results in a full (non-smoke-test) run. Everything else belongs in a paper or a more comprehensive benchmark.

cc @sdaulton, @SebastianAment

@slishak-PX
Contributor Author

I agree with the points about removing benchmarks from tutorials unless they're fully reproducible. However, given the diverse set of models in BoTorch that each claim to perform well at high-dimensional optimisation (SAASBO being another example), it would be great to tie them all together somewhere, whether in the docs or in a paper, and have that prominently displayed in the docs!
