
# Bayesian Benchmarks

This is a set of tools for evaluating Bayesian models, together with benchmark implementations and results.

Motivations:

  • There is a lack of standardized tasks that meaningfully assess the quality of uncertainty quantification for Bayesian black-box models.
  • Variations between tasks in the literature make a direct comparison between methods difficult.
  • Implementing competing methods takes considerable effort, and there is little incentive to do so carefully.
  • Published papers may not always provide complete details of implementations due to space considerations.

Aims:

  • Curate a set of benchmarks that meaningfully compare the efficacy of Bayesian models in real-world tasks.
  • Maintain a fair assessment of benchmark methods, with full implementations and results.

Tasks:

  • Classification and regression (a sketch of a typical uncertainty metric follows this list)
  • Density estimation (real-world and synthetic) (TODO)
  • Active learning
  • Adversarial robustness (TODO)
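
For the regression-style tasks, the quality of uncertainty quantification is commonly summarized by the average log-likelihood of held-out targets under the model's predictive distribution. The snippet below is a minimal sketch of that metric, assuming a Gaussian predictive density; the function name is hypothetical and not part of this repository.

```python
import numpy as np
from scipy.stats import norm

def avg_test_log_likelihood(mean, var, Y_test):
    """Hypothetical helper: average log density of held-out targets
    under a Gaussian predictive distribution (higher is better)."""
    return np.mean(norm.logpdf(Y_test, loc=mean, scale=np.sqrt(var)))
```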

Current implementations:

See the models folder for instructions on adding new models.
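
As a rough illustration of what adding a model involves, a new regression model is wrapped in a small class that the benchmark scripts can fit and then query for predictive means and variances. The class and method names below are assumptions made for this sketch; the template in the models folder defines the exact interface the benchmarks expect.

```python
import numpy as np

class MyRegressionModel:
    """Hypothetical wrapper around a Bayesian regression model.

    The fit/predict interface shown here is an assumption for
    illustration; consult the template in the models folder for the
    exact signatures the benchmarks expect.
    """

    def fit(self, X: np.ndarray, Y: np.ndarray) -> None:
        # Train on inputs X (N x D) and targets Y (N x 1).
        # Here a trivial "model" that memorizes the target moments.
        self.mean = Y.mean()
        self.var = Y.var()

    def predict(self, Xs: np.ndarray) -> tuple:
        # Return the mean and variance of the Gaussian predictive
        # density at the test inputs Xs, one row per test point.
        n = Xs.shape[0]
        return np.full((n, 1), self.mean), np.full((n, 1), self.var)
```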

Coming soon: