Occam-ML-Lab/bayesian_benchmarks
Bayesian Benchmarks


This is a set of tools for evaluating Bayesian models, together with benchmark implementations and results.

Motivations:

  • There is a lack of standardized tasks that meaningfully assess the quality of uncertainty quantification for Bayesian black-box models.
  • Variations between tasks in the literature make a direct comparison between methods difficult.
  • Implementing competing methods takes considerable effort, and there is little incentive to do a good job.
  • Published papers may not always provide complete details of implementations due to space considerations.

Aims:

  • Curate a set of benchmarks that meaningfully compare the efficacy of Bayesian models in real-world tasks.
  • Maintain a fair assessment of benchmark methods, with full implementations and results.

Tasks:

  • Classification and regression
  • Density estimation (real-world and synthetic) (TODO)
  • Active learning
  • Adversarial robustness (TODO)
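For the classification and regression tasks, the quantity that actually probes uncertainty quantification is the predictive log-likelihood on held-out data, which penalizes overconfident wrong predictions in a way plain accuracy or RMSE cannot. A minimal sketch of the idea for regression with a Gaussian predictive density (the function name and interface are illustrative, not this repository's API):

```python
import numpy as np

def regression_metrics(y_test, mean, var):
    """Average test log-likelihood and RMSE under a Gaussian predictive density.

    mean, var: per-point predictive mean and variance (illustrative interface).
    """
    # Gaussian log density of each test point under its predictive distribution
    log_lik = -0.5 * (np.log(2 * np.pi * var) + (y_test - mean) ** 2 / var)
    rmse = np.sqrt(np.mean((y_test - mean) ** 2))
    return float(np.mean(log_lik)), float(rmse)

# Same point predictions, same RMSE, but the overconfident model
# (tiny variance) gets a much worse log-likelihood:
y = np.array([0.0, 1.0])
mean = np.array([0.5, 0.5])
ll_wide, rmse = regression_metrics(y, mean, var=np.array([1.0, 1.0]))
ll_narrow, _ = regression_metrics(y, mean, var=np.array([0.01, 0.01]))
```

This is why the log-likelihood, rather than accuracy alone, is the natural headline metric for benchmarks of this kind.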

Current implementations:

See the models folder for instructions on adding new models.
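The idea behind a pluggable models folder is that every method sits behind the same fit/predict interface, so the benchmark harness can treat them interchangeably. A sketch of what such a wrapper might look like, using a ridge-regularized linear baseline as a stand-in (the class name, method signatures, and the `(mean, var)` return convention are assumptions here, not the repository's documented API):

```python
import numpy as np

class RegressionModel:
    """Illustrative model wrapper: a linear Gaussian baseline behind a
    generic fit/predict interface (names are assumptions)."""

    def fit(self, X, Y):
        # Ridge-regularized least squares in place of a full Bayesian fit
        XtX = X.T @ X + 1e-6 * np.eye(X.shape[1])
        self.w = np.linalg.solve(XtX, X.T @ Y)
        # Estimate a single homoscedastic noise level from the residuals
        resid = Y - X @ self.w
        self.noise_var = float(np.mean(resid ** 2)) + 1e-6

    def predict(self, Xs):
        # Predictive mean and (constant) variance per test point,
        # matching the interface the metrics code would consume
        mean = Xs @ self.w
        var = np.full_like(mean, self.noise_var)
        return mean, var

# Usage: fit on noiseless linear data, predictions should recover it
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X @ np.array([[1.0], [2.0], [0.0]])
model = RegressionModel()
model.fit(X, Y)
mean, var = model.predict(X[:5])
```

Keeping the interface this small is what makes a fair side-by-side comparison feasible: the harness owns data splits and metrics, and each model only owns its own training and prediction.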

Coming soon:

About

A community repository for benchmarking Bayesian methods
