This repository has been archived by the owner on Dec 18, 2023. It is now read-only.
v0.1.2
Full Changelog: v0.1.1...v0.1.2
New Features
- Supports accelerated inference for HMC and NUTS with functorch's Neural Network Compiler (NNC), which can be controlled by setting the `nnc_compile` flag when initializing an inference method (#1385) (Docs)
- Supports parallel sampling when the number of chains > 1, which can be controlled by setting the `run_in_parallel` flag when calling `infer` (#1369)
- Added a progress bar to `BMGInference` (#1321)
- The `MonteCarloSamples` object returned from inference now contains log likelihoods and observations (#1269)
- Reworked `bm.simulate`, which now also accepts a dictionary of posterior samples as input (#1474)
- Binary wheels for M1 Apple Silicon and Python 3.10 are included in the release (#1419, #1507)
Changes
- The default number of adaptive samples is now algorithm-specific. For most algorithms, the default is still 0, but for HMC and NUTS it is now half of the number of samples (i.e. `num_samples // 2`) (#1353)
- In `CompositionalInference`, the default algorithm for continuous latent variables is now NUTS (`GlobalNoUTurnSampler`) (#1407)
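The new adaptive-samples default can be summarized as a simple rule; a pure-Python sketch (the helper name is illustrative, not part of the Bean Machine API):

```python
# Illustrative helper (not a Bean Machine API): the number of adaptive
# (warmup) samples used when the caller does not pass num_adaptive_samples
# explicitly, per the v0.1.2 change (#1353).
def default_adaptive_samples(num_samples: int, algorithm: str) -> int:
    if algorithm in ("HMC", "NUTS"):
        return num_samples // 2  # adapt during the first half of the run
    return 0  # other algorithms keep the old default of no adaptation

print(default_adaptive_samples(1000, "NUTS"))   # → 500
print(default_adaptive_samples(1000, "HMC"))    # → 500
print(default_adaptive_samples(1000, "other"))  # → 0
```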
Fixes
- Resolved deprecation warnings to support PyTorch 1.11 (#1378) (Note: PyTorch 1.12 is also supported now)
Documentation
- Added a Bayesian structural time series tutorial (#1376) (link to tutorial)
- Used the experimental NNC compile feature in supported tutorials (#1408)
- Added MiniBM, a minimal and standalone implementation of Bean Machine in around a hundred lines of code (excluding comments) (#1415) (minibm.py)