Releases: cadet/CADET-Match
PyMoo 0.6 release
Arviz dependency
This release adds a dependency on Arviz and cleans up a few minor things.
PyMOO update
This release updates CADETMatch to use the current version of pymoo.
Dependencies updates
This release updates the dependency specifications for pymoo and attrs so that the correct versions are installed.
KDE prior and MLE calculation
The MLE is now calculated by taking the highest-probability sample on the MCMC chain.
The prior calculation has been moved into a separate process from the MLE calculation.
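As an illustration of the MLE change, here is a minimal sketch assuming an emcee-style ensemble sampler (the names and settings are illustrative, not CADETMatch's internal API):

```python
import numpy as np
import emcee

# Toy log-probability: a standard normal in 2D stands in for the real posterior.
def log_prob(theta):
    return -0.5 * np.sum(theta ** 2)

ndim, nwalkers, nsteps = 2, 16, 500
start = np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(start, nsteps, progress=False)

flat_chain = sampler.get_chain(flat=True)    # (nwalkers * nsteps, ndim) samples
flat_logp = sampler.get_log_prob(flat=True)  # matching log-probabilities

# The MLE is taken as the sample with the highest probability on the chain.
mle = flat_chain[np.argmax(flat_logp)]
print("MLE estimate:", mle)
```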
auto_keq fix and MCMC fixes
auto_keq now works with indexed parameters.
There are also many MCMC fixes. numpy.percentile has been replaced with arviz.hdi, which handles asymmetric distributions much better. Generation of the KDE error model no longer removes outliers; due to how the sampling is done, they are not actually outliers.
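A small hedged example of the difference, with synthetic log-normal samples standing in for a real posterior: the equal-tailed interval from numpy.percentile and the highest density interval from arviz.hdi can differ noticeably when the distribution is asymmetric.

```python
import numpy as np
import arviz as az

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.75, size=20_000)  # asymmetric posterior stand-in

equal_tailed = np.percentile(samples, [2.5, 97.5])  # old approach: equal-tailed interval
hdi = az.hdi(samples, hdi_prob=0.95)                # new approach: narrowest 95% interval

print("equal-tailed 95% interval:", equal_tailed)
print("95% HDI:                  ", hdi)
```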
Smoothing
Based on work in my thesis, the smoothing code has been updated. The normalized root mean squared error is used to set a minimum value that the smoothing cannot go below. This is 1e-4 by default, which testing found to be a good value. Effectively, values smaller than 1e-4 * the peak maximum are smoothed out of the signal. This prevents a lot of noise that was sometimes left in when the L-point criterion indicated that values as low as 1e-7 * the peak maximum could be kept.
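A minimal sketch of the floor, assuming a spline-based smoother where the smoothing factor s relates to the allowed RMSE via s = n * rmse**2 (the function and signal below are illustrative, not the actual CADETMatch code):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def floored_smoothing_factor(values, candidate_rmse, min_factor=1e-4):
    """Clamp the allowed RMSE to min_factor * peak max and return the spline factor s."""
    rmse = max(candidate_rmse, min_factor * np.max(values))
    return len(values) * rmse ** 2

# Toy chromatogram: a Gaussian peak plus low-amplitude noise.
times = np.linspace(0, 100, 2000)
values = np.exp(-0.5 * ((times - 50.0) / 3.0) ** 2) + 1e-5 * np.random.randn(times.size)

# Pretend the L-point search suggested keeping features down to 1e-7 * peak max;
# the 1e-4 floor overrides that and smooths them out instead.
s = floored_smoothing_factor(values, candidate_rmse=1e-7 * values.max())
smoothed = UnivariateSpline(times, values, s=s, k=3)(times)
```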
This release also includes various other small fixes.
PyMOO
DEAP has been replaced with PyMOO, and a lot of code has been removed as a result. PyMOO implements a more refined version of NSGA3 that works much better. This version should converge faster and get closer to the optimum without needing a gradient step; so far in testing it has sped up overall performance.
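For reference, a generic pymoo NSGA3 run looks like the sketch below, using pymoo's built-in DTLZ2 test problem and the reference-direction settings from the pymoo documentation; this is not the CADETMatch setup itself.

```python
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.util.ref_dirs import get_reference_directions
from pymoo.problems import get_problem
from pymoo.optimize import minimize

# Reference directions drive NSGA3's selection; 12 partitions on 3 objectives
# gives 91 directions, so a population size in that neighborhood is typical.
ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
algorithm = NSGA3(pop_size=92, ref_dirs=ref_dirs)

res = minimize(
    get_problem("dtlz2"),        # stand-in for a CADET parameter-estimation problem
    algorithm,
    termination=("n_gen", 200),
    seed=1,
    verbose=False,
)
print("non-dominated objective values:", res.F.shape)
```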
Documentation has been updated.
minor fix in print_version
print_version was reporting the wrong version of the CADET-Python library.
Bessel filtering and minor fixes
Smoothing has changed from Butterworth filters to Bessel filters, based on testing and a ringing problem that Butterworth filters showed with sharp pulses. The resampling process for very densely sampled datasets, or datasets with inconsistent time steps, is also cleaner.
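A brief hedged illustration of the switch with scipy (the filter order and cutoff are made up for the example, not the values CADETMatch chooses): a Bessel low-pass has a maximally flat group delay, so a sharp pulse passes through without the ringing a comparable Butterworth filter can introduce.

```python
import numpy as np
from scipy import signal

fs = 10.0      # samples per second, assuming a uniform grid after resampling
cutoff = 0.5   # Hz, hypothetical cutoff frequency

# Fourth-order Bessel low-pass in second-order sections for numerical stability.
sos = signal.bessel(4, cutoff, btype="low", fs=fs, output="sos", norm="phase")

t = np.arange(0, 100, 1 / fs)
pulse = np.where((t > 40) & (t < 45), 1.0, 0.0)  # sharp rectangular test pulse

# Zero-phase filtering so the smoothed pulse is not shifted in time.
smoothed = signal.sosfiltfilt(sos, pulse)
```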