Updated Likelihood Interface #29
Credit to @bbardoczy for the suggestion. I still need to figure out whether the user should be allowed to pass a numpy array within a nested dictionary (e.g. the parameters of ARMA(p,q) shocks).
I cleaned up the estimation notebook considerably. Right now, I limit the estimation to shock parameters to eliminate the need to recompute the Jacobian (that is a much bigger issue). I also added a random-walk Metropolis-Hastings (MH) algorithm at the bottom of the notebook. It seems finicky since the current log likelihood function references globals. This is definitely something I plan on addressing later this week.
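For reference, a minimal sketch of a random-walk MH step that takes the log likelihood as an explicit argument rather than referencing globals (the function and argument names here are illustrative, not the notebook's actual code):

```python
import numpy as np

def rwmh(loglike, theta0, n_draws=5000, step_scale=0.1, seed=0):
    """Random-walk Metropolis-Hastings over a flat parameter vector.

    loglike : callable mapping a parameter vector to a log likelihood value
    theta0  : starting parameter vector
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = loglike(theta)
    draws = np.empty((n_draws, theta.size))
    accepted = 0

    for i in range(n_draws):
        proposal = theta + step_scale * rng.standard_normal(theta.size)
        ll_prop = loglike(proposal)
        # Accept with probability min(1, exp(ll_prop - ll))
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
            accepted += 1
        draws[i] = theta

    return draws, accepted / n_draws
```

Passing `loglike` explicitly keeps the sampler reusable and testable; the same pattern works whether the likelihood comes from the SSJ machinery or a toy model.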
PyMC Integration

I added another notebook to try interfacing with PyMC. In my demo here the sampler takes about 8 minutes and is significantly slower than the simple/stupid sampler in my other notebook. This is likely because of the "black box" inference of the modeling language.

Why Use a PPL?

The advantage of using a PPL (probabilistic programming language) like PyMC is that the sampling machinery comes ready-made. I'm not too familiar with the Bayesian landscape in Python, so definitely let me know if there is anything more robust.
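For context, a sketch of the usual way to hand a black-box log likelihood to PyMC, by wrapping it in a custom Op. This assumes a recent PyMC built on PyTensor (at the time of this PR the notebook may have used PyMC3/Theano, where the same pattern applies with `theano.tensor` imports); `ssj_loglike` and the two-parameter setup are placeholders, not the notebook's actual code:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt
from pytensor.graph import Apply, Op


def ssj_loglike(theta):
    """Placeholder for the SSJ-based log likelihood of a parameter vector."""
    return -0.5 * np.sum(theta ** 2)  # stand-in so the sketch runs


class LogLike(Op):
    """Wrap a pure-Python log likelihood so PyMC can treat it as a graph node."""

    def make_node(self, theta):
        theta = pt.as_tensor_variable(theta)
        return Apply(self, [theta], [pt.dscalar()])

    def perform(self, node, inputs, output_storage):
        (theta,) = inputs
        output_storage[0][0] = np.asarray(ssj_loglike(theta), dtype=np.float64)


loglike_op = LogLike()

with pm.Model():
    theta = pm.Normal("theta", mu=0.0, sigma=1.0, shape=2)  # priors on shock params
    pm.Potential("loglike", loglike_op(theta))
    # Gradient-free step methods are needed since the Op defines no gradient
    idata = pm.sample(draws=500, tune=500, step=pm.Slice())
```

Because the likelihood is opaque to PyMC, gradient-based samplers like NUTS are unavailable, which is consistent with the slowdown noted above.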
@bbardoczy I figured out the problem with parameter inference in PyMC. Apparently each draw gets allocated to a zero-dimensional numpy array rather than a scalar. This causes AccumulatedDerivative to error when multiplying, even though we are still technically multiplying by a scalar. I can't change PyMC's behavior since it's technically not doing anything wrong, and in most cases this is helpful for sampling efficiency. The problem is, I don't want to change simple_displacement.py since it may override expected behavior. I may need a little help understanding exactly what the reasoning behind this design choice was.
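To illustrate the distinction (this is plain numpy behavior, not code from the PR):

```python
import numbers
import numpy as np

x = np.array(0.9)                    # what PyMC hands over: a zero-dimensional array
print(x.ndim, x.shape)               # 0 ()
print(np.isscalar(x))                # False -- fails a strict scalar check
print(isinstance(x, numbers.Real))   # False -- also fails an isinstance check
print(float(x), x.item())            # 0.9 0.9 -- both recover a true Python scalar
```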
I see. My guess is that we could safely extend all the scalar operations of the AccumulatedDerivative class to also work with zero-dimensional numpy arrays. You could try that and see if it breaks any of the tests. There are some direct tests of AccumulatedDerivative in test_displacement_handlers.py. You should also check that the notebooks still run properly. They all use SimpleBlocks and SolvedBlocks.
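One possible shape for that extension, sketched only; the real scalar checks inside AccumulatedDerivative live in simple_displacement.py and may look different:

```python
import numbers
import numpy as np


def as_numeric_scalar(x):
    """Treat zero-dimensional numpy arrays the same as Python/numpy scalars.

    Returns the value as a plain scalar, or None if x is not scalar-like.
    """
    if isinstance(x, numbers.Real):
        return x
    if isinstance(x, np.ndarray) and x.ndim == 0:
        return x.item()  # unwrap the 0-d array drawn by PyMC
    return None


# Inside e.g. AccumulatedDerivative.__mul__, the existing scalar branch could
# then accept both cases (hypothetical usage):
#
#     other_scalar = as_numeric_scalar(other)
#     if other_scalar is not None:
#         ...  # existing scalar-multiplication logic, using other_scalar
```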
This is a WIP PR which adds a suite of estimation tools for SSJ. While this doesn't implement anything on the block level (as of writing), it does introduce a number of conventions which may be useful for likelihood evaluation regardless of how it's structured. These features are listed below:

- A `Prior` class whose instances expose a sampling method `rand`, a log likelihood method `logpdf`, and a method to check whether a given draw lies within the existing support, `in_support` (see the sketch below). This design may change; any feedback is appreciated (especially for the `Prior` class).

Lastly, I added a notebook which demonstrates these methods in action. Some of these methods are rather clunky (particularly when it comes to parameterization), but they are completely generalizable and will at least get the ball rolling on future iterations.
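To make the convention concrete, here is an illustrative implementation of the three-method interface for a normal prior using scipy; it is not the PR's actual code:

```python
import numpy as np
from scipy import stats


class NormalPrior:
    """Illustration of the rand / logpdf / in_support convention described above."""

    def __init__(self, mu, sigma):
        self.dist = stats.norm(loc=mu, scale=sigma)

    def rand(self, seed=None):
        """Draw a single value from the prior."""
        return self.dist.rvs(random_state=seed)

    def logpdf(self, x):
        """Log prior density at x."""
        return self.dist.logpdf(x)

    def in_support(self, x):
        """The normal distribution has unbounded support."""
        return np.isfinite(x)


prior = NormalPrior(mu=0.9, sigma=0.05)   # e.g. a prior on an AR(1) persistence
draw = prior.rand(seed=0)
print(draw, prior.logpdf(draw), prior.in_support(draw))
```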