Upon running with 'resume', UltraNest terminates the run with the message 'No changes made. Probably the strategy was to explore in the remainder, but it is irrelevant already; try decreasing frac_remain.' #80
Comments
Yes, it looks like the new run decided that the sampling had already converged. This is also subject to the random numbers that estimate the volume shrinkage in nested sampling. Since your posterior uncertainties are zero or extremely small, it looks like your likelihood is extremely spiked. Probably you are underfitting the data, i.e., the model is wrong. Maybe add a term and parameter that accounts for extra model/data uncertainty; this should also help convergence speed.
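One common way to do this (a minimal sketch, not the reporter's actual model; `t`, `y`, `yerr`, and `model` are placeholder names) is to append a log-jitter parameter that inflates the data uncertainties in quadrature:

```python
import numpy as np

def make_loglike(t, y, yerr, model):
    """Gaussian log-likelihood with an extra log-jitter parameter appended
    to the model parameters. The jitter inflates the error bars in
    quadrature, so the likelihood cannot become arbitrarily spiked when the
    model slightly misfits the data."""
    def loglike(params):
        theta, log_jitter = params[:-1], params[-1]
        var = yerr ** 2 + np.exp(2.0 * log_jitter)      # inflated variance
        resid = y - model(t, theta)
        return -0.5 * np.sum(resid ** 2 / var + np.log(2.0 * np.pi * var))
    return loglike
```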
The "No changes made. Probably the strategy was to explore in the remainder, but it is irrelevant already; try decreasing frac_remain." can occur when |
The latest UltraNest version is 3.5.7, by the way.
Hi. About your comment that we might be underfitting the data: in many of our cases, the model being fit is the same model the data were generated from, i.e., the model is the correct one. In those cases we ought to get convergence, but we still see the behavior above. We have also seen that if we use a subset of the time series (without changing the model), the problem goes away and convergence happens normally, with posteriors that are not delta functions. And for some time series the convergence happened normally, but after modifying the uncertainties/noise of the data, we again start seeing this behavior. Could there be some source of this problem other than our model choice?
I think you have to take a closer look at your likelihood function. Take one of your problematic cases and find the point of maximum likelihood (probably also the maximum posterior). Then modify one of the parameter values in very small steps. Your likelihood seems to be extremely sensitive to slight modifications away from that peak, so look inside your likelihood function to see which data points or uncertainties cause this extreme sensitivity.
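A minimal sketch of such a probe (assuming `loglike` is the same function passed to UltraNest and `theta_best` is the best-fit point from a previous run or an optimizer; both names are placeholders):

```python
import numpy as np

def probe_sensitivity(loglike, theta_best, steps=(1e-8, 1e-6, 1e-4, 1e-2)):
    """Perturb each parameter of the best-fit point by tiny amounts and
    report how much the log-likelihood drops. Huge drops for tiny steps
    indicate an extremely spiked likelihood, often caused by
    underestimated data uncertainties."""
    theta_best = np.asarray(theta_best, dtype=float)
    L0 = loglike(theta_best)
    for i in range(len(theta_best)):
        for eps in steps:
            theta = theta_best.copy()
            theta[i] += eps * max(abs(theta[i]), 1.0)   # small relative step
            print(f"param {i}: step {eps:g} -> delta logL = "
                  f"{loglike(theta) - L0:.3g}")
```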
Description
I have time series data, and I am fitting them using models with 9 and 15 parameters. The fitting works fine for most of the data, but for a large portion of it the fitting runs for 96 hours, at which point it hits the wall time of the HPC system I am using. I then rerun the fitting of these time series using the 'resume' feature. The program runs for about 10 minutes but then terminates (with Exit_status = 0), and the final output of the run ends with the message quoted in the issue title.
I just wanted to understand what this means. Does it mean that the fitting has already converged and I can use the model parameter estimates? If yes, why did the program keep running for 96 hours? If no, how can I avoid this situation? By decreasing frac_remain, as suggested in the output?
PS: This doesn't happen every time I use the resume feature. In many cases, the fitting resumes okay, runs for several hours, and terminates successfully and normally.
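For reference, frac_remain is a keyword argument of the sampler's run() call. A minimal sketch of a resumed run with a smaller frac_remain (param_names, loglike, transform, and the log directory are placeholders, not taken from the actual setup):

```python
import ultranest

# param_names, loglike and transform stand in for the user's own setup;
# 'fit_output' is a placeholder log directory.
sampler = ultranest.ReactiveNestedSampler(
    param_names, loglike, transform,
    log_dir='fit_output', resume='resume')

# A smaller frac_remain than the previous run used makes the sampler keep
# exploring the remaining prior volume for longer before declaring convergence.
result = sampler.run(frac_remain=0.001)
sampler.print_results()
```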