[BUG]: Failing test cases #51

Open

liunelson opened this issue Nov 7, 2024 · 11 comments

@liunelson

liunelson commented Nov 7, 2024

We have three test cases for which Funman fails to produce expected results.

Case 1

I started with a simple SIR model and got a reasonable response from Funman. However, the parameter distributions in the contracted_model do not match the parameter lower and upper bounds of the only true box (beta ~ [0.1, 0.49], gamma ~ [0.05, 0.355]). In fact, the contracted_model distributions appear to be identical to those of the input model.

Screenshot 2024-11-07 at 10 25 55 AM

response1.json

Case 2

Using the same settings as Case 1, except that the number of timesteps is increased from 5 to 6 (or any integer > 5), Funman responds with [] for all box types.

Screenshot 2024-11-07 at 10 35 31 AM

response2.json

Case 3

Here, I use a non-normalized SIR model - that is, the state variables are in units of person, not dimensionless fractions of the total population. Funman responds with trajectories that do not correspond to those produced by pyciemss.sample.

For example, I(t = 35) should be around 1000, not 400.
Screenshot 2024-11-07 at 10 49 41 AM

response3.json

Case 4

This request, which seems perfectly acceptable, causes Funman to hang indefinitely for no apparent reason.

request4.json

@liunelson
Author

@danbryce @dmosaphir
Please take a look when available!

@dmosaphir
Contributor

Hi @liunelson, thanks for bringing these to our attention! I'm taking a look at Case 3 and I'm going to try simulating it at a coarser granularity to see if it's a numerical precision issue. I just have a question about the rate equations in response3.json.

The initials seem to have S0 = 2000, I0 = 1, R0 = 0, so the total population is N = 2001. Based on that, I was expecting an infection rate equation of (1/N)*beta*S*I, which would be (1/2001)*beta*S*I, or approximately 0.0005*beta*S*I, but the rate law (line 104) has 0.004*I*S*beta. I was wondering how to reconcile these? It seems like the rate law in the AMR is meant for a population size of N = 250.
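
As a quick sanity check on that arithmetic (an illustration only, not Funman code):

# The coefficient c in a mass-action rate law c*beta*S*I implies a population
# size of N = 1/c.
S0, I0, R0 = 2000, 1, 0
N_from_initials = S0 + I0 + R0     # 2001
print(1 / N_from_initials)         # ~0.0005 -- the coefficient I expected
print(1 / 0.004)                   # 250.0   -- the population implied by the AMR rate law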

@liunelson
Author

liunelson commented Nov 8, 2024

I don't actually know where the Case 3 model originated. I think the main issue is still the discrepancy between Funman's solution and the PyCIEMSS one. A lack of numerical precision could be the reason; however, Case 2's "number of time steps > 5" bug prevents us from testing that idea out directly in Terarium.

@dmosaphir
Contributor

Just following up to confirm that I'm seeing numerical instability when I fix the step size at 7. I can investigate with other parameter choices, but this was with beta=0.04 and gamma=0.02. We can do some further troubleshooting to see what FUNMAN is doing, but I imagine this has something to do with it.
Screenshot 2024-11-08 at 12 11 57 PM
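
For anyone who wants to poke at the step-size sensitivity directly, a rough sketch along these lines (plain forward Euler, not what FUNMAN does internally; the 0.004 coefficient is the one from the AMR rate law discussed above) makes the effect of a coarse step visible:

# Step the person-scaled SIR model with forward Euler at two step sizes and
# compare the resulting I trajectories at the final time.
def euler_sir(beta, gamma, dt, t_end, S0=2000.0, I0=1.0, R0=0.0, coeff=0.004):
    S, I, R, t = S0, I0, R0, 0.0
    traj = [(t, I)]
    while t < t_end:
        infection = coeff * beta * S * I
        recovery = gamma * I
        S, I, R = S - dt * infection, I + dt * (infection - recovery), R + dt * recovery
        t += dt
        traj.append((t, I))
    return traj

coarse = euler_sir(beta=0.04, gamma=0.02, dt=7.0, t_end=70.0)   # step size 7, as above
fine = euler_sir(beta=0.04, gamma=0.02, dt=0.1, t_end=70.0)
print(coarse[-1], fine[-1])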

@danbryce
Collaborator

In Case 1, the tolerance is 0.2. The tolerance dictates the minimum size of a box that funman will consider labeling: at 0.2, a box must be at least 20% of the width of the original parameter space. Running with that tolerance results in a parameter space that includes only a few true boxes. Reducing it to 0.05 produces many more boxes with both true and false labels. However, the results bound the true boxes such that the minimum and maximum values in each dimension are the same as in the original parameter space. See:

0e8b0aac-f535-4b54-97cc-99fa8756f60b_parameter_space

In order to shrink the bounds, the constraints need to limit the maximum or minimum true box value in some dimension.
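
To illustrate the tolerance semantics described above (an assumed sketch, not funman's implementation):

# With tolerance t, a candidate box is only split/labelled if its width is at
# least t times the width of the original parameter interval.
def min_box_width(lower: float, upper: float, tolerance: float) -> float:
    return tolerance * (upper - lower)

# Hypothetical unit-wide parameter interval:
print(min_box_width(0.0, 1.0, 0.2))    # 0.2  -> only a few coarse boxes are considered
print(min_box_width(0.0, 1.0, 0.05))   # 0.05 -> many more, finer-grained boxes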

@danbryce
Collaborator

@liunelson The second test case uses timepoints that differ from those pictured in the screenshot. The file response2.json has the timepoints:

"timepoints": [
                0,
                1.6666666666666667,
                3.3333333333333335,
                5,
                6.666666666666667,
                8.333333333333334,
                10
              ]

We don't support timepoints that are floats.
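
For example, an integer grid over the same horizon could be built like this (a sketch of the request payload only; the field name is taken from response2.json):

# Build integer timepoints instead of the float grid above.
timepoints = list(range(0, 11))    # [0, 1, 2, ..., 10] -- all ints
# If the grid comes from numpy.linspace, choose a point count that lands on
# integers and cast explicitly, e.g. [int(t) for t in numpy.linspace(0, 10, 11)].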

After changing them to ints, I found that this was causing our subsolver dreal to segfault in a hard-to-reproduce manner. I'll diagnose.

@danbryce
Collaborator

In case 3, with

"beta": 0.03525986784610763,
"gamma": 0.023246455702763517,

I get:

"I_35": 169.71944441458044

When I set:

"beta": 0.05,
"gamma": 0.01,

I get:

"I_35": 652.8321221878875

This suggests that we are returning a choice of beta and gamma that you aren't expecting, and that is not identical to the one used by pyciemss. I don't think this is a bug. Is there something different you want?
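
For reference, a continuous-time solve of the person-scaled SIR model gives something to compare these I_35 values against. This is only a sketch, neither funman nor pyciemss internals: the 0.004 coefficient is the one from the AMR rate law discussed earlier (swapping in 1/N changes the answer substantially, which is exactly the discrepancy under discussion), and a fixed-step discretization like the one mentioned above may of course give different values.

# Solve the SIR ODEs and report I at t = 35 for a given beta/gamma.
from scipy.integrate import solve_ivp

def simulate_I35(beta, gamma, coeff=0.004, S0=2000.0, I0=1.0, R0=0.0):
    def rhs(t, y):
        S, I, R = y
        infection = coeff * beta * S * I
        recovery = gamma * I
        return [-infection, infection - recovery, recovery]
    sol = solve_ivp(rhs, (0.0, 35.0), [S0, I0, R0], t_eval=[35.0], rtol=1e-8)
    return sol.y[1, -1]

print(simulate_I35(beta=0.03525986784610763, gamma=0.023246455702763517))
print(simulate_I35(beta=0.05, gamma=0.01))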

@danbryce
Collaborator

Case 4 may have been fixed by my recent changes. It completed for me in about 2.25 minutes.

@liunelson
Author

You're right about my mistake.

I do still get the same empty-list box response when I use integer timepoints, for example 0, 1, 2, ..., 10.

@danbryce
Collaborator

I found a slight wrinkle in the logic for computing the contracted model. It is implemented correctly, except that it uses the bounds on boxes that are true at any timepoint. Typically, all parameter values are encapsulated by a true box in the initial state. This would cause the contracted model to have the same bounds as the original.

I have updated the logic so that the contracted model only uses true boxes at the latest time step. The true boxes collectively diminish in volume over time, so the true boxes at the last time step will correspond to true boxes at earlier time steps. If you access the contracted model before funman has finished, it may look like the parameter bounds are overly narrow. This is because the algorithm will continue to add true boxes at the last time step until it is done.
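
A minimal sketch of that rule (not the actual funman implementation; box and label types are simplified here):

# Contract parameter bounds using only the boxes labelled true at the latest
# time step, ignoring true boxes from earlier steps.
from typing import Dict, List, Tuple

Box = Dict[str, Tuple[float, float]]      # parameter name -> (lower, upper)

def contracted_bounds(true_boxes: List[Tuple[int, Box]]) -> Box:
    """true_boxes holds (timestep, box) pairs that were labelled true."""
    last_step = max(step for step, _ in true_boxes)
    latest = [box for step, box in true_boxes if step == last_step]
    return {
        param: (min(b[param][0] for b in latest), max(b[param][1] for b in latest))
        for param in latest[0]
    }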

New commit coming soon.

@danbryce
Collaborator

@liunelson I have just merged a fix into develop that addresses the issue with the contracted model. I believe everything in this issue has been addressed. It is ready to be closed.
