Convergence issues with mixing with a new eos #594

Open
tiny-hippo opened this issue Oct 31, 2023 · 3 comments


tiny-hippo commented Oct 31, 2023

We are trying to contribute a new equation of state (eos) to MESA for modeling giant planets, but we are experiencing convergence issues when there is mixing. Details are below; we’re happy to answer questions or provide more information if needed.

Implementation of the new eos

For the modeling of planets, which often have large mass fractions of heavy elements, we implemented a new eos for MESA: our eos combines the SCvH or CMS hydrogen-helium eos with the quotidian eos for water, using the linear mixing approximation. This is implemented outside of MESA as a Python module, which we use to calculate eos tables in the MESA format.
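For illustration, here is a minimal sketch of the linear (additive-volume) mixing rule in the same spirit as our Python module; `hhe_eos` and `water_eos` are placeholder callables, not the actual interface of our code:

```python
def mix_eos(P, T, Z, hhe_eos, water_eos):
    """Linear (additive-volume) mixing of an H-He eos with a water eos.

    hhe_eos and water_eos are placeholders that return
    (density, specific_entropy, specific_internal_energy) at (P, T);
    Z is the water mass fraction.
    """
    rho_hhe, s_hhe, u_hhe = hhe_eos(P, T)
    rho_z, s_z, u_z = water_eos(P, T)

    # Additive-volume rule: specific volumes add, weighted by mass fraction.
    rho_mix = 1.0 / ((1.0 - Z) / rho_hhe + Z / rho_z)

    # Specific entropy and internal energy mix linearly in mass fraction.
    s_mix = (1.0 - Z) * s_hhe + Z * s_z
    u_mix = (1.0 - Z) * u_hhe + Z * u_z
    return rho_mix, s_mix, u_mix
```

Depending on the implementation, an ideal entropy-of-mixing term may be added to s_mix; the sketch above omits it.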

To use our tables in MESA, we use the other_eos_frac and other_eos_component hooks. The code that loads and interpolates the tables is largely copied from the private eos routines (since our tables are in the same format). The main modification is that we ported the interpolation scheme used for the X tables to the Z tables (instead of only linear interpolation in Z) and made the choice of interpolation method an inlist option.
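To make the Z interpolation concrete, here is a rough sketch (in Python rather than the actual Fortran, and with illustrative array names) of the difference between piecewise-linear blending in Z and a higher-order alternative; SciPy's PCHIP stands in for whatever scheme the X tables actually use:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def interp_in_z(z_grid, tables, z_target, method="linear"):
    """Interpolate an eos quantity between tables tabulated at discrete Z.

    z_grid: 1D array of tabulated Z values (illustrative name).
    tables: array of shape (nZ, ...) holding e.g. logP on a common grid.
    """
    if method == "linear":
        # Piecewise-linear blend between the two bracketing Z tables.
        i = np.clip(np.searchsorted(z_grid, z_target) - 1, 0, len(z_grid) - 2)
        w = (z_target - z_grid[i]) / (z_grid[i + 1] - z_grid[i])
        return (1.0 - w) * tables[i] + w * tables[i + 1]
    if method == "cubic":
        # Monotone cubic interpolation across all Z tables at once.
        return PchipInterpolator(z_grid, tables, axis=0)(z_target)
    raise ValueError(f"unknown interpolation method: {method}")
```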

For homogeneously mixed planets, or if we suppress mixing by setting mix_factor = 0, the module runs just fine and is consistent with MESA’s results. For example, we re-implemented MESA’s SCvH tables and got the same result with and without our custom eos module. The two main differences we encountered during testing were that 1.) create_initial_model takes a lot longer and 2.) evolving a planet takes a little bit longer.

The error

The error occurs when mixing acts on a gradient in Z (or a steep gradient in Y) in a gaseous planet. Across many inlist setups, the simulation fails in one of two ways:

1.) After mixing some material, the solver’s residual of dlnE_dt (or sometimes equ_o16; we use o16 as our heavy element) increases drastically, accompanied by a large max correction in lnd (sometimes also max corr o16). Typical values for the residuals are about 1E+4 and for the max corrections about 5. This is followed by many (typically adjacent) cells throwing a hydro_mtx: logT too large (or small) error. As the error indicates, these cells jump to completely unreasonable values; for example, a cell with logT = 4.52 would change to logT = -2.2 or logT = 8.5. These values are then rejected by the solver, which throws a retry: logT > hydro_mtx_max_allowed_logT error and reduces the timestep. This pattern continues until the timestep becomes too small and the simulation aborts.

2.) Sometimes the run does not terminate because of the hydro_mtx error, but because the residuals and max corrections become extremely large and the solver simply rejects the solution. A typical example:

27 3 coeff 1.0000
avg resid 0.108E+01
max resid dlnE_dt 1889 0.93888E+04
mix type 00000
avg corr 0.149E+03
max corr lnd 1886 0.62059E+06
mix type 00000

This also triggers a reduction in the timestep, which ultimately leads to the termination of the simulation. This error pattern occurs both for our eos and for MESA’s SCvH and CMS eos. We attached two files of the solver output, one for a model with a Z gradient and one with a Y gradient.

Inlist Setup

As a default case, we use a gas giant of 1 Jupiter mass and an initial specific entropy of s = 10 kB/baryon, but we also tested other entropies and found the same behavior. The inlist options we use are motivated by other test suite examples; we attached a typical inlist to evolve the model, too.

Without going through every single inlist option we tested, we can say that this convergence problem persists over a wide range of options (e.g., different convergence tolerances or energy equation options). We tried smoothing the eos using different techniques, but this also didn’t help much. While we are able to run a Y-gradient model with the right set of inlist options using our smoothed version of the CMS eos, we were only able to get one Z model to work: a model where we reduce the mixing factor by 1e-7 and don’t use convective premixing (or predictive mixing). However, all of these makeshift solutions fail to address the underlying issue: the code struggles to mix even small amounts of Z material.
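To illustrate the kind of table smoothing meant here, a minimal sketch using a Gaussian filter (one possible technique among several, not necessarily the exact kernels used; the variable names are placeholders):

```python
from scipy.ndimage import gaussian_filter

def smooth_table(table, sigma=1.0):
    """Smooth a 2D eos table (e.g. logP on a logT-logRho grid) with a
    Gaussian kernel.  Illustrative only: heavy smoothing can spoil
    thermodynamic consistency, so any tabulated derivatives should be
    recomputed from the smoothed quantities afterwards.
    """
    return gaussian_filter(table, sigma=sigma, mode="nearest")
```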

Any help is much appreciated, thanks!

output_Y_gradient_run.txt
output_Z_gradient_run.txt
inlist_evolve.txt


evbauer commented Nov 3, 2023

Thanks for this thorough report! I'm interested in taking a closer look, but haven't had a chance to dig into the details yet.

One quick question on a point I think you've already addressed, but I just want to be extra sure: have you tried energy_eqn_option = 'eps_grav'? And have you tried toggling include_composition_in_eps_grav when using the eps_grav form?


evbauer commented Nov 3, 2023

Also, have you had a chance to test whether the EOS is somehow returning bad partial derivatives that are causing the solver to struggle? It's kind of a lot to wade through, but you might be able to make some progress by following the solver debugging documentation here: https://docs.mesastar.org/en/latest/developing/debugging.html#diagnosing-solver-struggles

That at least gives a roadmap for how to turn on some debugging options that could test whether there are bad partial derivatives entering into the energy equation.
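In case a check outside of MESA is useful too, here is a rough sketch of a centered finite-difference comparison against a reported partial (the eos_fn callable and its signature are placeholders, not the MESA interface). Scanning a few step sizes and looking for a stable plateau is usually the telltale sign of whether a derivative is trustworthy:

```python
def check_dlnE_dlnT(eos_fn, lnRho, lnT, step=1e-4):
    """Compare a reported d(lnE)/d(lnT) against a centered finite difference.

    eos_fn is a placeholder returning (lnE, dlnE_dlnT) at (lnRho, lnT).
    Try several step sizes; a trustworthy derivative gives a stable
    finite-difference estimate over a range of steps.
    """
    _, dlnE_dlnT_reported = eos_fn(lnRho, lnT)
    lnE_plus, _ = eos_fn(lnRho, lnT + step)
    lnE_minus, _ = eos_fn(lnRho, lnT - step)
    dlnE_dlnT_fd = (lnE_plus - lnE_minus) / (2.0 * step)
    return dlnE_dlnT_reported, dlnE_dlnT_fd
```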


Henrik-Knierim commented Nov 6, 2023

Hey Evan,

Thanks for taking a look! Yes, we already tried the eps_grav version of the energy equation with and without include_composition_in_eps_grav (as well as dedt). Neither works, and both show errors similar to the ones we described above.

I just went through the debugging and found that d_dlnE_dt(k)/d_lnR(k+1) and d_dlnE_dt(k)/d_lnT(k+1) are the two bad derivatives. The exact values are quite sensitive to the choice of solver_test_partials_dx_0; I attached the results for solver_test_partials_dx_0 = 6d-5 (including the eos_call_info).
debugging_partial_info.txt
