
need test case for rotating core-collapse progenitor? #651

Open
mathren opened this issue Jun 4, 2024 · 11 comments

Comments

@mathren
Contributor

mathren commented Jun 4, 2024

The attached figure shows a (not-so-great) resolution test for a Z=0.001, 40 Msun star initially rotating at ~0.6 critical, which experiences chemically homogeneous evolution. The figure shows only the profiles of specific angular momentum, density, and velocity in the inner 5 Msun; the numbers in the legend are the number of zones at this particular snapshot.

This is a rather convoluted setup (damping spurious velocities in the layers not in sonic contact with the core and trying to add resolution for a bunch of gradients), but it is very similar to the setup here, developed with input from @aurimontem. The point that worries me is the "spiky" infall velocity profile (more visible in the blue curve, but it is also coming up in the orange one, hidden a bit by the dashed linestyle).

I have been noticing this in many rotating models (even with less extreme rotation rates). I just double-checked that the test cases that go to core collapse in r24.03.1 do exhibit smooth velocity profiles; however, they are also non-rotating.

I suspect this has to do with the rotation interfering with the Lagrangian hydrodynamics.
I would like to know if any of the developers have suggestions on how to fix this (@Debraheem maybe?), and, if/once fixed, whether there is interest in molding this into a test case for rotating core-collapse progenitors:

[figure: specific angular momentum, density, and velocity profiles of the inner 5 Msun at two resolutions]

@Debraheem
Member

Debraheem commented Jun 4, 2024

Thanks for documenting this. What network did you use for this run in particular? Can you overplot the logT profile for this run? Are you using split_burn, and if so, what is the minimum T for splitting hydro and burning? I'll try to reproduce this locally, but if you have a working model directory I can run, that would make it easier to compare. I'll use the Zenodo entry you shared to test.

@mathren
Contributor Author

mathren commented Jun 4, 2024

The Zenodo setup should be close enough to reproduce the issue (and that one I have run to the onset of core collapse, as defined there, with many nets and resolutions).

This is with approx21_plus_cr56.net (a classic case of "do what I say, not what I do", but in my defense, it's all testing). We use split_burn from logT = 9 onwards, and hydro (as in v_flag) from the very beginning.

I can produce the plot, but I expect logT to be as smooth as logRho here (we are nearing fe_core_infall, and the issue is roughly in the Si-rich layer, so logT ~ 9.3 or so).
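
For reference, the relevant controls look roughly like this (a sketch, not the exact inlist; the values and the step at which the net is switched may differ):

    &star_job
       ! load the larger network (sketch)
       change_net = .true.
       new_net_name = 'approx21_plus_cr56.net'
       ! turn on velocities (v_flag) from the very beginning
       change_v_flag = .true.
       new_v_flag = .true.
    / ! end of star_job

    &controls
       ! operator-split burning only above logT = 9
       op_split_burn = .true.
       op_split_burn_min_T = 1d9
    / ! end of controls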

@mathren
Contributor Author

mathren commented Jun 4, 2024

Added a bottom panel with temperature (for the inner 5 Msun):

[figure: same profiles of the inner 5 Msun with an added temperature panel]

@Debraheem
Member

Given that your higher-resolution model (dashed line) doesn't appear to show this issue, do you think this is perhaps related directly to the mesh?

@Debraheem
Member

I tried adopting controls identical to the Zenodo entry, except for the mesh, and recreated a model to test this with w/wcrit = 0.6 at ZAMS. I attached the model directory (where I used ./run_all to run it to cc).

Unfortunately, I could not replicate the velocity bumps you are seeing. I've attached a movie and the model file. I wonder how to trigger it?
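
For context, the initial rotation is set in star_job roughly like this (a sketch; the exact relaxation controls in the attached work directory may differ):

    &star_job
       ! turn on rotation and relax to w/wcrit = 0.6 near ZAMS
       change_rotation_flag = .true.
       new_rotation_flag = .true.
       near_zams_relax_omega_div_omega_crit = .true.
       new_omega_div_omega_crit = 0.6d0
       num_steps_to_relax_rotation = 50 ! sketch value
    / ! end of star_job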

Here it is 10 timesteps before cc:
[figure: velocity_10timesteps_to_cc]

Here it is at cc:
[figure: velocity_at_cc]

movie.mp4

40M_pre_ms_to_core_collapse_split_1d9K_0.6wcrit.zip

@mathren
Contributor Author

mathren commented Jun 5, 2024

Sorry for the delay. Weird... but from the D_mix plot it seems you may have turned off some mixing processes?

Here is the exact work directory I've been playing with (including a custom-named photo I used as a starting point for the plots above, roughly at logT = 8.95). Compared to the work directory you pulled from Zenodo, this one adds custom mesh functions to try to resolve gradients (maybe a foolish errand).

The orange lines above have the lines below "!res test" in inlist1 uncommented, and the blue ones have them commented out (basically this means *_delta_coeff = 0.75 or 1.0, respectively).
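
For anyone not opening the zip, the lines below "!res test" are essentially just global resolution knobs, something like this (a sketch; exactly which *_delta_coeff controls are included may differ):

    &controls
       ! "!res test" lines uncommented -> orange curves;
       ! commented out (defaults of 1.0) -> blue curves
       mesh_delta_coeff = 0.75d0
       time_delta_coeff = 0.75d0
    / ! end of controls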

MWE.zip

@mathren
Contributor Author

mathren commented Jun 5, 2024

will try to compare your setup to this line by line (but probably not today)

@Debraheem
Member

Debraheem commented Jun 6, 2024

Thanks, and no rush, I won't be able to get back to this for a while.

"Weird... but from the D_mix plot it seems you may have turned off some mixing processes?"

No mixing processes were turned off; that's just the plot. In my &pgstar I have
Mixing_show_rotation_details = .false.
so the rotational mixing coefficients are not drawn in that panel.

@mathren
Contributor Author

mathren commented Jun 8, 2024

OK, I have an idea of what may be causing this in my setup (and if I am right, it is something I introduced and not a MESA problem).

I find the location of the innermost core (CO, Si, or Fe) using the variables MESA provides, based on the abundance thresholds defined in controls.defaults. Then, at each timestep, I find the outermost layer above that core which is still in sonic contact with it within the current timestep, and I set velocity_q_upper_bound to the value corresponding to that location.
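
In run_star_extras terms, the logic is roughly the sketch below (a hypothetical helper, not the exact code in the MWE; it assumes the usual run_star_extras includes, and that core_k, the outermost cell of the innermost core, has already been found from the abundance thresholds):

    ! Sketch: cap velocity_q_upper_bound at the last layer above the core
    ! that is still in sonic contact within the current timestep s% dt.
    subroutine set_velocity_cap(s, core_k)
       type (star_info), pointer :: s
       integer, intent(in) :: core_k
       integer :: k
       real(dp) :: t_sound
       t_sound = 0d0
       do k = core_k - 1, 1, -1  ! walk outward from the core boundary (k = 1 is the surface)
          t_sound = t_sound + (s% r(k) - s% r(k+1)) / s% csound(k)
          if (t_sound > s% dt) exit  ! sonic contact lost within this step
       end do
       s% velocity_q_upper_bound = s% q(max(k,1))
    end subroutine set_velocity_cap

This gets called from extras_start_step, so the cap moves with the core boundary every timestep.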

When the Fe core develops, Si shell burning initially makes the inferred core boundary fluctuate significantly; that may propagate into fluctuations of velocity_q_upper_bound and "chop" the velocity profile. I'm running some tests to verify this is the case, and if so, this was a problem I 100% introduced.

But still, having a test case for rotating stars reaching core-collapse may be a good idea regardless.

@mathren
Contributor Author

mathren commented Jun 8, 2024

[attached figure]

Yes, indeed, it seems I caused the issue by trying to be too smart and avoid velocity spikes in the envelope.
I'm not closing this as solved yet, since we may want to iterate a bit to turn these models into a test_case?

@Debraheem
Member

At least we have identified the likely source of the issue! Perhaps we can make a test case out of the example I shared, which is a modified version of the 20Msun_pms_to_cc test_suite case. I'll check internally to see whether we have the bandwidth for this, as these massive star -> cc test cases typically take a while to run.
