With the new tree, I am generating reactions that show up in a collision limit violators notebook.
For example:
```
HNOH(94) + CH2OH(45) <=> ONCO(964)
Arrhenius(A=(6.89518e+79,'cm^3/(mol*s)'), n=-21.017, Ea=(0,'kcal/mol'), T0=(1,'K'))
BM rule fitted to 2 training reactions at node Root_N-1R->H_1CNOS->N_Ext-2R-R_3R!H-u0_2R->C Total Standard Deviation in ln(k): 11.5401827615
Exact match found for rate rule [Root_N-1R->H_1CNOS->N_Ext-2R-R_3R!H-u0_2R->C]
Euclidian distance = 0
family: R_Recombination
Direction: forward
Violation factor: 2e+10
Violation condition: 423 K, 1.0 bar
```
There are several others, all exceeding the collision limit by a factor of more than 10^10.
They all hit the same node and share the same rule, Arrhenius(A=(6.89518e+79,'cm^3/(mol*s)'), n=-21.017), fitted to 2 training reactions.
Once again it has a huge A and a large negative n, the same problem as before: the fit probably behaves sensibly in the narrow temperature range it was fitted over, but it is being extrapolated to a very different T.
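To see how steep that is, here is the rule evaluated directly in plain Python (the collision-limit figure in the comments is just inferred from the reported 2e+10 violation factor, not computed from transport data):

```python
# The rule has Ea = 0 and T0 = 1 K, so k(T) = A * T**n in cm^3/(mol*s)
A, n = 6.89518e79, -21.017

for T in (423, 1000, 2000):
    print(f"{T:5d} K  k = {A * T**n:.2e} cm^3/(mol*s)")

# 423 K:  ~4e24, i.e. ~2e10 times the ~2e14 collision limit implied above
# 1000 K: ~6e16, still orders of magnitude above the limit
# 2000 K: ~3e10, finally back below it
```

With T^-21 the rate constant swings fourteen orders of magnitude between 423 K and 2000 K, so a fit that looks fine against high-temperature training data is guaranteed to misbehave at low T.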
I wonder if the fitting might benefit from some form of regularization, to penalize huge n parameters and make things extrapolate better?
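As a sketch of what I mean (assuming a plain linear least-squares fit of ln k; the `lam` penalty weight is hypothetical, and the real BM fitting code is of course more involved):

```python
import numpy as np

def fit_arrhenius_ridge(T, k, lam=1.0, T0=1.0, R=1.987e-3):
    """Fit ln k = ln A + n*ln(T/T0) - Ea/(R*T) with an L2 penalty on n.

    lam = 0 recovers ordinary least squares; larger lam shrinks n toward
    zero, trading fit quality in the training range for saner extrapolation.
    T in K, k in cm^3/(mol*s), Ea in kcal/mol.
    """
    T, k = np.asarray(T, dtype=float), np.asarray(k, dtype=float)
    X = np.column_stack([np.ones_like(T), np.log(T / T0), -1.0 / (R * T)])
    penalty = lam * np.diag([0.0, 1.0, 0.0])     # penalize only the n term
    lnA, n, Ea = np.linalg.solve(X.T @ X + penalty, X.T @ np.log(k))
    return np.exp(lnA), n, Ea
```

With only 2 training reactions the three parameters are badly underdetermined, so even a small penalty would mostly act as a tie-breaker toward n ≈ 0.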
I wonder if the collision limit violation test could be automated for all these auto-generated rules, so someone could at least look into fixing them before they show up in models?
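A minimal version of that automated check might look like this (a sketch, not anything in RMG today; the reduced mass and collision diameter would really come from the species' transport data, and the 10x tolerance is my assumption):

```python
import numpy as np

NA, KB = 6.02214e23, 1.38065e-23      # Avogadro (1/mol), Boltzmann (J/K)

def collision_limit(T, mu_kg, sigma_m):
    """Hard-sphere bimolecular collision rate in cm^3/(mol*s)."""
    v_rel = np.sqrt(8.0 * KB * T / (np.pi * mu_kg))   # mean relative speed, m/s
    return NA * np.pi * sigma_m**2 * v_rel * 1.0e6    # m^3 -> cm^3

def flag_rule(A, n, Ea, mu_kg, sigma_m=4.0e-10, T0=1.0,
              T_lo=300.0, T_hi=2500.0, tol=10.0):
    """Return (T, ratio) pairs where a bimolecular modified-Arrhenius rule
    exceeds the collision limit by more than `tol`. Ea in kcal/mol."""
    R = 1.987e-3                                      # kcal/(mol*K)
    T = np.linspace(T_lo, T_hi, 45)
    k = A * (T / T0)**n * np.exp(-Ea / (R * T))
    ratio = k / collision_limit(T, mu_kg, sigma_m)
    return [(t, r) for t, r in zip(T, ratio) if r > tol]
```

Run over every auto-generated bimolecular rule at tree-generation time, this would flag the node above immediately (with mu ≈ 2.7e-26 kg for HNOH + CH2OH it reproduces the ~2e+10 factor at 423 K).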
This one seems to be in violation even at higher temperatures (T = 1000 K). Regularization is probably a bit dangerous in these situations with so few parameters; enabling estimation to move up the tree when the uncertainty is high may help more. I could add a processing step that analyzes generated rules for potential collision limit violations and refits them. It's tricky to do this in general, but we could probably detect most cases. I'm not exactly sure how this one was possible; I'll try to take a look later this week.
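For the moving-up-the-tree part, the logic could be as simple as this sketch (`parent`, `rule`, and `stdev_lnk` are hypothetical stand-ins for however we end up storing the per-node fit statistics):

```python
def estimate_with_fallback(node, max_stdev_lnk=6.0):
    """Fall back to ancestor nodes while the fitted rule's uncertainty
    (standard deviation in ln k) exceeds a threshold; the node above,
    at 11.54, would be skipped in favor of a better-constrained ancestor."""
    while node.parent is not None and node.stdev_lnk > max_stdev_lnk:
        node = node.parent
    return node.rule
```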