Not sure if this is really an issue, just something I noticed.
Depending on the type of Jacobian, SLSQP() performs 2-3 objective evaluations per optimization step. When using the CADET-Process optimizer with SLSQP(), the progress report produced by the callback_function defined in scipyAdapter.py only ever reports the last evaluation of each optimization step (I think?). As a result, the given x0 seemingly is never evaluated, which confused me a little: I thought I was passing it in the wrong format or something similar, although when debugging I saw that all 3 evaluations are performed correctly.
Coming from other optimizers, a user might expect to see the evaluation of the chosen x0 reported in the first iteration.
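A minimal standalone sketch (plain SciPy, not CADET-Process code) illustrating the behavior: with a finite-difference Jacobian, SLSQP evaluates the objective several times per iteration, but the per-iteration callback fires only once per step and never for x0 itself. The objective function here is an arbitrary quadratic chosen for illustration.

```python
import numpy as np
from scipy.optimize import minimize

n_evals = 0

def objective(x):
    # Count every objective evaluation SLSQP performs,
    # including finite-difference gradient probes.
    global n_evals
    n_evals += 1
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

callback_xs = []

def callback(xk):
    # SciPy invokes this once per major iteration with the new iterate.
    callback_xs.append(np.copy(xk))

x0 = np.array([5.0, 5.0])
res = minimize(objective, x0, method="SLSQP", callback=callback)

# Far more objective evaluations than callback invocations:
print(n_evals, len(callback_xs))
# x0 itself is never among the points the callback reports:
print(any(np.allclose(xk, x0) for xk in callback_xs))
```

This matches what the issue describes: a progress report driven solely by the SciPy callback can only show one point per iteration, and the starting point x0 is not among them.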
EDIT:
The issue might be more important than initially thought: the point returned as result.x by SLSQP() does not correspond to the last objective-function evaluation, but rather to the last point at which the gradient was assessed.