Refactor GradientDescending to DhMinimize with generic inputs #595
Conversation
Thank you Romain. Regarding the Nelder-Mead optimizer, I found that it is sensitive to the initial vector; in particular, I noticed that it can fail if the initial components are zero. That is why I always initialize with [1,1].
Wow, great tip! 🤩 This indeed solved the issues that Nelder-Mead had in some cases. It now performs similarly.
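As a minimal illustration of that initialization sensitivity (a sketch, not taken from the PR): SciPy's Nelder-Mead builds its initial simplex by perturbing each non-zero component of x0 by about 5 %, but uses only a tiny absolute step (2.5e-4) for components that are exactly zero, so a zero start yields a nearly degenerate initial simplex that can struggle on a noisy cost.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def noisy_cost(xy):
    # A noisy bowl centred on (3, -2), a stand-in for a dh-based cost function
    return float((xy[0] - 3) ** 2 + (xy[1] + 2) ** 2 + 0.01 * rng.standard_normal())

res_zero = minimize(noisy_cost, x0=[0, 0], method="Nelder-Mead")
res_ones = minimize(noisy_cost, x0=[1, 1], method="Nelder-Mead")
print("x0=[0, 0] ->", res_zero.x)  # may stall near the origin
print("x0=[1, 1] ->", res_ones.x)  # typically reaches roughly (3, -2)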
Very nice, and as you can see from my tiny comments, I basically approve of everything!
There's only one thing I wish for, and that's some backward compatibility. As far as I understand (but please correct me if I'm wrong), this removes the GradientDescending class entirely from the repo. I know a few people here who use it, so it's a pretty breaking change. I suggest making this a very simple subclass:
from noisyopt import minimizeCompass

class GradientDescending(DhMinimize):
    def __init__(
        self,
        x0: tuple[float, float] = (0, 0),
        bounds: tuple[float, float] = (-10, 10),
        deltainit: int = 2,
        deltatol: float = 0.004,
        feps: float = 0.0001,
        subsample: int | float = 5e5,
    ) -> None:
        """The old docstring here."""
        # Delegate to DhMinimize, fixing the minimizer to noisyopt's compass search
        super().__init__(subsample=subsample, fit_minimizer=minimizeCompass)
        # Keep the old optimizer arguments in the metadata, as before
        self.meta.update({
            "x0": x0,
            "bounds": bounds,
            "deltainit": deltainit,
            "deltatol": deltatol,
            "feps": feps,
        })
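For example, such a subclass could be sanity-checked with something like this (assuming .meta stays the plain dict used in the snippet above):
coreg = GradientDescending(deltatol=0.01)
assert isinstance(coreg, DhMinimize)
print(coreg.meta["bounds"])  # (-10, 10), the old default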
As far as I can tell, these relatively few lines are enough to retain the previous functionality, which I know is used. But now we also have the alternative, through DhMinimize, to use new minimizers/loss functions thanks to you!
Perfect, thanks! Actually, thinking about implementing your proposition of backwards-comp above, I realize we can't easily make an equivalent: all optimizer arguments now have to be passed to fit() rather than at instantiation. So either updating the code, or forcing an older xDEM version, both seem like good solutions here. For clarity, the update would just be changing:
c = GradientDescending(my_optimizer_args)
c.fit()
to:
c = DhMinimize(fit_optimizer=minimizeCompass)
c.fit(my_optimizer_args)
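Concretely, for a downstream script the update might look like this (a sketch assuming DhMinimize sits in xdem.coreg like the class it replaces, that fit() forwards the minimizer keyword arguments as described just above, and using xDEM's Longyearbyen example data):
from noisyopt import minimizeCompass
import xdem

ref_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_ref_dem"))
tba_dem = xdem.DEM(xdem.examples.get_path("longyearbyen_tba_dem"))

# Old: coreg = GradientDescending(x0=(1, 1), deltainit=2); coreg.fit(ref_dem, tba_dem)
# New: choose the minimizer at instantiation, pass its arguments to fit()
coreg = xdem.coreg.DhMinimize(fit_optimizer=minimizeCompass)
coreg.fit(ref_dem, tba_dem, x0=(1, 1), deltainit=2, deltatol=0.004)
aligned_dem = coreg.apply(tba_dem)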
Merging this to finalize #502; we can add backwards-comp in a separate PR if it still feels necessary.
This PR performs the renaming and generalization as discussed in #231
@liuh886 @erikmannerfelt I indeed found a fairly big difference in performance between scipy.optimize.minimize() and noisyopt.minimizeCompass(). For SciPy, the "Nelder-Mead" method exemplified by @sgascoin in #231 performed best of all, but it still had some trouble consistently correcting an artificial shift introduced in our Longyearbyen example data at different shift magnitudes and subsample sizes (even when passing bounds and various tolerances), while minimizeCompass performed OK in all cases. So between the default minimize() and minimizeCompass(), there is a big difference in performance, but the difference is much smaller between minimize(method="Nelder-Mead") and minimizeCompass.
So, to avoid imposing the noisyopt dependency on users who want to try this function, I was thinking of having the new function DhMinimize default to minimize(method="Nelder-Mead"), and adding a "Tip" box in the documentation (Coregistration page, new "DhMinimize" section) to remind users that noisyopt might perform better, plus a similar note in the function description.
Does that sound good to all?
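For reference, a minimal self-contained sketch of the kind of comparison described above (not the actual Longyearbyen benchmark; it only requires scipy and noisyopt, and the shift and noise values are made up for illustration):
import numpy as np
from scipy.optimize import minimize
from noisyopt import minimizeCompass

true_shift = np.array([4.3, -2.1])  # artificial shift to recover
rng = np.random.default_rng(0)

def shift_cost(xy):
    # Noisy distance to the true shift, a stand-in for a subsampled dh cost
    return float(np.sum((np.asarray(xy) - true_shift) ** 2) + 0.05 * rng.standard_normal())

x0 = [1.0, 1.0]  # non-zero start, as recommended earlier in the thread
res_nm = minimize(shift_cost, x0=x0, method="Nelder-Mead", options={"fatol": 1e-3})
res_cs = minimizeCompass(shift_cost, x0=x0, deltainit=2, deltatol=0.004, paired=False)
print("Nelder-Mead:     ", res_nm.x)
print("minimizeCompass: ", res_cs.x)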
Opens #596
Opens #597
Resolves #231
Resolves #594
Resolves #599