
Refactor GradientDescending to DhMinimize with generic inputs #595

Merged · 11 commits into GlacioHack:main on Oct 6, 2024

Conversation

@rhugonnet (Member) commented Sep 21, 2024

This PR performs the renaming and generalization as discussed in #231

@liuh886 @erikmannerfelt I indeed found a fairly big difference in performance between scipy.optimize.minimize() and noisyopt.minimizeCompass(). For SciPy, the "Nelder-Mead" method exemplified by @sgascoin in #231 performed best of all, but it still had some trouble consistently correcting an artificial shift introduced in our Longyearbyen example data at different shift magnitudes and subsample sizes (even when passing bounds and various tolerances), while minimizeCompass performed OK in all cases. So between the default minimize() and minimizeCompass() there is a big difference in performance, but the gap is much smaller between minimize(method="Nelder-Mead") and minimizeCompass().

So, to avoid imposing the noisyopt dependency on users who want to try this function, I was thinking of having the new function DhMinimize default to minimize(method="Nelder-Mead"), and adding a "Tip" box in the documentation (Coregistration page, new "DhMinimize" section) to remind users that noisyopt might perform better, as well as a similar note in the function description.
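
For illustration, a rough sketch of what the intended usage could look like (the import path, the fit_optimizer keyword taken from the example further down this thread, and the DEM variables are assumptions for this sketch, not the final API):

from xdem.coreg import DhMinimize  # import path assumed for this sketch

# Default: SciPy's minimize(method="Nelder-Mead"), no extra dependency required.
coreg = DhMinimize()
coreg.fit(ref_dem, tba_dem)  # ref_dem / tba_dem are placeholder DEM objects

# Optional: noisyopt's compass search, which may perform better on noisy dh surfaces.
from noisyopt import minimizeCompass

coreg_noisy = DhMinimize(fit_optimizer=minimizeCompass)
coreg_noisy.fit(ref_dem, tba_dem)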

Does that sound good to all?

Opens #596
Opens #597
Resolves #231
Resolves #594
Resolves #599

@rhugonnet marked this pull request as ready for review September 30, 2024 22:21
@sgascoin commented Oct 1, 2024

Thank you Romain. Regarding the Nelder-Mead optimizer, I found that it is sensitive to the initial vector; in particular, I noticed that it can fail if the initial components are zero. That is why I always initialize with [1, 1].
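
To illustrate the point, a minimal standalone sketch with a toy cost function standing in for the dh-based objective (not this PR's actual code):

import numpy as np
from scipy.optimize import minimize

# Toy objective standing in for the dh-based cost: a slightly noisy bowl
# whose minimum sits at the "true" shift (3.2, -1.5).
rng = np.random.default_rng(42)

def dh_cost(shift: np.ndarray) -> float:
    true_shift = np.array([3.2, -1.5])
    return float(np.sum((shift - true_shift) ** 2) + 0.01 * abs(rng.normal()))

# Starting from (0, 0) can stall Nelder-Mead in some cases;
# a nonzero initial vector such as [1, 1] avoids this.
result = minimize(dh_cost, x0=[1, 1], method="Nelder-Mead")
print(result.x)  # close to (3.2, -1.5)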

@rhugonnet (Member, Author)

Wow, great tip! 🤩 This indeed solved the issues that Nelder-Mead had in some cases. It now performs similarly to noisyopt, so I've also made it the default in the tests.
Thanks a lot @sgascoin!

@erikmannerfelt (Contributor) left a comment


Very nice, and as you can see from my tiny comments, I basically approve of everything!

There's only one thing I'd wish for, and that's some backward compatibility. As far as I understand (but please correct me if I'm wrong), this removes the GradientDescending class entirely from the repo. I know a few people here using it, so it's a pretty breaking change. I suggest making this a very simple subclass:

from noisyopt import minimizeCompass

class GradientDescending(DhMinimize):
    def __init__(
        self,
        x0: tuple[float, float] = (0, 0),
        bounds: tuple[float, float] = (-10, 10),
        deltainit: int = 2,
        deltatol: float = 0.004,
        feps: float = 0.0001,
        subsample: int | float = 5e5,
    ) -> None:
        """The old docstring here."""
        # Delegate to DhMinimize with the noisyopt compass-search minimizer.
        super().__init__(subsample=subsample, fit_minimizer=minimizeCompass)
        # Store the previous optimizer defaults so the old behavior is preserved.
        self.meta.update({
            "x0": x0,
            "bounds": bounds,
            "deltainit": deltainit,
            "deltatol": deltatol,
            "feps": feps,
        })

As far as I can tell, these relatively few lines are enough to retain the previous functionality, which I know is used. And now we also have the option, through DhMinimize, to use new minimizers/loss functions, thanks to you!

Inline review comments on xdem/coreg/affine.py and tests/test_coreg/test_affine.py (resolved).
@rhugonnet (Member, Author) commented Oct 3, 2024

Perfect, thanks!

Actually, while thinking about implementing your backwards-compatibility proposal above, I realized we can't easily make an equivalent. All optimizer arguments now have to be passed as kwargs to Coreg.fit(), not to its __init__. And there is no dictionary to store them temporarily; we'd have to override the _fit_xxx() methods and create a new attribute specific to this subclass as well.

As the GradientDescending method was neither exposed in the API nor documented, these changes should affect very few users. I was simply planning on listing breaking changes in the 0.1 release, especially as we have a warning on the landing page stating that xDEM is very much in development and asking users to note the version number to reproduce results.

So either updating the code or pinning to an older xDEM version seems like a good solution here.
Is that OK?

For clarity, the update would just be:

c = GradientDescending(my_optimizer_args)
c.fit()

to

c = DhMinimize(fit_optimizer=minimizeCompass)
c.fit(my_optimizer_args)

@rhugonnet (Member, Author)

Merging this to finalize #502; we can add backwards compatibility in a separate PR if it still feels necessary.

@rhugonnet merged commit 3287594 into GlacioHack:main on Oct 6, 2024
22 checks passed
@rhugonnet deleted the refactor_gradientdesc branch October 6, 2024 20:55