
Add coregistration by minimization of NMAD #231

Closed
adehecq opened this issue Nov 12, 2021 · 9 comments · Fixed by #595
Labels
enhancement Feature improvement or request

Comments

adehecq commented Nov 12, 2021

A nice suggestion by Simon Gascoin (@sgascoin): a very simple algorithm that finds the horizontal shift between two DEMs by minimizing the NMAD of their difference. A good alternative to Nuth & Kääb.

import cv2
import numpy as np
import scipy.optimize as optimize

def shiftDEM(Z, shiftX, shiftY):
    # Translate the DEM by (shiftX, shiftY) pixels, filling exposed borders with NaN
    M = np.float32([[1, 0, shiftX], [0, 1, shiftY]])
    shifted = cv2.warpAffine(Z, M, (Z.shape[1], Z.shape[0]), borderValue=np.nan)
    return shifted

def nmad(x):
    # Normalized median absolute deviation, a robust estimate of the spread of x
    return 1.4826 * np.nanmedian(np.abs(x - np.nanmedian(x)))

def errNmad(params, Zslave, Zmaster):
    # Cost function: NMAD of the elevation difference after shifting the slave DEM
    shiftX, shiftY = params
    shifted = shiftDEM(Zslave, shiftX, shiftY)
    return nmad(shifted - Zmaster)

initial_guess = [1, 1]  # shift in pixels
result = optimize.minimize(errNmad, initial_guess, args=(Zwinter, Zsummer), method='Nelder-Mead')
if result.success:
    print("[shiftX, shiftY]", result.x)
else:
    raise ValueError(result.message)
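
For reference, Zwinter and Zsummer above are assumed to be two NumPy float arrays of elevations on the same grid. A minimal sketch of how they could be loaded with rasterio (the file names winter_dem.tif and summer_dem.tif are placeholders, and both rasters are assumed to share the same grid and CRS):

import numpy as np
import rasterio

# Placeholder file names; both DEMs must be on the same grid for the pixel shift to be meaningful
with rasterio.open("winter_dem.tif") as src:
    Zwinter = src.read(1, masked=True).filled(np.nan).astype(np.float32)
    pixel_size = src.res[0]
with rasterio.open("summer_dem.tif") as src:
    Zsummer = src.read(1, masked=True).filled(np.nan).astype(np.float32)

Note that the optimized shift is in pixels; multiplying by the pixel size gives the shift in georeferenced units.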
rhugonnet commented

Some news: this method suggested by @sgascoin is now fully implemented and tested in xDEM. I now realize it matches the implementation by @liuh886 and @erikmannerfelt (with my modifications in #530 to align it with the modularity of other methods). The tests show it indeed converges to the same shifts as the Nuth and Kääb method fairly fast. A new (revamped) xDEM documentation including this will be out in a couple of weeks.
It is also the method used by @SmithB in his https://github.com/SmithB/pointCollection package, if I understood the code correctly?

I am not sure about the current name given to this method in xDEM, however: "GradientDescending".
@sgascoin @SmithB @liuh886 What name do you think we should give to this method? And do you know what would be the "first" scientific reference we should cite for this?
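
For illustration, a hedged sketch of calling this through the xDEM coregistration interface, assuming the class keeps its current (provisional) name GradientDescending and follows the same fit/apply pattern as the other Coreg classes; the file paths are placeholders:

import xdem

ref_dem = xdem.DEM("reference_dem.tif")       # placeholder paths
tba_dem = xdem.DEM("dem_to_be_aligned.tif")

coreg = xdem.coreg.GradientDescending()       # provisional class name, see naming discussion below
coreg.fit(ref_dem, tba_dem)
aligned_dem = coreg.apply(tba_dem)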

sgascoin commented

Hello Romain, well done! It's great that you implemented it in xdem; I wonder if convergence fails in some cases. I used this method to compute a snow depth image for a study that we submitted last week, but we did not show the comparison with N&K, so it's probably not enough for a proper reference.

The Nelder-Mead method is not a gradient descent method, so I would choose another name. Also, it's easy to switch to another method from the SciPy library, so maybe I would just call it "minimize".

Thanks!

liuh886 commented Sep 12, 2024

Hello @rhugonnet. In my view, the key point is that coregistration can be regarded as a bound-constrained minimization problem (very typical when viewed in 3D space), which can be solved by different optimization algorithms, the most typical being gradient-based. I have tried several algorithms from SciPy; currently, stochastic gradient descent is one of the best.

And the NMAD is the loss function; other functions like RMSE... or combining the two also work.

At first, gradient descent coregistration (GDC) was designed for point cloud-DEM coregistration and worked well; at the time, xdem struggled to handle coregistration between ICESat-2 and DEMs. It is also a good alternative for DEM-DEM coregistration if the slope/curvature calculation takes a long time...

I explained my methods in the supplementary material of my snow paper, but I'm still struggling to submit it :-(
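
To illustrate the bound-constrained formulation described above, a small sketch reusing errNmad, Zwinter and Zsummer from the first post, with explicit bounds on the candidate shift (the ±10 pixel bounds are an arbitrary choice for illustration; noisyopt or a stochastic gradient method could be substituted for the SciPy call):

import scipy.optimize as optimize

# Bound-constrained search for the horizontal shift, reusing errNmad from above.
# The bounds should reflect the maximum expected misalignment in pixels.
bounds = [(-10, 10), (-10, 10)]
result = optimize.minimize(
    errNmad,
    x0=[1, 1],
    args=(Zwinter, Zsummer),
    method="Nelder-Mead",  # Powell or L-BFGS-B also accept bounds in recent SciPy versions
    bounds=bounds,
)
print("Bounded [shiftX, shiftY]:", result.x)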

rhugonnet commented Sep 12, 2024

Thanks both for the feedback! 🙂
For info, we now allow the user to pass any fit_optimizer for all coregistration methods (which also includes the loss function); this way, all approaches are fully modular and durable (you can pass the optimizer from a complex ML package if you like, or just use SciPy's default).
In that sense, the minimization that happens during ICP or Nuth and Kääb is no different than for this method (it can be a gradient descent or anything else), although we can specify a different default optimizer if we think it impacts performance significantly for users relying on default parameters.

As gradient descent is not specific to this method (it is just one type of optimizer), I would join @sgascoin in instead trying to capture, in the name, the nature of the minimization problem as formulated in this method.
Maybe something like "ElevationDifferenceMinimization" or "DhMinimize"?
Any other idea on this @adehecq?
The second aspect of this method is that it only aims to optimize horizontal shifts, so it could be "DhMinimizeHorizontal" or something along those lines.

For a reference: I think @SmithB has been using this method for a while, any idea there? (could be one of your own papers)
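
As a purely illustrative sketch of the fit_optimizer modularity mentioned above (the keyword name is taken from this comment; the exact expected callable signature should be checked against the xDEM documentation, so treat this as an assumption rather than the actual API):

import scipy.optimize
import xdem

# Hypothetical: pass a SciPy-style minimizer as the optimizer used during fitting.
# A more elaborate optimizer (e.g. from an ML library) could be passed the same way.
coreg = xdem.coreg.GradientDescending(fit_optimizer=scipy.optimize.minimize)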

SmithB commented Sep 12, 2024 via email

rhugonnet commented

Good to know @SmithB, thanks! I will mirror the default optimizer of other methods, then. (I might do some more testing on this for the DEM coreg study, and can adapt then if necessary.)
And I'll move forward with DhMinimize if that works for everyone.

erikmannerfelt commented

@rhugonnet, can the repo be simplified by having one main class with high modularity and then "opinionated subclasses" such as GradientDescending? Because the only difference between these would be the optimizer, right?

rhugonnet commented

Between a generic DhMinimize (any optimizer) and the current GradientDescending (noisyopt), the only difference would be the default optimizer, yes!

But then creating a GradientDescendingDhMinimize as a subclass of DhMinimize just to change the default optimizer, which can already be specified in DhMinimize(fit_optimizer=*), seems like overkill...
Maybe instead we wrap a preferred optimizer in fit.py, and make it the default (if we find that it has a big influence for this method, otherwise we leave the SciPy default used elsewhere). Should be able to assess this soon 😉.
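
Purely to illustrate the two alternatives weighed here (the class names come from this discussion, and the noisy optimizer is a self-contained placeholder rather than the actual noisyopt or xDEM code):

import scipy.optimize

def noisy_optimizer(func, x0):
    # Placeholder for a noise-robust optimizer such as noisyopt.minimizeCompass;
    # it simply forwards to Nelder-Mead here to keep the sketch self-contained.
    return scipy.optimize.minimize(func, x0, method="Nelder-Mead")

# Alternative 1: a generic class plus an "opinionated" subclass that only swaps the default optimizer
class DhMinimize:
    def __init__(self, fit_optimizer=scipy.optimize.minimize):
        self.fit_optimizer = fit_optimizer

class GradientDescendingDhMinimize(DhMinimize):
    def __init__(self):
        super().__init__(fit_optimizer=noisy_optimizer)

# Alternative 2: no subclass; wrap the preferred optimizer (e.g. in fit.py) and
# either make it the single class's default or pass it explicitly
coreg = DhMinimize(fit_optimizer=noisy_optimizer)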

erikmannerfelt commented

I'd say either of these alternatives sounds good to me, actually! Whichever of them is implemented, I'm happy :)
