
Academic Question: How does the fuse approach relate to Moving Horizon Estimation #57

Open
beetleskin opened this issue May 23, 2019 · 2 comments


@beetleskin

How does the fuse approach relate to Moving Horizon Estimation?

@svwilliams
Contributor

A big caveat before I answer: I am not overly familiar with MHE. Everything that follows is from me skimming the Wikipedia page for five minutes.

MHE sounds very similar to a fixed-lag smoother, where the state estimates for the last N seconds are found by minimizing the measurement errors inside of that window. As N approaches infinity, the fixed-lag smoother becomes a full SLAM/batch optimization system. And as N approaches 0, the fixed-lag smoother becomes a classic extended Kalman filter.

In a fixed-lag smoother, the variables that "fall out the back" of the time window are marginalized out of the system. This propagates the information from the previous measurements that are no longer inside the time window onto the variables that remain inside the window. In the absence of additional information or measurements, the optimal solution of the remaining variables stays the same before and after the marginalization process. It sounds like MHE handles this aspect of the rolling time window differently, possibly by adding a prior distribution around one or more of the previously optimized states.
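To make the "optimal solution stays the same" claim concrete, here is a toy sketch of marginalization on the information (inverse-covariance) form of a Gaussian, using the Schur complement. All the numbers are made up for illustration; this is not the fuse API.

```python
# Joint information matrix over (x_old, x_new), illustrative values:
#   H = [[H_oo, H_on], [H_no, H_nn]],  gradient b = (b_o, b_n)
H_oo, H_on, H_no, H_nn = 4.0, 1.0, 1.0, 3.0
b_o, b_n = 2.0, 1.0

# Full joint solution of H x = b (2x2, solved by hand):
det = H_oo * H_nn - H_on * H_no
x_new_full = (H_oo * b_n - H_no * b_o) / det

# Marginalize x_old with the Schur complement: its information is
# folded onto x_new, which is what "propagating the information from
# old measurements onto the remaining variables" means.
H_marg = H_nn - H_no * H_on / H_oo
b_marg = b_n - H_no * b_o / H_oo
x_new_marg = b_marg / H_marg

print(x_new_full, x_new_marg)  # identical: marginalization preserves the optimum
```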

The objective function in MHE seems largely analogous to that of typical Bayesian/least squares approaches:
minimize over x, p:  Σ [ w_y * (x - y)^2 + w_x * (x - x_hat)^2 + w_p * delta_p^2 ]
w_y * (x - y)^2: That looks like a sensor model, describing the error between a measurement and a predicted measurement based on the state variables. And w_y would be the inverse of the measurement noise covariance.
w_x * (x - x_hat)^2: Similarly, that looks like a motion model describing the error between the current state and a predicted state based on some kinematic/dynamic model. And w_x would be the inverse of the process noise covariance.
w_p * delta_p^2: This term is not typically included in SLAM formulations, but seems similar to a "regularization" term.
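The three terms above can be written down directly as a scalar objective over the horizon. The weights and values below are invented purely for illustration:

```python
def mhe_objective(xs, ys, x_hats, delta_ps, w_y=1.0, w_x=1.0, w_p=0.1):
    """Sum of the three weighted quadratic terms over the horizon."""
    J = 0.0
    for x, y, x_hat, dp in zip(xs, ys, x_hats, delta_ps):
        J += w_y * (x - y) ** 2       # sensor-model term
        J += w_x * (x - x_hat) ** 2   # motion-model term
        J += w_p * dp ** 2            # regularization-like term
    return J

# Two-step horizon; sums to 0.041 up to floating-point rounding.
print(mhe_objective([1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [0.0, 0.1]))
```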

However, I'm less clear on MHE's optimization scheme for that objective function. The formulation looks like a standard least-squares form. However, the Wikipedia article mentions using an Euler-Lagrange method to "explore state trajectories". So MHE may select candidate solutions differently than the "move in the direction of the Jacobian" method typical of nonlinear least-squares solvers.

fuse itself is more of a framework for constructing Bayesian/least squares optimization problems, and less of an "approach". Users can create reusable sensor model and motion model "factories" that send individual cost functions to an optimizer. Currently fuse only provides a full SLAM/batch optimizer, though a fixed-lag smoother is in the works. Internally, the fuse optimizers utilize Ceres Solver to perform the actual nonlinear least-squares optimization to solve for the optimal state values.

That said, one could probably approximate MHE using fuse:

  • w_y * (x - y)^2 terms would be implemented by one or more derived fuse_core::SensorModel objects
  • w_x * (x - x_hat)^2 terms would be implemented by one or more derived fuse_core::MotionModel objects
  • p would probably be a custom variable type, which would derive from fuse_core::Variable
  • w_p * delta_p^2 would be implemented by either a derived fuse_core::SensorModel or a derived fuse_core::MotionModel. I'm not familiar enough with MHE to really say. It depends on whether p variables are added to the problem on their own (like a sensor), or if they are added in response to a new state being created (like a motion model). I suspect they act more like a motion model.
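A toy version of what the underlying optimizer would do with these cost terms: stack the weighted residuals and take a Gauss-Newton step, "moving in the direction of the Jacobian" as described above. This is plain Python with made-up numbers, not the fuse or Ceres API:

```python
import math

# One scalar state with two quadratic cost terms, mirroring the mapping above:
#   sensor residual  sqrt(w_y) * (x - y)      -> "SensorModel" term
#   motion residual  sqrt(w_x) * (x - x_hat)  -> "MotionModel" term
w_y, y = 2.0, 1.0      # illustrative weight / measurement
w_x, x_hat = 1.0, 4.0  # illustrative weight / predicted state

def residuals(x):
    return [math.sqrt(w_y) * (x - y), math.sqrt(w_x) * (x - x_hat)]

def jacobian(x):
    return [math.sqrt(w_y), math.sqrt(w_x)]  # d r_i / d x

# Gauss-Newton step: dx = -(J^T J)^{-1} J^T r.
# Because these residuals are linear in x, one step reaches the optimum.
x = 0.0
r, J = residuals(x), jacobian(x)
x -= sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)

print(x)  # analytic optimum: (w_y*y + w_x*x_hat) / (w_y + w_x) = 2
```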

Ultimately all of these cost terms would be minimized using a nonlinear least-squares solver instead of whatever optimization scheme is prescribed by MHE. But I suspect the real "magic" of MHE is in the regularizing terms.

@beetleskin
Author

Thanks for the detailed information, this helps a lot.

Afaik, MHE is more or less just the approach; the choice of solver is not pre-defined. It's also somewhat the inverse of Model Predictive Control, where the problem is flipped: find the best control values such that the trajectory satisfies the given constraints and minimizes J.
