Add liekf example for se2_localization #308
base: devel
Conversation
examples/se2_localization_liekf.cpp (Outdated)

```cpp
H.topLeftCorner<2, 2>() = Matrix2d::Identity();
H.topRightCorner<2, 1>() = manif::skew(1.0) * X.translation();
```
Since the expectation
This comment I guess will not appear in the final commits.
Yes, this comment will not appear in the commits.
This matrix isn't right. See e.g. chapter 4, "Simplified car example", in "The Invariant Extended Kalman Filter as a Stable Observer" by A. Barrau and S. Bonnabel.
This measurement model is of the form y = x.b + v and is thus left-invariant. If I'm not mistaken, H should then be derived with the right-plus (which corresponds to left invariance). If you work the math you should find something along these lines.
OK, this makes more sense.
Then you need the Jacobian of X.inv(y - h(X)), and not of h(X). OK, to find such a Jacobian, just use the rules for right-Jacobians in the paper; you will certainly find [-I, 0], I guess. You just need to know whether to differentiate wrt. X or wrt. \hat X -- I am unsure at the moment.
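For concreteness, here is a minimal sketch of the innovation and the [-I, 0] Jacobian being discussed, assuming manif's SE2 API and a beacon b at the origin; this illustrates the result above and is not the PR's code:

```cpp
#include <manif/SE2.h>
#include <Eigen/Dense>

int main()
{
    manif::SE2d X = manif::SE2d::Random();  // some state estimate
    Eigen::Vector2d y = X.translation();    // noiseless GPS reading: y = X . b, with b = 0

    // Innovation z = X^{-1} . y, i.e. R^T (y - t); equals b = 0 in the noiseless case.
    Eigen::Vector2d z = X.inverse().act(y);

    // Jacobian of z wrt a right-plus perturbation of X, evaluated at z = b = 0:
    // H = [ -I_2, -skew(1) * b ] = [ -I_2, 0 ].
    Eigen::Matrix<double, 2, 3> H = Eigen::Matrix<double, 2, 3>::Zero();
    H.topLeftCorner<2, 2>() = -Eigen::Matrix2d::Identity();

    (void)z;
    (void)H;
    return 0;
}
```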
I get the desired result from the Jacobian of X.inv(y - h(X)). Am I doing this correctly? Does this make sense?
looks good
After reading the papers by Axel Barrau and Silvere Bonnabel (2017, 2018) and Jonathan Arsenault (2019), I believe I finally understand the concepts. Let me use this example to summarize my findings and compare the results with the manif paper.
Example: SE(2) with GPS-like measurement
Recap the setting:

We consider the robot pose $X_t \in SE(2)$, the control signal $u_t \in \mathfrak{se}(2)$ (a body-frame twist with process noise $w_t \sim \mathcal{N}(0, Q_t)$), and the GPS measurement

$$y_t = X_t \, b + \delta_t,$$

where $b = (0, 0)^\top$ is the origin and $\delta_t \sim \mathcal{N}(0, N_t)$ is the measurement noise. Let $\hat{X}_t$ denote the estimated state.

Predict step:

Axel and Silvere's approach:

They define the left-invariant error as $\eta_t = X_t^{-1} \hat{X}_t = \exp(\xi_t)$. One can find a linearized dynamics for this error,

$$\xi_{t+1} = F_t \, \xi_t + G_t \, w_t.$$

The Jacobians are given by $F_t = \mathrm{Ad}_{\exp(u_t)}^{-1}$ and $G_t = J_r(u_t)$ (up to a sign on $G_t$, which does not affect the covariance). Thus, the predicted state and error covariance are computed as follows:

$$\hat{X}_{t+1} = \hat{X}_t \exp(u_t), \qquad P_{t+1} = F_t P_t F_t^\top + G_t Q_t G_t^\top.$$
manif approach:

Their approach involves defining the error on the tangent space and applying the error-state extended Kalman filter on the Lie group. To facilitate this, they introduce the right-plus and right-minus operators as follows:

$$X \oplus \xi = X \exp(\xi), \qquad Y \ominus X = \log(X^{-1} Y).$$

Next, using a Taylor expansion of the motion model $X_{t+1} = X_t \oplus u_t$, let $\xi_t = X_t \ominus \hat{X}_t$ be the error state. We can now rewrite the earlier equation as:

$$\xi_{t+1} = F_t \, \xi_t + G_t \, w_t,$$

where the Jacobians are defined as follows:

For $F_t$: the right-Jacobian of $X_t \oplus u_t$ with respect to $X_t$, i.e. $F_t = \mathrm{Ad}_{\exp(u_t)}^{-1}$.

For $G_t$: the right-Jacobian of $X_t \oplus u_t$ with respect to $u_t$, i.e. $G_t = J_r(u_t)$.

Note that in the manif approach, these Jacobians are returned directly by the library's $\oplus$ operation. Therefore, the predicted state and error covariance are

$$\hat{X}_{t+1} = \hat{X}_t \oplus u_t, \qquad P_{t+1} = F_t P_t F_t^\top + G_t Q_t G_t^\top,$$

which match exactly with the results in Axel and Silvere's approach.
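As a sketch, the predict step both derivations arrive at could look as follows in manif; this mirrors the pattern of the library's se2_localization example, where `J_x` and `J_u` are the right-plus Jacobians, so `F = J_x` and `G = J_u`:

```cpp
#include <manif/SE2.h>
#include <Eigen/Dense>

int main()
{
    manif::SE2d X = manif::SE2d::Identity();                  // estimated state
    manif::SE2Tangentd u = manif::SE2Tangentd::Zero();        // control twist
    Eigen::Matrix3d P = Eigen::Matrix3d::Identity();          // error covariance
    Eigen::Matrix3d Q = 0.01 * Eigen::Matrix3d::Identity();   // process noise covariance

    manif::SE2d::Jacobian J_x, J_u;  // F and G

    // X <- X (+) u; manif fills the right-plus Jacobians:
    // J_x = Ad(exp(u))^{-1} and J_u = Jr(u), matching both derivations above.
    X = X.plus(u, J_x, J_u);
    P = J_x * P * J_x.transpose() + J_u * Q * J_u.transpose();

    return 0;
}
```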
Update step:

Axel and Silvere's approach:

Now, define the innovation

$$z_t = \hat{X}_t^{-1} \, y_t - b,$$

where $y_t$ and $b$ are embedded in homogeneous coordinates; you will obtain the 2-vector form of $z_t$ by removing the last row of zeros in these matrices. In addition, we have the linearization

$$z_t = H_t \, \xi_t + V_t \, \delta_t,$$

where $H_t = \begin{bmatrix} -I_2 & 0 \end{bmatrix}$ and $V_t = \hat{R}_t^\top$. Thus, the updated state and error covariance are

$$K_t = P_t H_t^\top \left(H_t P_t H_t^\top + V_t N_t V_t^\top\right)^{-1}, \qquad \hat{X}_t^+ = \hat{X}_t \exp(-K_t z_t), \qquad P_t^+ = (I - K_t H_t) P_t.$$
manif approach:

Define the innovation $z_t = \hat{X}_t^{-1} \, y_t - b$ as before. Now let $\xi_t = X_t \ominus \hat{X}_t$, so that $X_t = \hat{X}_t \exp(\xi_t)$. So, we can rewrite the innovation as

$$z_t = H_t \, \xi_t + V_t \, \delta_t,$$

where $H_t = \begin{bmatrix} I_2 & 0 \end{bmatrix}$ and $V_t = \hat{R}_t^\top$. Furthermore, we have the Kalman gain

$$K_t = P_t H_t^\top \left(H_t P_t H_t^\top + V_t N_t V_t^\top\right)^{-1},$$

where $N_t$ is the covariance of $\delta_t$. Thus, the updated state and error covariance are given by:

$$\hat{X}_t^+ = \hat{X}_t \oplus (K_t z_t), \qquad P_t^+ = (I - K_t H_t) P_t.$$
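A sketch of this update in manif code, under the conventions above; variable names are illustrative, and the sign of the correction is precisely the point the surrounding discussion is careful about:

```cpp
#include <manif/SE2.h>
#include <Eigen/Dense>

int main()
{
    manif::SE2d X = manif::SE2d::Random();                    // predicted state
    Eigen::Matrix3d P = Eigen::Matrix3d::Identity();          // predicted covariance
    Eigen::Matrix2d N = 0.01 * Eigen::Matrix2d::Identity();   // GPS noise covariance
    Eigen::Vector2d y = X.translation();                      // measurement (noiseless here)

    // Innovation in the body frame: z = X^{-1} . y - b, with b = 0.
    Eigen::Vector2d z = X.inverse().act(y);

    // Linearization wrt the right-plus error: H = [I_2, 0], V = R^T.
    Eigen::Matrix<double, 2, 3> H = Eigen::Matrix<double, 2, 3>::Zero();
    H.topLeftCorner<2, 2>().setIdentity();
    Eigen::Matrix2d V = X.rotation().transpose();

    // Standard Kalman gain and correction, applied with right-plus.
    Eigen::Matrix2d S = H * P * H.transpose() + V * N * V.transpose();
    Eigen::Matrix<double, 3, 2> K = P * H.transpose() * S.inverse();

    manif::SE2Tangentd dx;
    dx = K * z;          // correction in the tangent space
    X = X.plus(dx);      // right-plus update, per the sign convention above
    P = (Eigen::Matrix3d::Identity() - K * H) * P;

    return 0;
}
```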
Summary:

| | Axel and Silvere | manif |
|---|---|---|
| error | $\eta_t = X_t^{-1} \hat{X}_t = \exp(\xi_t)$ | $\xi_t = X_t \ominus \hat{X}_t$ |
| Predict | $P_{t+1} = F_t P_t F_t^\top + G_t Q_t G_t^\top$ | $P_{t+1} = F_t P_t F_t^\top + G_t Q_t G_t^\top$ |
| Update | $\hat{X}_t^+ = \hat{X}_t \exp(-K_t z_t)$, with $H_t = [-I_2 \;\; 0]$ | $\hat{X}_t^+ = \hat{X}_t \oplus K_t z_t$, with $H_t = [I_2 \;\; 0]$ |
Noticing the sign difference in the update step, one should be very careful about how the error is defined when implementing the invariant Kalman filter.
Also, in this example, since the state error in both the predict and update steps is left-invariant, there is no need to perform any covariance transformation from left-invariant to right-invariant or vice versa.
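For reference, had such a transformation been needed, a first-order sketch using manif's adjoint could look like this; it assumes the standard relation $\xi_\ell = \mathrm{Ad}_X \, \xi_r$ between left-plus and right-plus errors:

```cpp
#include <manif/SE2.h>
#include <Eigen/Dense>

int main()
{
    manif::SE2d X = manif::SE2d::Random();
    Eigen::Matrix3d P_right = Eigen::Matrix3d::Identity();  // covariance of the right-plus error

    // To first order, exp(xi_left) * X = X * exp(xi_right)  =>  xi_left = Adj(X) * xi_right,
    // so the covariances relate through the adjoint:
    Eigen::Matrix3d Ad = X.adj();
    Eigen::Matrix3d P_left = Ad * P_right * Ad.transpose();

    (void)P_left;
    return 0;
}
```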
Comments about left-right invariances require answers from the author of the MR.
examples/se2_localization_liekf.cpp (Outdated)

```cpp
H.topLeftCorner<2, 2>() = Matrix2d::Identity();
H.topRightCorner<2, 1>() = manif::skew(1.0) * X.translation();
```
This comment I guess will not appear in the final commits.
Sorry, I forgot to post the simulation results here. The results obtained from a 10-step simulation of this example are shown below.
These results are inconclusive. The unfiltered solution has basically the same error as the filtered one, meaning that the filter did not do a great job. You should tune the noises so that the unfiltered solution departs more from the nominal trajectory; then we can evaluate the effect of the filter. As it is now, the filter has no effect and we cannot assess whether it is working correctly. You can also try a longer experiment to give the noisy unfiltered trajectory time to deviate from the nominal one.
I have increased the number of steps from 10 to 50 while keeping all the noise levels the same. I observed that there are instances where the values of
You can increase the process noise, e.g. by a factor of 10 (covariance by a factor of 100), and/or reduce the measurement noise by the same factor. This way you emphasize the effect of the GNSS corrections. With the longer simulation, things start to be visible, but I think more evidence is needed. Try one or both of the ideas above, and keep the length the same as in the long run you did now.
Thank you. I increased the control noise by a factor of 10 and reduced the measurement noise by a factor of 10. Here are the results from the new simulation. I appreciate your insight.
The orientation should be observable through the combination of the motion model (which has a sense of the robot's direction) and the GPS. As long as the angular process noise is not too large, the orientation should be observable. You can reduce the angular process noise by a factor of 10 and try again. We want to see the unfiltered estimate diverge unboundedly (even if slowly) while the filtered one stays close to the nominal; this is not yet visible in your results. Also, please report the tuning values of the noises. Position estimates X and Y do converge to the nominal values, so they are good.
Could you please check if the results make sense? I have changed only the value of the angular process noise.
So here is what happens. You are using motion steps of 10 cm forward and 0.1 rad left, but perturbations, at each step, of 1 m forwards and sideways and up to 1 rad turning. The noises are much larger than the motion steps. Under these conditions, there is no way to guess the orientation of the vehicle from its trajectory; the motion is just too erratic. XY converge because of GNSS, while theta is practically unobservable. To make theta observable, you need to make the XY noise smaller than the motion step, which is 10 cm, so I recommend u_sigma of at most 0.1, 0.1 for XY. For theta, you already tried 0.05 and 0.1; they should be about right, but at the upper limit. I will accept this PR. You can try to lower u_sigma so it is not larger than u_nominal and see if you can make theta observable. You may need long trajectories to allow X_unfiltered to diverge while X_filtered stays around X_nominal. Thanks for the effort.
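A sketch of this suggested tuning, borrowing the `u_sigmas` variable style from manif's examples; the values are the recommendation above and the covariance construction is illustrative:

```cpp
#include <Eigen/Dense>

int main()
{
    // Recommended: XY sigmas at most the 10 cm motion step,
    // theta sigma in the 0.05-0.1 rad range.
    Eigen::Vector3d u_sigmas;        // variable name borrowed from manif's examples
    u_sigmas << 0.1, 0.1, 0.05;      // [m], [m], [rad]

    // Process noise covariance as a diagonal of squared sigmas.
    Eigen::Matrix3d U = (u_sigmas.array() * u_sigmas.array()).matrix().asDiagonal();

    (void)U;
    return 0;
}
```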
Thank you! I forgot to check whether the number of motion steps makes sense after multiplying the control by a factor of 10. I understand that perturbations should be much smaller than the motions. I have conducted another simulation with 1,000 steps, configured as requested. Based on this, I believe there is sufficient evidence to conclude that the filter is working properly. Really appreciate your insight and help!
examples/se2_localization_liekf.cpp (Outdated)

```cpp
dx = K * z;

// Update
X = X.lplus(dx);
```
I believe this should be `X.lplus(-dx)`.
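In code, the suggested change to the quoted hunk would read as follows; this is a sketch of the reviewer's suggestion, not necessarily the final PR code:

```cpp
dx = K * z;

// Update: flip the sign of the correction so it is
// consistent with the chosen error definition.
X = X.lplus(-dx);
```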
Please see my comment here: #308 (comment)
Yes, the last step is wrong.
Updated the PR based on the discussion above. The following results are for 1,000 steps.
This PR provides an example of the left IEKF method based on position measurements (GPS-like). The example was adapted from here with some corrections in the update step.