
What are reasonable selections of lag_duration for state estimation? #262

SteveMacenski opened this issue Dec 9, 2021 · 1 comment

SteveMacenski commented Dec 9, 2021

Hi,

I'm watching your ROSCon talk and had a question about lag_duration for the fixed-lag smoother. It makes sense to keep a rolling buffer of measurements to bound the problem, but the talk doesn't say what a "reasonable" set of values for this is.

Your example shows 0.5 seconds, but if someone asked me in a vacuum, I would have guessed something closer to 1-2 seconds. There's clearly a trade-off between how often you cull measurements to bound the graph size and how long you keep them for accuracy, so I was hoping you could shed some light, from tuning your own systems, on what a reasonable range of values is for a usual setup (IMU + wheel encoders, each publishing at 50-100 Hz). This doesn't have to be very scientific, I'm really looking to query your intuition from building this project, though if you have actual numbers, that's good too 😆

(e.g. is 5s just insane because of the number of constraints in the graph for a typical robot CPU, or is 0.5s an actual recommended value or just a toy example for the talk, where's the rough point of diminishing returns on accuracy vs buffer length, etc)

A reasonable follow-up question: is there a better trade-off in lowering the optimization frequency to allow more samples to arrive per period, and then increasing lag_duration to use the extra compute time for improved accuracy? I wonder what accuracy this can obtain when run at 20 Hz with usual robot odometry / IMU at 50-100 Hz, with respect to long-term drift relative to an EKF.


tl;dr

  • What is a reasonable range of lag_duration values for storing measurements / optimizing the graph for state estimation?
  • Relative to traditional filters, should we prefer to store values longer and update the filter more slowly, gathering more information per run, to get decent performance?
@svwilliams (Contributor) commented

Smoother lag window size

The answer is, of course, "it depends on the situation".
But I can at least talk about some general situations that may provide some guidance.

The robot_localization package implemented an EKF that maintained a single robot state. The fuse_models implementations are fairly faithful reproductions of those same constraints. This means each of those sensor constraints only involve a single variable. This leads to a very uninteresting graph:
[Figure: r_l-graph]
Each state is connected to the previous via a motion model, and each sensor measurement involves a single state.

In such a graph, I would expect the fixed-lag smoother to produce a nearly identical solution to the EKF regardless of the lag duration. If you only care about the value of the most recent state, none of the constraints reach back in time, and none of the constraint cost functions are horribly nonlinear, then the lag window provides very little benefit. Setting the lag window to zero (if that's even allowed) would be a rational thing to do.

Then the next obvious question is "Under what circumstances should a non-zero lag be used?"

Out-of-order measurements

Even though the robot_localization EKF only maintains a single state, it is able to handle out-of-order measurements (e.g. your visual odometry measurement for t=1.0s arrives after the IMU measurement for t=1.2s). This is accomplished by maintaining a log of the past N measurements and states. If an older measurement arrives, the EKF rolls back to a previous state and then replays the cached measurements until it is back at the current time.

No such system has been implemented for the fixed-lag smoother. The fixed-lag smoother relies on the lag window to solve this problem. If an old measurement arrives that exists within the lag window, the fixed-lag smoother just inserts it into the correct place in the graph. The next optimization cycle will incorporate that measurement as if it arrived exactly on time. This means the lag window needs to be long enough to handle the expected sensor latency.
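As a toy illustration (plain Python, not the fuse API), that behavior can be sketched as a time-sorted buffer: a late-arriving measurement is inserted in timestamp order if it still falls inside the lag window, and dropped otherwise. The class and its numbers below are hypothetical:

```python
import bisect

class LagWindow:
    """Toy model of a fixed-lag measurement buffer (not the fuse API).

    Timestamps are kept sorted; anything older than
    (newest - lag_duration) falls outside the window, and a
    late-arriving measurement is simply inserted in timestamp
    order if it still fits inside the window.
    """

    def __init__(self, lag_duration):
        self.lag_duration = lag_duration
        self.stamps = []  # sorted measurement timestamps

    def add(self, stamp):
        newest = max(self.stamps + [stamp])
        cutoff = newest - self.lag_duration
        if stamp < cutoff:
            return False  # too old: outside the lag window, dropped
        bisect.insort(self.stamps, stamp)
        # cull states that have fallen out of the window
        self.stamps = [t for t in self.stamps if t >= cutoff]
        return True

window = LagWindow(lag_duration=0.5)
for t in (1.0, 1.1, 1.2):      # in-order IMU measurements
    window.add(t)
accepted = window.add(1.05)    # late visual-odometry measurement
print(accepted, window.stamps) # True [1.0, 1.05, 1.1, 1.2]
```

The late t=1.05 measurement lands in its correct slot, so the next optimization cycle would see it as if it had arrived on time; a measurement older than the window (e.g. t=0.2 here) is simply rejected.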

Constraints involving past variables

A fairly obvious next step is to represent incremental odometry sensors (wheel encoders, visual odometry, IMU strapdown, etc) as "relative" measurements. Something with the flavor "X3 - X1 = 1.2m". That graph is a bit more interesting:
[Figure: relative_odom-graph]
With the graph looping back to past states, you need to have access to the past states just to include the measurement. Setting the lag to something around the longest update period of the "relative" sensors is a good starting place.
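To see why the older state must still be in the graph, here is a minimal linear least-squares sketch (plain Python, hypothetical values): an absolute prior on X1 plus the relative measurement X3 - X1 = 1.2 m, solved via the normal equations. If X1 had already been dropped from the window, the relative residual could not even be evaluated:

```python
def solve_two_state(prior, rel):
    """Closed-form least-squares solve of a tiny 2-state graph:
         r1 = x1 - prior       (absolute prior on x1)
         r2 = (x3 - x1) - rel  (relative odometry measurement)
    Normal equations (J^T J) x = J^T b with
         J = [[1, 0], [-1, 1]],  b = [prior, rel]
    """
    # J^T J = [[2, -1], [-1, 1]],  J^T b = [prior - rel, rel]
    a11, a12, a21, a22 = 2.0, -1.0, -1.0, 1.0
    b1, b2 = prior - rel, rel
    det = a11 * a22 - a12 * a21
    x1 = (b1 * a22 - a12 * b2) / det
    x3 = (a11 * b2 - b1 * a21) / det
    return x1, x3

print(solve_two_state(0.0, 1.2))  # → (0.0, 1.2)
```

With both variables in the window, the "X3 - X1 = 1.2m" constraint pins X3 relative to X1 exactly as the graph above suggests.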

Nonlinear cost functions

Certain cost functions are highly nonlinear. A classic example is observations of visual landmarks with a monocular camera. Small changes in the state estimate can have large impacts on the estimates of the landmark positions and the robot's pose.
[Figure: camera-graph]
If you can afford it, increasing the lag to include the longest expected visual feature track is ideal from a quality standpoint. But (a) it's hard to predict what the longest feature track will be, and (b) the computation can get expensive. The answer here will likely require some experimentation and tuning.

And visual landmarks are not the only nonlinear cost functions possible. Long ago I implemented an E-M step inside a measurement constraint to estimate multi-path effects. That can get exciting as the E-M step switches between discrete modes depending on new measurements.

Publisher latency

If estimation quality is very important and your system can tolerate higher latency, you could create a publisher that extracts the state estimate from the oldest state in the lag window instead of the newest state. While the fixed-lag smoother and EKF may generate similar solutions for the most recent state, the fixed-lag smoother will continue to improve the estimates of the older states as new measurements arrive. The state output from within the lag window will tend to be smoother and have a smaller uncertainty. In this case, the longer the lag window, the better, though there will be diminishing returns somewhere along the way. This may not be useful for the localization/state estimation problem, but there may exist some application where that is useful.
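A trivial sketch of that publisher choice (plain Python, not the fuse publisher API; `smoothed` is a made-up flag): publishing the oldest stamp in the window trades one lag duration of latency for a fully-smoothed estimate, while the newest stamp has minimal latency but keeps being refined after publication:

```python
def publish_stamp(window_stamps, smoothed=False):
    """Pick which state in the lag window to publish (toy sketch).

    smoothed=False -> newest state: lowest latency, still subject
                      to revision as future measurements arrive.
    smoothed=True  -> oldest state: delayed by ~lag_duration, but
                      already refined by everything in the window.
    """
    return min(window_stamps) if smoothed else max(window_stamps)

stamps = [1.0, 1.1, 1.2]
print(publish_stamp(stamps))                # 1.2 (low latency)
print(publish_stamp(stamps, smoothed=True)) # 1.0 (fully smoothed)
```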

Smoother update rate

I have fewer useful things to say about the update rate. Decreasing the update rate will generally result in:

  • Longer latency for the published result
  • More time spent per optimization, as there are more variables to optimize each time
  • Lower total CPU time used. Even though each optimization may consume more time, this is (generally) more than offset by having fewer cycles.

My general advice is to determine what the update rate needs to be based on the dynamics of the system. A slow-moving, statically stable, ground-based robot can run with a slower update rate than a quadcopter doing backflips. And I have no idea what the considerations are for an autonomous car navigating at highway speed. Once you know the target update rate, you can start working out accuracy-latency trade-offs, selecting the proper algorithms, and tuning the system to meet the goals.
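The back-of-the-envelope arithmetic behind those trade-offs can be written down (plain Python; the function name and the idea of sizing by the fastest sensor are my own illustration, not a fuse utility):

```python
def smoother_budget(lag_duration, sensor_rates_hz, update_rate_hz):
    """Rough sizing for a fixed-lag smoother configuration.

    Returns (approx. states in the window, new measurements arriving
    per optimization cycle, worst-case publishing latency in seconds).
    Assumes one state per measurement of the fastest sensor, which is
    a simplification of how states are actually created.
    """
    states_in_window = max(1, round(lag_duration * max(sensor_rates_hz)))
    meas_per_cycle = sum(r / update_rate_hz for r in sensor_rates_hz)
    worst_case_latency = 1.0 / update_rate_hz
    return states_in_window, meas_per_cycle, worst_case_latency

# Hypothetical: 0.1 s lag, IMU at 100 Hz + encoders at 50 Hz, 10 Hz updates
print(smoother_budget(0.1, (50.0, 100.0), 10.0))  # (10, 15.0, 0.1)
```

Halving the update rate doubles both the measurements folded into each optimization and the worst-case latency, which is exactly the trade-off described above.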

tl;dr

  • If you are running a "typical" slow, ground-based robot with simple sensors in the 50 Hz - 100 Hz range, a lag duration of roughly 0.1 s would be my initial guess. Something in the neighborhood of a 5-state history is a good place to start exploring.
  • Running at a slower update rate will likely reduce CPU usage, if you can afford the lower state publishing rate.
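For concreteness, those starting values might look something like this in a fixed-lag smoother configuration. The parameter names below follow the fuse_optimizers fixed-lag smoother as I understand it; treat the exact keys, namespace, and values as assumptions to verify against your version:

```yaml
# Illustrative starting point only, not a verified configuration.
state_estimator:
  lag_duration: 0.1           # seconds of history kept in the graph (~5-10 states at 50-100 Hz)
  optimization_frequency: 10.0  # Hz; lower this to trade publishing latency for CPU time
```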
