Here you can tune the parameters of the SynPF algorithm, described in *Robustness Evaluation of Localization Techniques for Autonomous Racing*. For a detailed treatment, read the thesis here.
The following options are in `cfg/pf2_params.yaml`. They affect the performance of PF2.
- `max_particles`: The number of particles in the PF.
  - Increasing the number of particles should improve the smoothness and accuracy of the pose estimate, as there are more proposals.
  - Do not increase this to 4000 or beyond, as CPU usage behaves erratically past that point; 2000-3000 particles were tested and are safe.
- `max_range`: The maximum range (m) up to which lidar scans are trusted.
  - If the track contains black tubes, it may be wise to tune this value lower; otherwise it can be set as high as 25 m.
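As a hypothetical illustration of what `max_range` does in practice (this is not PF2 code, and the value below is an assumption), out-of-range or invalid returns are typically clamped to the maximum trusted range before the sensor model sees them:

```python
import numpy as np

# Hypothetical illustration (not PF2 code): what "trusting" a scan only up to
# max_range typically means in practice.
MAX_RANGE = 10.0  # assumed value in metres; lower it when black tubes are on track

def clamp_scan(ranges):
    """Replace invalid or too-long lidar returns with MAX_RANGE."""
    ranges = np.asarray(ranges, dtype=np.float32).copy()
    ranges[~np.isfinite(ranges)] = MAX_RANGE   # NaN/inf returns become max-range hits
    return np.minimum(ranges, MAX_RANGE)       # cap everything else at max_range
```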
`rangelibc` is the ray-casting accelerator for localization that we inherit from the MIT PF. It allows us to evaluate the sensor model very quickly. For the best performance, choose between three pairs of `range_method` and `rangelib_variant` options:
- `glt`, `3`: Giant Lookup Table, fastest on CPU. Check memory usage in `htop` if the map is particularly large.
- `pcddt`, `3`: Pruned Compressed Directional Distance Transform, a slower but low-memory alternative to `glt`. Read the MIT PF docs for more details.
- `rmgpu`, `2`: Ray Marching on the GPU, fastest overall. Use it if a GPU is available!
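As a rough sketch of how these strings map onto rangelibc objects: the class names below follow the Python wrapper used by the MIT particle filter (`PyOMap`, `PyGiantLUTCast`, `PyCDDTCast`, `PyRayMarchingGPU`), and the `prune()` call and `THETA_DISCRETIZATION` value are assumptions; verify them against the fork you actually install.

```python
import range_libc  # the Python 3 fork mentioned below

THETA_DISCRETIZATION = 112  # assumed value; PF2 sets this in its own config

def make_range_method(range_method, omap, max_range_px):
    """Build a rangelibc ray caster. `omap` is a range_libc.PyOMap built from
    the occupancy grid; `max_range_px` is max_range converted to map pixels."""
    if range_method == "glt":
        return range_libc.PyGiantLUTCast(omap, max_range_px, THETA_DISCRETIZATION)
    if range_method == "pcddt":
        rm = range_libc.PyCDDTCast(omap, max_range_px, THETA_DISCRETIZATION)
        rm.prune()  # prune the CDDT into a PCDDT (assumed call, as in the MIT PF)
        return rm
    if range_method == "rmgpu":
        return range_libc.PyRayMarchingGPU(omap, max_range_px)
    raise ValueError(f"unknown range_method: {range_method}")
```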
- `z_hit`: Probability that we hit the intended target.
- `z_short`: Probability of an unexpected short reading. This could be turned up for head-to-head (H2H) mode.
- `z_max`: Probability of an out-of-range reading beyond `max_range`. This should be turned up for black tubes.
- `z_rand`: Probability of a reading anywhere in the valid range.

Note that these four values should sum to 1. In practice, since the lidar is quite accurate, we can set `z_hit` quite high and `z_max`, `z_rand` relatively low. These could also be tuned differently for TT vs. H2H, as H2H gives a higher chance of an unexpected short reading.

- `sigma_hit`: Standard deviation (m) of hitting the intended target. This should be relatively small, as the lidar's accuracy is high.
- `lambda_short`: Rate parameter of the short-reading exponential distribution. Increase it to make short readings more likely.
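These weights mix the four components of the classic beam sensor model. The sketch below is illustrative only and is not the PF2 implementation (PF2 evaluates the sensor model through rangelibc, typically via a precomputed table); the function name and the unnormalized short-reading term are assumptions for brevity.

```python
import numpy as np

def beam_likelihood(z, z_expected, max_range,
                    z_hit, z_short, z_max, z_rand,
                    sigma_hit, lambda_short):
    """Probability of a measured range z given the expected (ray-cast) range."""
    # Gaussian around the expected range
    p_hit = np.exp(-0.5 * ((z - z_expected) / sigma_hit) ** 2) / (sigma_hit * np.sqrt(2.0 * np.pi))
    # Exponential for unexpectedly short returns (e.g. another car in H2H);
    # truncation normalizer omitted for brevity
    p_short = lambda_short * np.exp(-lambda_short * z) if z < z_expected else 0.0
    # Point mass on max-range returns (e.g. black tubes absorbing the beam)
    p_max = 1.0 if z >= max_range else 0.0
    # Uniform over the valid range
    p_rand = 1.0 / max_range
    return z_hit * p_hit + z_short * p_short + z_max * p_max + z_rand * p_rand
```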
There are two motion models of interest, selected with the `motion_model` parameter. Both are ways to handle the Ackermann-like motion of the car (plus some side-slip).
Gaussian noise is added to the rotations and translations between frames, with variances governed by the equations in the thesis; this yields a distribution of possible motions.
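The exact equations are given in the thesis and are not reproduced here. Purely as an inferred sketch (an assumption based on the parameter descriptions below, with $\delta_{trans}$, $\delta_{rot1}$, $\delta_{rot2}$ denoting the frame-to-frame translation and rotation components), the noise scales take roughly the form:

$$
\sigma_{r1} = a_1\,|\delta_{rot1}| + \frac{a_2}{\max(\delta_{trans},\, \lambda_t)}, \qquad
\sigma_{t} = a_3\,\delta_{trans} + a_4\,\bigl(|\delta_{rot1}| + |\delta_{rot2}|\bigr), \qquad
\sigma_{r2} = a_1\,|\delta_{rot2}| + \frac{a_2}{\max(\delta_{trans},\, \lambda_t)}
$$

Under this form, a smaller $\lambda_t$ inflates the rotational noise at low translation, which matches the tuning advice below.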
- `alpha_1`: How rotation affects rotation variance.
  - Recommended to keep within (0.0, 1.0).
- `alpha_2`: How translation affects rotation variance.
  - Recommended to keep within (0.0, 0.05).
  - Coupled with `lam_thresh`: change these two values in unison.
- `alpha_3`: How translation affects translation variance.
  - Recommended to keep within (0.0, 5.0).
  - If facing localization problems at the end of the straight, consider increasing this; it will make particles spread out further longitudinally.
- `alpha_4`: How rotation affects translation variance.
  - Recommended to keep within (0.0, 1.0).
- `lam_thresh`: Minimum translation between frames for the TUM model to become effective.
  - If this is set lower, the rotational variance becomes higher; see the expression for $\sigma_{r1}$ above.
  - Recommended to keep within (0.01, 0.2).
  - Generally, starting with $a_2/\lambda_t = 0.2$ is safe.
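For intuition only, here is a minimal sketch of how these parameters could drive the per-particle noise, assuming the inferred noise form sketched above; `sample_tum_motion` is a hypothetical helper, not the PF2 implementation.

```python
import numpy as np

def sample_tum_motion(particles, d_trans, d_rot1, d_rot2,
                      alpha_1, alpha_2, alpha_3, alpha_4, lam_thresh, rng=None):
    """Apply one odometry increment to an (N, 3) array of [x, y, theta] particles."""
    rng = rng or np.random.default_rng()
    n = len(particles)

    # Noise scales (assumed form, see the sketched equations above)
    sigma_r1 = alpha_1 * abs(d_rot1) + alpha_2 / max(d_trans, lam_thresh)
    sigma_t  = alpha_3 * d_trans     + alpha_4 * (abs(d_rot1) + abs(d_rot2))
    sigma_r2 = alpha_1 * abs(d_rot2) + alpha_2 / max(d_trans, lam_thresh)

    # Per-particle noisy motion
    r1 = d_rot1  + rng.normal(0.0, sigma_r1, n)
    t  = d_trans + rng.normal(0.0, sigma_t,  n)
    r2 = d_rot2  + rng.normal(0.0, sigma_r2, n)

    particles[:, 0] += t * np.cos(particles[:, 2] + r1)
    particles[:, 1] += t * np.sin(particles[:, 2] + r1)
    particles[:, 2] += r1 + r2
    return particles
```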
The `arc` motion model assumes that the car moves in an arc, following the Kinematic Bicycle Model.

CAUTION: The `arc` motion model is quite experimental and is not included in the paper. Use at your own risk.
To compute the motion update for each particle:

First, find the center of rotation. Because we assume the car travels in an arc, the arc's radius can be recovered from the odometry increment. Then find the coordinates of the center of the circle; which side of the car it lies on depends on the sign of the heading change. We can then traverse points along the arc to obtain the displacements $\Delta x$, $\Delta y$, $\Delta\theta$.

Gaussian noise is added to these displacements. The noise in $x$ scales with the change in $x$ via $k_x$, with a floor of $\sigma_{x,min}$. The noise in $y$ scales with the change in $y$ via $k_y$ and, once $|\Delta x|$ exceeds $\Delta_{xy,min}$, also with the change in $x$ via $k_{xy}$; it is bounded between $\sigma_{y,min}$ and $\sigma_{y,max}$. Lastly, the noise in $\theta$ scales with the change in heading via $k_\theta$, with a floor of $\sigma_{\theta,min}$ (a hedged sketch of this update follows the parameter list below).
These values correspond to these parameters:
- `motion_dispersion_arc_x` ($k_x$): how change in x affects x noise
- `motion_dispersion_arc_y` ($k_y$): how change in y affects y noise
- `motion_dispersion_arc_theta` ($k_\theta$): how change in theta affects theta noise
- `motion_dispersion_arc_xy` ($k_{xy}$): how change in x affects y noise
- `motion_dispersion_arc_x_min` ($\sigma_{x,min}$): minimum noise in x
- `motion_dispersion_arc_y_min` ($\sigma_{y,min}$): minimum noise in y
- `motion_dispersion_arc_y_max` ($\sigma_{y,max}$): maximum noise in y
- `motion_dispersion_arc_theta_min` ($\sigma_{\theta,min}$): minimum noise in theta
- `motion_dispersion_arc_xy_min_x` ($\Delta_{xy,min}$): minimum $\Delta x$ before it affects the y noise scaling
The default values in `pf2_params` are reasonable.
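For intuition, here is a minimal sketch of what such an arc update could look like. It is illustrative only, not the PF2 source: the geometry (radius from arc length over heading change, chord in the body frame) and the way the `motion_dispersion_arc_*` parameters scale the noise are assumptions based on the descriptions above, and the short dict keys (`arc_x`, `arc_x_min`, ...) are hypothetical shorthand, not the real parameter names.

```python
import numpy as np

def arc_step(particles, ds, dtheta, p, rng=None):
    """Propagate (N, 3) particles [x, y, theta] along an arc of length ds with
    heading change dtheta, then add displacement-dependent Gaussian noise.
    `p` is a dict holding (shorthand keys for) the motion_dispersion_arc_* params."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    theta = particles[:, 2]

    if abs(dtheta) < 1e-6:
        # Degenerate case: straight-line motion
        dx_body, dy_body = ds, 0.0
    else:
        radius = ds / dtheta                       # arc radius from arc length and heading change
        dx_body = radius * np.sin(dtheta)          # chord components in the body frame;
        dy_body = radius * (1.0 - np.cos(dtheta))  # the sign of dtheta decides the side of the centre

    # Noise scales grow with the corresponding displacement, with floors/ceilings
    sigma_x = max(p["arc_x"] * abs(dx_body), p["arc_x_min"])
    sigma_y = p["arc_y"] * abs(dy_body)
    if abs(dx_body) > p["arc_xy_min_x"]:           # large |dx| also widens the lateral spread
        sigma_y += p["arc_xy"] * abs(dx_body)
    sigma_y = min(max(sigma_y, p["arc_y_min"]), p["arc_y_max"])
    sigma_theta = max(p["arc_theta"] * abs(dtheta), p["arc_theta_min"])

    dx = dx_body + rng.normal(0.0, sigma_x, n)
    dy = dy_body + rng.normal(0.0, sigma_y, n)
    dth = dtheta + rng.normal(0.0, sigma_theta, n)

    # Rotate the body-frame displacement into each particle's world frame
    particles[:, 0] += dx * np.cos(theta) - dy * np.sin(theta)
    particles[:, 1] += dx * np.sin(theta) + dy * np.cos(theta)
    particles[:, 2] += dth
    return particles
```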
The following settings were tested on an i7-1165G7 @ 2.80GHz.
| Test Case | Range Method | Rangelib Variant | Timing | Comment |
|---|---|---|---|---|
| 1 | PCDDT | 4 (CDDT Optimized) | 15-25 iter/s | Most promising |
| 2 | PCDDT | 3 (One Shot) | 13-21 iter/s | |
| 3 | PCDDT | 2 | 11-24 iter/s | Default |
| 4 | PCDDT | 1 | 11-17 iter/s | |
| 5 | PCDDT | 0 | 15-25 iter/s | Not working |
| 6 | RM | 2 | 11-14 iter/s | |
| 7 | CDDT | 2 | 11-17 iter/s | |
| 8 | GLT | 2 | 22-33 iter/s | Memory intensive |
If you are in the `race_stack` Docker environment, this should already be set up by default. If not, follow the commands used for the Docker setup. Note that we do not use the original `RangeLibc` library but rather a fork of it, which has updated install instructions for Python 3.
If you found our work helpful in your research, we would appreciate it if you cite it as follows:
@misc{lim2024robustness,
title={Robustness Evaluation of Localization Techniques for Autonomous Racing},
author={Tian Yi Lim and Edoardo Ghignone and Nicolas Baumann and Michele Magno},
year={2024},
eprint={2401.07658},
}
and if the full race stack, available here, was also helpful, please cite it as:
@misc{baumann2024forzaeth,
title={ForzaETH Race Stack - Scaled Autonomous Head-to-Head Racing on Fully Commercial off-the-Shelf Hardware},
author={Nicolas Baumann and Edoardo Ghignone and Jonas Kühne and Niklas Bastuck and Jonathan Becker and Nadine Imholz and Tobias Kränzlin and Tian Yi Lim and Michael Lötscher and Luca Schwarzenbach and Luca Tognoni and Christian Vogt and Andrea Carron and Michele Magno},
year={2024},
eprint={2403.11784}
}
This work was inspired by the MIT RACECAR project, available here.