KFCore is a lightweight and efficient Kalman Filter library implemented in C, C++, and MATLAB. Designed for both embedded systems and research applications, KFCore offers numerically stable algorithms with minimal dependencies and low memory usage. By leveraging advanced formulations and optimized computations, KFCore provides a robust solution for state estimation in various projects.
Are you calculating the inverse of a matrix in your Kalman filter code?
Or does your Kalman filter implementation run into numerical problems with an ill-conditioned measurement sensitivity matrix $\mathbf{H}$? Then you might want to use the UDU filter in this library.
You could use the Joseph Form to mitigate the numerical issues of the standard/vanilla Kalman equations, but this formulation requires more matrix operations, so it is slower and potentially still numerically inferior.
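For reference, the Joseph-form covariance update (a standard result, not specific to this library) is

$$\mathbf{P}^{+} = (\mathbf{I} - \mathbf{K}\mathbf{H})\,\mathbf{P}\,(\mathbf{I} - \mathbf{K}\mathbf{H})^{T} + \mathbf{K}\mathbf{R}\mathbf{K}^{T},$$

which preserves symmetry and positive semi-definiteness at the cost of extra matrix products compared to the simple form $\mathbf{P}^{+} = (\mathbf{I} - \mathbf{K}\mathbf{H})\,\mathbf{P}$.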
- **High Numerical Stability**
  - Implements the UDU (Bierman/Thornton) algorithms for superior numerical stability compared to the standard Kalman Filter formulations (2).
  - Includes the Takasu formulation, a fast and efficient implementation if you don't want a square root formulation such as the UDU filter.
- **Focus on Embedded Targets**
  - Uses only static memory allocation, ensuring deterministic runtime and memory usage suitable for resource-constrained environments.
  - No dynamic memory allocation, making it ideal for embedded systems.
- **No External Dependencies**
  - Written in plain C with no external code dependencies.
  - Easy integration into any project without the need for additional libraries.
- **Mathematical Optimizations**
  - Leverages the symmetry and positive semi-definiteness of covariance matrices.
  - Takes advantage of triangular matrix structure.
- **Robust**
  - Detects measurement errors and rejects them with a χ² statistical test (a sketch of the test statistic follows this list).
  - Option to reduce the influence of potential outliers based on the Mahalanobis distance (Chang, 2014).
- **Optional: Optimized Computations with BLAS Interface**
  - Utilizes a BLAS interface to take advantage of optimized BLAS libraries on the target platform.
  - By default provides a small built-in miniblas library for platforms without an available BLAS library.
- **Optional: C++ Support with Eigen**
  - Offers a C++ version based on the Eigen math library.
  - Features template functions for convenience in projects already using Eigen.
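As a sketch of the χ² outlier test mentioned under *Robust* above (standard innovation gating; KFCore's exact interface may differ): with residual $\delta\mathbf{z}$ and innovation covariance $\mathbf{S} = \mathbf{H}\mathbf{P}\mathbf{H}^{T} + \mathbf{R}$, the test statistic

$$\gamma = \delta\mathbf{z}^{T}\,\mathbf{S}^{-1}\,\delta\mathbf{z}$$

approximately follows a χ² distribution with $m$ degrees of freedom under nominal conditions, so a measurement can be rejected when $\gamma$ exceeds a chosen χ² threshold. The Mahalanobis distance $\sqrt{\gamma}$ can likewise be used to down-weight suspected outliers (Chang, 2014).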
| Feature | UDU | Takasu |
|---|---|---|
| Numerical stability | Excellent | Good |
| Speed | Fast | Very fast |
| Outlier detection | ✅ | ✅ |
| MATLAB implementation available | ✅ | ✅ |
| C implementation available | ✅ | ✅ |
| C++ implementation available | ❌ | ✅ |
| Python implementation available | ❌ | ✅ |
| No measurement preprocessing required | ❌ | ✅ |
| Covariance matrix directly available | ❌ | ✅ |
The UDU formulation offers superior numerical stability but requires a UDU decomposition of the covariance matrix (hence the name) and additionally a decorrelation of the measurements before they are processed. This is not the case for the Takasu formulation, which is basically an efficient implementation of the vanilla Kalman filter equations.
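For reference, the vanilla Kalman filter measurement update that the Takasu formulation implements efficiently is (standard notation, not library-specific):

$$\mathbf{K} = \mathbf{P}\mathbf{H}^{T}\left(\mathbf{H}\mathbf{P}\mathbf{H}^{T} + \mathbf{R}\right)^{-1}, \qquad \mathbf{x}^{+} = \mathbf{x} + \mathbf{K}\,\delta\mathbf{z}, \qquad \mathbf{P}^{+} = (\mathbf{I} - \mathbf{K}\mathbf{H})\,\mathbf{P}.$$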
If speed is the top priority and an optimized LAPACK/BLAS implementation is available, consider using the Takasu formulation. However, it may struggle with stability in scenarios involving:

- A large number of state variables
- A large number of measurements
- Very high precision measurements
- Poorly conditioned measurement sensitivity matrices $\mathbf{H}$
| Average Run Time Test | UDU C | Takasu C | Takasu C++ |
|---|---|---|---|
| Intel i5-13600KF Desktop CPU - Kalman Update Routine | 2.95 µs | 1.97 µs | 12.06 µs |
| STM32F429 180 MHz Embedded CPU (ARM Cortex M4) - Kalman Update Routine | 103 µs | 135 µs | N/A |
| STM32F429 180 MHz Embedded CPU (ARM Cortex M4) - Kalman Prediction Routine | 593 µs | 393 µs | N/A |

(based on commit 2b35963)
All benchmarks with -O2 optimization. 15 elements state vector in benchmarks, 15x15 covariance matrix, 3x1 measurement vectors. C++ implementation not tested on the embedded platform.
Clone the repository:

```
git clone https://github.com/jnz/KFCore.git
```
- Copy `linalg.c` and `linalg.h` from the `c/` directory to your project.
- Copy `miniblas.c` and `miniblas.h` from the `c/` directory to your project.
- If your platform has an optimized BLAS library that you want to use, you can exclude miniblas.
How to add the KFCore Takasu formulation to your project
- Add the files `kalman_takasu.c` and `kalman_takasu.h` from the `c/` directory to your project.
- Include the header file in your code:

```c
#include "kalman_takasu.h"
```
How to add the KFCore UDU formulation to your project
- Add the files `kalman_udu.c` and `kalman_udu.h` from the `c/` directory to your project.
- Include the header file in your code:

```c
#include "kalman_udu.h"
```
- Just copy the header file `kalman_takasu_eigen.h` to your project. The file `kalman_takasu_eigen.cpp` is only required if you want to use the non-templated function version.

```cpp
#include "kalman_takasu_eigen.h" // For C++ projects
```
- **Add to MATLAB Path**
  - Add the `matlab` directory to your project or just copy the `.m` files:
    ```matlab
    addpath('kfcore/matlab');
    ```
- **Using the Functions**
  - Utilize the provided functions for your state estimation tasks.
For a 4x1 state vector and a 3x1 measurement vector an example setup is shown below. Two important things to highlight:

- The column-major format is used (LAPACK/BLAS default)
- The matrix $\mathbf{H}$ is supplied in transposed form (this makes the implementation more efficient)
```c
float x[4] = { 1, 1, 1, 1 };                                                              // 4x1 state vector
float P[4*4] = { 0.04f, 0, 0, 0,   0, 0.04f, 0, 0,   0, 0, 0.04f, 0,   0, 0, 0, 0.04f }; // 4x4 covariance matrix of the state vector
float R[3*3] = { 0.25f, 0, 0,   0, 0.25f, 0,   0, 0, 0.25f };                             // 3x3 covariance matrix of the measurement
float dz[3] = { 0.2688f, 0.9169f, -1.1294f };                                             // 3x1 measurement residuals
float Ht[4*3] = { 8, 1, 6, 1,   3, 5, 7, 2,   4, 9, 2, 3 };                               // Transposed design matrix / measurement sensitivity matrix (column-major)
int result = kalman_takasu(x, P, dz, R, Ht, 4, 3, 0.0f, NULL);                            // Call to update routine
```
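To make the storage convention concrete: an $m \times n$ column-major matrix stores element $(i, j)$ at index $i + j\,m$, so the `Ht` array above holds $\mathbf{H}^{T}$ (4 rows, 3 columns) column by column and, under this convention, corresponds to the 3x4 measurement matrix

$$\mathbf{H} = \begin{bmatrix} 8 & 1 & 6 & 1 \\ 3 & 5 & 7 & 2 \\ 4 & 9 & 2 & 3 \end{bmatrix}.$$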
For the UDU filter, first a 4x1 state vector `x` and an initial 4x4 covariance matrix `P` of the state vector are needed, for example:

```c
float x[4] = { 1.0f, 0.0f, 0.0f, 0.0f };
float P[4 * 4] = { 0.5f, 0, 0, 0,   0, 0.5f, 0, 0,   0, 0, 0.5f, 0,   0, 0, 0, 0.5f };
```
As the UDU filter can be called a square root filter, the matrix `P` needs to be decomposed first:

```c
float U[4 * 4]; // Upper triangular matrix
float d[4];     // 4x1 diagonal vector
```

The `udu` function will perform the decomposition of `P` into `U` and `d`:

```c
udu(P, U, d, 4);
```
`d` is a vector that describes the diagonal matrix of the UDU decomposition. After this, the `P` matrix is no longer required.
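For reference, the factors satisfy the usual UDU relation (with `U` unit upper triangular in the Bierman/Thornton formulation and `d` holding the diagonal of $\mathbf{D}$):

$$\mathbf{P} = \mathbf{U}\,\mathbf{D}\,\mathbf{U}^{T}, \qquad \mathbf{D} = \operatorname{diag}(d).$$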
Then let's assume a measurement vector `z` with a covariance matrix `R` that describes the uncertainty of the measurements. Note that `R` is not purely a diagonal matrix, so it describes correlations between the measurements in `z`:

```c
float z[3] = { 16.2688f, 17.9169f, 16.8706f };
const float R[3 * 3] = { 0.25f, 0.25f, 0.0f,   0.25f, 0.5f, 0.1f,   0.0f, 0.1f, 0.5f };
```
Then we need a measurement sensitivity matrix `H`, but we store it in transposed form, which is why it is named `Ht`. Note that the column-major form is used in this library.

```c
float Ht[4 * 3] = { 8, 1, 6, 1,   3, 5, 7, 2,   4, 9, 2, 3 };
```
The UDU filter can only process one scalar measurement at a time, that's why we first need to decorrelate the measurements:

```c
decorrelate(z, Ht, R, 4, 3);
```
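A sketch of what such a decorrelation typically amounts to (the usual whitening via a Cholesky factor of $\mathbf{R}$; KFCore's internal implementation details may differ): with $\mathbf{R} = \mathbf{L}\mathbf{L}^{T}$,

$$\bar{\mathbf{z}} = \mathbf{L}^{-1}\mathbf{z}, \qquad \bar{\mathbf{H}} = \mathbf{L}^{-1}\mathbf{H}, \qquad \bar{\mathbf{R}} = \mathbf{L}^{-1}\mathbf{R}\,\mathbf{L}^{-T} = \mathbf{I},$$

so the transformed measurements are uncorrelated with unit variance and can be processed one scalar at a time.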
After the decorrelation, the measurements have unit variance:

```c
float eye[3 * 3] = { 1, 0, 0,
                     0, 1, 0,
                     0, 0, 1 };
```
Now we can finally process the measurements in the Kalman filter update step:

```c
kalman_udu(x, U, d, z, eye, Ht, 4, 3, 0.0f, 0);
```
For C++ a template function is used in this example. Note that `float` can be replaced by `double`.

```cpp
#include <Eigen/Dense>
#include "kalman_takasu_eigen.h"

using Eigen::Matrix;

const int StateDim = 15;
const int MeasDim  = 3;

Matrix<float, StateDim, 1> x;
Matrix<float, StateDim, StateDim> P;
Matrix<float, MeasDim, MeasDim> R;
Matrix<float, MeasDim, StateDim> H;
Matrix<float, MeasDim, 1> z;
Matrix<float, MeasDim, 1> dz;

// ... initialize x, P, R, H and fill z with the current measurement ...

dz.noalias() = z - H * x; // Calculate residuals
kalman_takasu_eigen<float, StateDim, MeasDim>(x, P, dz, R, H);
```
For dynamic matrices there is also the function `kalman_takasu_dynamic`, but the template function potentially helps the compiler to generate more efficient code.
| Equation | BLAS Function | Description |
|---|---|---|
| $\mathbf{K} = \mathbf{P}\mathbf{H}^{T}(\mathbf{H}\mathbf{P}\mathbf{H}^{T} + \mathbf{R})^{-1}$ | | Kalman Gain |
| $\mathbf{D} = \mathbf{P}\mathbf{H}^{T}$ | `symm()` | Symmetric Matrix Product |
| $\mathbf{S} = \mathbf{H}\mathbf{D} + \mathbf{R}$ | `gemm()` | General Matrix Product |
| $\mathbf{S} = \mathbf{U}^{T}\mathbf{U}$ | `potrf()` | Cholesky Factorization |
| $\mathbf{E} = \mathbf{D}\mathbf{U}^{-1}$ | `trsm()` | Solving Triangular Matrix |
| $\mathbf{K} = \mathbf{E}\mathbf{U}^{-T}$ | `trsm()` | Solving Triangular Matrix |
| $\mathbf{x}^{+} = \mathbf{x} + \mathbf{K}\,\delta\mathbf{z}$ | `gemv()` | Matrix Vector Product |
| $\mathbf{P}^{+} = \mathbf{P} - \mathbf{E}\mathbf{E}^{T}$ | `syrk()` | Symmetric Rank Update |
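As an illustration of how such a step maps onto a BLAS call, here is a minimal sketch of the symmetric rank update $\mathbf{P} \leftarrow \mathbf{P} - \mathbf{E}\mathbf{E}^{T}$ using the standard CBLAS interface (this is not KFCore's internal BLAS wrapper; the function names and argument order used there may differ):

```c
#include <cblas.h>

/* Downdate the covariance: P := P - E * E^T (column-major storage).
 * P is n x n (only the upper triangle is referenced and updated),
 * E is n x m. This is exactly the operation syrk() is designed for. */
static void covariance_downdate(float *P, const float *E, int n, int m)
{
    cblas_ssyrk(CblasColMajor, CblasUpper, CblasNoTrans,
                n, m,
                -1.0f, E, n,  /* alpha = -1, A = E, leading dimension n */
                 1.0f, P, n); /* beta  = +1, C = P, leading dimension n */
}
```

Only the upper triangle of `P` needs to be touched because the covariance stays symmetric, which is the reason for using `syrk()` here instead of a general `gemm()`.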
This project is licensed under the modified BSD-3-Clause License - see the LICENSE file for details.
- Chang, G. (2014). Robust Kalman filtering based on Mahalanobis distance as outlier judging criterion. Journal of Geodesy, 88(4), 391-401.
- Carpenter, J. Russell, and Christopher N. D’souza (2018). Navigation Filter Best Practices. No. NF1676L-29886.
- Bierman/Thornton Algorithms: For the work on numerically stable Kalman Filter implementations.
- Tomoji Takasu: Appreciation to T. Takasu (the author of RTKLIB) for the efficient formulation.
- Eigen Library: Utilized for the C++ version to offer convenient and fast matrix operations.
For questions, suggestions, or support:
- Email: [email protected]
- GitHub Issues: GitHub Issues Page