A Python wrapper around the M3T tracker from DLR-RM/3DObjectTracking.
To install pym3t, you can use pip or poetry. We strongly suggest installing it in either a venv or a conda environment.
Using conda:

```bash
git clone https://github.com/agimus-project/pym3t
cd pym3t
conda env create -f environment.yml
conda activate pym3t
pip install .
```
Note

M3T relies on GLFW. Ensure it is installed before building. On Ubuntu, run `apt-get install libglfw3 libglfw3-dev`.
Using a venv:

```bash
git clone https://github.com/agimus-project/pym3t
cd pym3t
python -m venv .venv
source .venv/bin/activate
pip install .
```
As example usage of the library, we provide several scripts:

- `run_image_dir_example.py`: single object tracking using color and depth images from the filesystem;
- `run_webcam_example.py`: single object tracking with the first camera device detected by the system (usually a webcam or other USB camera);
- `run_realsense_example.py`: single object tracking with a RealSense camera.
Important
For all examples, you need an object mesh in the Wavefront .obj format, named `<object_id>.obj`. Upon first execution, a set of sparse template views is generated, which can take some time.
Tip
Check available options with `python <script name>.py -h`
To run this example you need a set of recorded sequential color (and optionally depth) images stored in a directory. The color images `color*.png` and depth images `depth*.png` need names in lexicographic order (e.g. `color_000000.png`, `color_000001.png`, `color_000002.png`, ...). Calibrated camera intrinsics in the format described in `config/cam_d435_640.yaml` also need to be provided.
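The naming requirement above boils down to plain string sorting: zero-padded frame numbers make lexicographic order match temporal order. A minimal sketch (the directory path and glob pattern below are illustrative, not pym3t API):

```python
from pathlib import Path

# Zero-padded frame numbers: sorting the names sorts the frames in time.
names = [f"color_{i:06d}.png" for i in (2, 0, 1)]
print(sorted(names))  # ['color_000000.png', 'color_000001.png', 'color_000002.png']

# A typical way to collect such a sequence (path is a placeholder):
# frames = sorted(Path("path/to/image/dir").glob("color*.png"))
```

Without zero-padding (e.g. `color_10.png` sorting before `color_2.png`), the frames would be read out of order.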
Color only:

```bash
python examples/run_image_dir_example.py --use_region -b obj_000014 -m <path/to/obj/dir> -i <path/to/image/dir> -c config/cam_d435_640.yaml --stop
```

Color + depth:

```bash
python examples/run_image_dir_example.py --use_region --use_depth -b obj_000014 -m <path/to/obj/dir> -i <path/to/image/dir> -c config/cam_d435_640.yaml --stop
```
Keyboard commands:

- `q`: exit;
- any other key: when running with the `--stop` or `-s` argument, continue to the next image.
To bypass camera calibration, a reasonable horizontal FOV (50-70 degrees) can be assumed to derive camera intrinsics.
```bash
python examples/run_webcam_example.py --use_region -b obj_000014 -m <path/to/obj/dir>
```
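For reference, the pinhole intrinsics implied by an assumed horizontal FOV can be computed as follows. This is a generic sketch, not pym3t code; it assumes square pixels and a principal point at the image center:

```python
import math

def intrinsics_from_hfov(width: int, height: int, hfov_deg: float) -> dict:
    """Pinhole intrinsics from image size and horizontal field of view.

    Assumes square pixels (fx == fy) and a centered principal point.
    """
    # Half the image width subtends half the horizontal FOV at the focal point.
    fx = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return {"fx": fx, "fy": fx, "cx": width / 2.0, "cy": height / 2.0}

print(intrinsics_from_hfov(640, 480, 60.0))
```

For a 640x480 image and a 60 degree horizontal FOV this gives a focal length of roughly 554 pixels; narrower FOV assumptions yield longer focal lengths.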
Keyboard commands:

- `q`: exit;
- `d`: reset object pose to initial guess;
- `x`: start/restart tracking.
Color only:

```bash
python examples/run_realsense_example.py --use_region -b obj_000014 -m <path/to/obj/dir>
```

Color + depth:

```bash
python examples/run_realsense_example.py --use_region --use_depth -b obj_000014 -m <path/to/obj/dir>
```
Keyboard commands:

- `q`: exit;
- `d`: initialize object pose;
- `x`: start/restart tracking.