Find the transformation between the robot (hand) and camera (eye).
In this repo, we show that 6-8 pairs of color-registered pointclouds* (with a calibration board in view) and the corresponding robot_base-to-ee poses are enough to get a good result. This assumes the end-effector-to-board pose is not known (if it is known, just do a one-shot calibration).
*: ASSUMES the pointclouds have already been registered well with RGB (otherwise, perform the intrinsic calibration procedure first)
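The 6-8 pose pairs constrain the hand-eye transform X (ee-to-camera) through the relation base_T_board = base_T_ee_i · X · cam_T_board_i, which must hold for every view since the board is fixed. Below is a minimal numpy sketch of a linear AX = XB solver on synthetic, noise-free data; this is illustrative only (it is not the repo's actual optimizer, and all names and numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_rot():
    # random rotation via QR of a Gaussian matrix
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# hypothetical ground truth: hand-eye transform X and fixed board pose
X = make_T(rand_rot(), rng.normal(size=3))
base_T_board = make_T(rand_rot(), rng.normal(size=3))

# simulate 7 robot poses and the board pose the camera would observe at each
A = [make_T(rand_rot(), rng.normal(size=3)) for _ in range(7)]
B = [np.linalg.inv(Ai @ X) @ base_T_board for Ai in A]

# relative motions satisfy A' X = X B'
rels = []
for i in range(1, len(A)):
    Ar = np.linalg.inv(A[0]) @ A[i]
    Br = B[0] @ np.linalg.inv(B[i])
    rels.append((Ar, Br))

# rotation: stack (I ⊗ R_A' − R_B'ᵀ ⊗ I) vec(R_X) = 0, take the SVD nullspace
M = np.vstack([np.kron(np.eye(3), Ar[:3, :3]) - np.kron(Br[:3, :3].T, np.eye(3))
               for Ar, Br in rels])
Rx = np.linalg.svd(M)[2][-1].reshape(3, 3, order="F")
if np.linalg.det(Rx) < 0:          # nullspace vector is only defined up to sign
    Rx = -Rx
U, _, Vt = np.linalg.svd(Rx)       # project onto SO(3)
Rx = U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt

# translation: (R_A' − I) t_X = R_X t_B' − t_A', solved by least squares
C = np.vstack([Ar[:3, :3] - np.eye(3) for Ar, _ in rels])
d = np.concatenate([Rx @ Br[:3, 3] - Ar[:3, 3] for Ar, Br in rels])
tx = np.linalg.lstsq(C, d, rcond=None)[0]

X_est = make_T(Rx, tx)
print(np.abs(X_est - X).max())     # ~0 on noise-free data
```

With real, noisy detections the linear solution degrades, which is why several views and a proper optimization (as in this repo) are needed.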
Supported Patterns:
- Asymmetric circles
- AprilTags
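Pose estimation from a detected pattern needs the board's 3D model points in the board frame. A sketch for the asymmetric circle grid, assuming the common OpenCV-style ordering for CALIB_CB_ASYMMETRIC_GRID (the grid size and spacing here are placeholders; use your board's actual geometry):

```python
import numpy as np

def asymm_circle_object_points(cols=4, rows=11, spacing=0.02):
    """3D model points of an asymmetric circle grid (z = 0, board frame).

    cols/rows/spacing are illustrative defaults, not this repo's values.
    Odd rows are shifted by one spacing in x, giving the staggered
    (asymmetric) layout.
    """
    pts = []
    for r in range(rows):
        for c in range(cols):
            pts.append(((2 * c + r % 2) * spacing, r * spacing, 0.0))
    return np.array(pts, dtype=np.float64)

objp = asymm_circle_object_points()
print(objp.shape)  # (44, 3)
```

These points, paired with the detected circle centers in the image, are what a PnP-style solver consumes to produce the per-view camera-to-board pose.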
You'll __need__ to install the Python library from our Open3D fork to use this repo (at least for the pointcloud-reading and board-detection parts; the optimization part can run independently).
- open3d-fork (with read_point_cloud_with_nan): http://10.0.9.33:8888/vision/open3d-fork
- numpy
- opencv
- apriltag
- json (Python standard library; no install needed)
- transforms3d
pip install numpy opencv-python apriltag transforms3d
Asymmetric circles: `python main_asymm_circle.py`
AprilTags (make sure there is only one tag in the view; multi-tag is not supported): `python main_april_tag.py`