- Introduction
- Features
- Required Hardware
- Camera Calibration
- Camera Ordering
- Installing the OpenPose 3-D Reconstruction Module
- Quick Start
- Expected Visual Results
- Using a Different Camera Brand
- Known Bug
This experimental module performs 3-D keypoint (body, face, and hand) reconstruction and rendering for 1 person. We will not keep updating it nor solving questions/issues about it at the moment. It requires the user to be familiar with computer vision and camera calibration, including extraction of intrinsic and extrinsic parameters.
- Auto detection of all FLIR cameras connected to your machine, and image streaming from all of them.
- Hardware trigger and buffer `NewestFirstOverwrite` modes enabled. Hence, the algorithm will always get the last synchronized frame from each camera, deleting the rest.
- 3-D reconstruction of body, face, and hands for 1 person.
- If more than 1 person is detected per camera, the algorithm will just try to match person 0 on each camera, which will potentially correspond to different people in the scene. Thus, the 3-D reconstruction will completely fail.
- Only points whose confidence is above a high threshold in each one of the cameras are reprojected (and later rendered). An alternative for setups with more than 4 cameras could potentially do the 3-D reprojection and render all points seen with a good view in more than N different cameras (not implemented here).
- Only the direct linear transformation (DLT) algorithm is applied for reconstruction (see the sketch after this list). Non-linear optimization methods (e.g., from the Ceres Solver) would potentially improve the results (not implemented).
- Basic OpenGL rendering with the `freeglut` library.
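For reference, this is a minimal sketch of DLT triangulation for a single keypoint seen by N cameras, written with plain OpenCV. It is illustrative only, not the module's actual implementation: the function name, the assumption that each 3x4 projection matrix is `Intrinsics * CameraMatrix`, and the use of already-undistorted pixel coordinates are all assumptions made for this example.

```cpp
// Minimal DLT triangulation sketch (illustrative only, not OpenPose's actual code).
// Assumes each 3x4 projection matrix is Intrinsics * CameraMatrix (CV_64F) and that
// the 2-D points are already undistorted pixel coordinates.
#include <opencv2/core.hpp>
#include <vector>

cv::Mat triangulateDLT(const std::vector<cv::Mat>& projectionMatrices,
                       const std::vector<cv::Point2d>& points2d)
{
    // Build the 2N x 4 homogeneous system A * X = 0
    cv::Mat A(2 * (int)points2d.size(), 4, CV_64F);
    for (int i = 0; i < (int)points2d.size(); ++i)
    {
        const cv::Mat& P = projectionMatrices[i];
        A.row(2 * i)     = points2d[i].x * P.row(2) - P.row(0);
        A.row(2 * i + 1) = points2d[i].y * P.row(2) - P.row(1);
    }
    // The 3-D point is the right singular vector with the smallest singular value
    cv::Mat w, u, vt;
    cv::SVD::compute(A, w, u, vt, cv::SVD::MODIFY_A);
    cv::Mat X = vt.row(3).t();
    return X / X.at<double>(3); // homogeneous (x, y, z, 1), normalized by the last component
}
```

A non-linear refinement (e.g., minimizing reprojection error with Ceres) could be run on top of this linear estimate, which is exactly the improvement mentioned in the feature list above.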
This demo assumes n arbitrary stereo cameras from the FLIR company (formerly Point Grey). Ideally any USB-3 FLIR model should work, but we have only used the following specific specifications:
- Camera details:
    - Blackfly S Color 1.3 MP USB3 Vision (ON Semi PYTHON 1300)
    - Model: BFS-U3-13Y3C-C
    - 1280x1024 resolution and 170 FPS
    - https://www.ptgrey.com/blackfly-s-13-mp-color-usb3-vision-on-semi-python1300
    - Hardware trigger synchronization is required. For this camera model, see the `Blackfly S` section in https://www.ptgrey.com/tan/11052 or https://www.ptgrey.com/KB/11052.
    - (Ubuntu-only) Open your USB ports following the `Configuring USBFS` section in http://www.ptgrey.com/KB/10685.
    - Install the Spinnaker SDK for your operating system: https://www.ptgrey.com/support/downloads.
- Fujinon 3 MP Varifocal Lens (3.8-13mm, 3.4x Zoom) for each camera.
- 4-Port PCI Express (PCIe) USB 3.0 Card Adapter with 4 dedicated channels.
    - E.g., the 4 Ext Quad Bus version, PCI Express, from: https://www.amazon.com/Express-SuperSpeed-Adapter-Dedicated-Channels/dp/B00HJZEA2S/ref=sr_1_1?ie=UTF8&qid=1492197599&sr=8-1&keywords=4%2BPort%2BPCI%2BExpress%2B(PCIe)%2Bdedicated%2Bports&th=1.
    - Alternative: https://www.startech.com/Cards-Adapters/USB-3.0/Cards/PCI-Express-USB-3-Card-4-Dedicated-Channels-4-Port~PEXUSB3S44V.
- USB 3.0 cable for each FLIR camera.
    - From their official website: https://www.ptgrey.com/5-meter-type-a-to-micro-b-locking-usb-30-cable.
The user must manually get the intrinsic and extrinsic parameters of the FLIR cameras:
- Create an XML file for each camera, named `models/cameraParameters/flir/{camera_serial_number}.xml`.
- The elements inside each XML file are the extrinsic parameters of the camera (`CameraMatrix`), the intrinsic parameters (`Intrinsics`), and the distortion coefficients (`Distortion`). Copy the format from `models/cameraParameters/flir/17012332.xml.example` (a reading sketch is also shown after this list). The extrinsic parameters let you choose the coordinate origin, so that the 3-D keypoints are distances with respect to that origin.
    - E.g., in order to set camera 1 as the coordinate center, set its `CameraMatrix` as the 3x4 identity matrix, and the `CameraMatrix` of every other camera i as the extrinsic parameters of that camera with respect to the main camera, `M_1_i`.
- The program can use any arbitrary number of cameras. Even if many cameras are added in `models/cameraParameters/flir/`, the program will check at runtime which FLIR cameras are detected and read only those cameras' parameters. If the file corresponding to any camera detected at runtime is not found, OpenPose will return an error.
- The example XML uses the 8-parameter distortion model of OpenCV. The distortion parameters are internally used by the OpenCV function `undistort()` to rectify the images, and that function accepts 4-, 5-, or 8-parameter distortion coefficients (OpenCV 3.X also adds 12- and 14-parameter alternatives). Therefore, any of these versions (4, 5, 8, 12, or 14) will work in 3-D OpenPose.
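The sketch below shows how one of these files could be read with OpenCV's `cv::FileStorage`, simply to illustrate the three elements named above. The `FlirCameraParameters` struct and `readFlirParameters` function are hypothetical names made up for this example; check `17012332.xml.example` for the authoritative file layout.

```cpp
// Illustrative sketch only: reads the three elements described above from one camera's
// XML file. The struct and function names are hypothetical, not part of OpenPose's API.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp> // cv::undistort
#include <stdexcept>
#include <string>

struct FlirCameraParameters
{
    cv::Mat cameraMatrix; // 3x4 extrinsics with respect to the chosen coordinate origin
    cv::Mat intrinsics;   // 3x3 intrinsic matrix
    cv::Mat distortion;   // 4-, 5-, 8-, 12- or 14-parameter distortion coefficients
};

FlirCameraParameters readFlirParameters(const std::string& serialNumber)
{
    const auto path = "models/cameraParameters/flir/" + serialNumber + ".xml";
    cv::FileStorage fs{path, cv::FileStorage::READ};
    if (!fs.isOpened())
        throw std::runtime_error{"Camera parameter file not found: " + path};
    FlirCameraParameters params;
    fs["CameraMatrix"] >> params.cameraMatrix;
    fs["Intrinsics"]   >> params.intrinsics;
    fs["Distortion"]   >> params.distortion;
    return params;
}

// Rectification step mentioned above (for each raw frame):
//     cv::Mat rectified;
//     cv::undistort(rawFrame, rectified, params.intrinsics, params.distortion);
```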
In order to verify that the camera parameters introduced by the user are sorted in the same way that OpenPose reads the cameras, make sure of the following points:
- Initially, introduce the camera parameters sorted by serial number. By default (in Spinnaker 1.8), they are sorted by serial number.
- When the program is run, OpenPose displays the camera serial number associated with each index of each detected camera. If the number of detected cameras differs from the number of actual cameras, make sure that the hardware is properly connected and that the camera LEDs are on.
- Make sure that the order in which you introduced your camera parameters matches the index ordering displayed by OpenPose. Again, it should be sorted by serial number, but different Spinnaker versions might behave differently (a standalone check is sketched below).
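If you want to double-check that ordering outside OpenPose, a small standalone Spinnaker program along the following lines can print the index-to-serial mapping that the SDK reports. This is an illustrative sketch, not part of OpenPose; the node name and calls follow the standard Spinnaker C++ examples.

```cpp
// Illustrative standalone check (not part of OpenPose): print each detected FLIR
// camera's index and serial number in the order Spinnaker enumerates them.
#include <Spinnaker.h>
#include <SpinGenApi/SpinnakerGenApi.h>
#include <iostream>

int main()
{
    Spinnaker::SystemPtr system = Spinnaker::System::GetInstance();
    Spinnaker::CameraList cameras = system->GetCameras();
    for (unsigned int i = 0; i < cameras.GetSize(); ++i)
    {
        Spinnaker::CameraPtr camera = cameras.GetByIndex(i);
        // The serial number is readable from the transport layer, before camera->Init()
        Spinnaker::GenApi::CStringPtr serial =
            camera->GetTLDeviceNodeMap().GetNode("DeviceSerialNumber");
        std::cout << "Camera " << i << ": serial ";
        if (Spinnaker::GenApi::IsReadable(serial))
            std::cout << serial->GetValue().c_str() << std::endl;
        else
            std::cout << "(unreadable)" << std::endl;
    }
    cameras.Clear();
    system->ReleaseInstance();
    return 0;
}
```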
Check the doc/installation.md#3d-reconstruction-module for installation steps.
Check the doc/quick_start.md#3-d-reconstruction for basic examples.
The visual GUI should show 3 screens.
- The Windows command line or Ubuntu bash terminal.
- The different cameras 2-D keypoint estimations.
- The final 3-D reconstruction.
It should be similar to the following image.
You can copy and modify the OpenPose 3-D demo to use any camera brand:
- Optionally, turn off the `WITH_FLIR_CAMERA` flag when configuring the project with CMake.
- Copy any of the `examples/tutorial_wrapper/*.cpp` examples (we recommend `2_user_synchronous.cpp`).
- Modify `WUserInput` and add your custom code there. Your code should fill `Datum::name`, `Datum::cameraMatrix`, `Datum::cvInputData`, and `Datum::cvOutputData` (fill `cvOutputData = cvInputData`).
- Remove `WUserPostProcessing` and `WUserOutput` (unless you want to add your own custom post-processing and/or output).
Note that your custom code should retrieve synchronized images from your cameras or any other source, as well as their intrinsic and extrinsic camera parameters.
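As a rough sketch of what that producer might look like, the snippet below emits one `Datum` per camera view and fills the four fields listed above. It is loosely based on the `tutorial_wrapper` examples, but the exact base class and member names may differ between OpenPose versions, and the two helpers (`getCameraMatrix`, `grabSynchronizedFrame`) are hypothetical placeholders for your own calibration and camera-driver code.

```cpp
// Hypothetical WUserInput sketch: one Datum per synchronized camera view.
// Loosely based on examples/tutorial_wrapper; adapt the names to your OpenPose version.
#include <memory>
#include <string>
#include <vector>
#include <openpose/headers.hpp>

class WUserInput : public op::WorkerProducer<std::shared_ptr<std::vector<op::Datum>>>
{
public:
    void initializationOnThread() {}

    std::shared_ptr<std::vector<op::Datum>> workProducer()
    {
        auto datums = std::make_shared<std::vector<op::Datum>>(mNumberCameras);
        for (auto i = 0; i < mNumberCameras; ++i)
        {
            auto& datum = datums->at(i);
            datum.name = std::to_string(mFrameCounter);   // same name for all synchronized views
            datum.cameraMatrix = getCameraMatrix(i);      // your extrinsic calibration for camera i
            datum.cvInputData = grabSynchronizedFrame(i); // your camera driver
            datum.cvOutputData = datum.cvInputData;       // as indicated above
        }
        mFrameCounter++;
        return datums;
    }

private:
    const int mNumberCameras = 2;
    unsigned long long mFrameCounter = 0ull;

    // Hypothetical placeholders: replace with your own calibration and camera code.
    cv::Mat getCameraMatrix(const int /*cameraIndex*/)
        { return cv::Mat::eye(3, 4, CV_64F); }
    cv::Mat grabSynchronizedFrame(const int /*cameraIndex*/)
        { return cv::Mat(1024, 1280, CV_8UC3, cv::Scalar{0, 0, 0}); }
};
```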
FreeGLUT is a rather lightweight library, and because of that there is a known bug in the 3-D module:
- The window must be closed with the Esc key; clicking the close button will cause a core dump or a `std::exception` error in OpenPose. Reason: there is no way to control the behaviour of the exit button in a FreeGLUT program. Feel free to let us know or create a pull request if you find a workaround applicable to 3-D OpenPose. An alternative is using the `--disable_multi_thread` flag in OpenPose; this avoids the issue but slows down the program, especially on multi-GPU systems.