CoordinateTransformHelper produces bad curvy results compared to OrbbecViewer #19

Suedocode opened this issue Nov 20, 2024 · 4 comments

Suedocode commented Nov 20, 2024

I am having trouble reproducing OrbbecViewer's 3D point cloud results using the CoordinateTransformHelper (CTH).

To compute the 3D points, we use this:

static constexpr uint32_t s_dw = 640;
static constexpr uint32_t s_dh = 576;

OBPoint2f p2{depth_frame_x, depth_frame_y}; //iterate through all depth frame pixels to compute 3d cloud
float d = float(depth_data[depth_frame_y*s_dw + depth_frame_x]); //convert depth value in frame from uint16_t to float

auto depth_intr = depth_stream_profile->getIntrinsic();
auto depth_ext = depth_stream_profile->getExtrinsicTo(depth_stream_profile); //identity transform: depth to itself
OBPoint3f p3{};
CoordinateTransformHelper::transformation2dto3d(p2, d, depth_intr, depth_ext, &p3);

pcl::PointXYZRGB p;
p.x = p3.x*.001;
p.y = p3.y*.001;
p.z = p3.z*.001;
add_point_to_cloud(p); //skipping details here

This produces a visibly curved ground plane:
[image: curvy_cloud]

Meanwhile, here are the OrbbecViewer results. The ground is flat and consistent.
[image: flat]

I need guidance on how to reproduce the clean OrbbecViewer clouds using the CTH functions. I'm guessing this has something to do with the fact that I never use the distortion parameters, but the CTH function for computing 3D points doesn't take them as a parameter, and there don't appear to be any CTH functions that can undistort a depth pixel.
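
For reference, the kind of correction I have in mind would look roughly like the sketch below. This is only my guess at the math (the standard Brown-Conrady undistortion with the iterative inversion OpenCV uses, written against the OBCameraIntrinsic/OBCameraDistortion fields I save to disk), not the SDK's actual implementation:

//Sketch only: undistort a depth pixel with the Brown-Conrady rational model,
//then back-project it. Assumes OBCameraIntrinsic{fx,fy,cx,cy} and
//OBCameraDistortion{k1..k6,p1,p2}; the iteration count of 10 is arbitrary.
OBPoint3f depth_pixel_to_3d(float u, float v, float d,
                            const OBCameraIntrinsic& K, const OBCameraDistortion& D) {
  float x = (u - K.cx)/K.fx; //normalized (still distorted) image coordinates
  float y = (v - K.cy)/K.fy;

  const float x0 = x, y0 = y;
  for(int i = 0; i < 10; ++i) { //fixed-point iteration to invert the distortion
    float r2 = x*x + y*y;
    float rad = (1 + ((D.k3*r2 + D.k2)*r2 + D.k1)*r2)
              / (1 + ((D.k6*r2 + D.k5)*r2 + D.k4)*r2);
    float dx = 2*D.p1*x*y + D.p2*(r2 + 2*x*x);
    float dy = D.p1*(r2 + 2*y*y) + 2*D.p2*x*y;
    x = (x0 - dx)/rad;
    y = (y0 - dy)/rad;
  }

  return OBPoint3f{x*d, y*d, d}; //undistorted ray scaled by metric depth
}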

Environment:
Ubuntu 22.04.5 LTS (x86_64)
g++ 11.4.0
OrbbecSDK-dev bd2d4dd
Femto: Firmware v1.2.9

hzcyf commented Nov 21, 2024

For OrbbecViewer, we use ob::PointCloudFilter to convert depth data into point clouds, and that path includes distortion correction. We therefore recommend that you use ob::PointCloudFilter as well.
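
A minimal sketch of that path, along the lines of the SDK's point cloud sample (treat the exact calls as illustrative; the API on this dev branch may differ slightly):

ob::Pipeline pipe;
pipe.start();

ob::PointCloudFilter pointCloudFilter;
pointCloudFilter.setCreatePointFormat(OB_FORMAT_POINT); //depth-only XYZ cloud

auto frameSet = pipe.waitForFrames(100);
if(frameSet && frameSet->depthFrame()) {
  //the filter applies the calibration, including distortion correction, internally
  auto pointsFrame = pointCloudFilter.process(frameSet->depthFrame());
  auto* points = reinterpret_cast<OBPoint*>(pointsFrame->data());
  size_t count = pointsFrame->dataSize()/sizeof(OBPoint);
  //... consume points[0..count) ...
}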

If your application requires a transformation similar to CoordinateTransformHelper::transformation2dto3d for depth to point cloud conversion, we currently do not offer a version with built-in distortion correction. This functionality exists within the SDK's internal code, but we have not yet completed the API wrapping and testing. We will provide this feature as soon as possible and notify you upon its release.

Suedocode commented Nov 21, 2024

Where does ob::PointCloudFilter get the intrinsics from if I'm loading from disk? We load the frame data and intrinsics from disk in order to compute a cloud; I grab the intrinsics from the pipeline into a structure I can save and load:

struct transformer_t {
  struct geometry_t { 
    OBCameraIntrinsic intr;
    OBCameraDistortion dist;
    OBExtrinsic ext[2];
  };
  geometry_t color, depth;
  
  void initialize(std::shared_ptr<Pipeline> pipe) {
    auto cp = pipe->getStreamProfileList(OB_SENSOR_COLOR)->getVideoStreamProfile(s_cw, s_ch, OB_FORMAT_BGRA, s_fps);
    auto dp = pipe->getStreamProfileList(OB_SENSOR_DEPTH)->getVideoStreamProfile(s_dw, s_dh, OB_FORMAT_Y16,  s_fps);
    
    color.intr = cp->getIntrinsic();
    color.dist = cp->getDistortion();
    color.ext[0] = cp->getExtrinsicTo(cp);
    color.ext[1] = cp->getExtrinsicTo(dp);
    
    depth.intr = dp->getIntrinsic();
    depth.dist = dp->getDistortion();
    depth.ext[0] = dp->getExtrinsicTo(dp);
    depth.ext[1] = dp->getExtrinsicTo(cp);
  }
  //...
};

The CTH function can use these saved intrinsics to compute what I need (incorrectly, without distortion correction), but I don't see a way to load the intrinsics into ob::PointCloudFilter.

I load frames from disk like this:

std::shared_ptr<ob::VideoFrame> frame;
if(GetType() == Type::COLOR) {
  frame = ob::FrameFactory::createVideoFrame(OB_FRAME_COLOR, OB_FORMAT_BGRA, s_cw, s_ch, 0);
  cv::Mat mat = cv::imread(path, cv::IMREAD_UNCHANGED); //BGRA image written at capture time
  memcpy(frame->data(), mat.data, frame->dataSize());
} else {
  frame = ob::FrameFactory::createVideoFrame(OB_FRAME_DEPTH, OB_FORMAT_Y16, s_dw, s_dh, 0);
  std::ifstream in_file; //raw Y16 dump written at capture time
  in_file.open(path, std::ios::in | std::ios::binary);
  in_file.read((char*)frame->data(), frame->dataSize());
  in_file.close();
}

Here's a relevant comment, which I interpret to mean that ob::PointCloudFilter does not work for this offline cloud-generation workflow: #8 (comment)

MPierer commented Nov 21, 2024

Hi, if it's urgent, maybe you can just copy the code from src/shared/utils/CoordinateUtil.cpp/hpp and use it directly.
I did this to speed up the 3D->2D back-projection (e.g. projecting a cropped/filtered point cloud back to a depth image).
Calling the CTH class per point for a 640x480 cloud was way too slow, thrashing the cache and re-fetching the intrinsics/extrinsics on every call.
The same is true for the point cloud filter (file: src/filter/publicfilters/PointCloudProcess.cpp). You could then pass in your saved intrinsics and keep the inner loop tight, roughly as in the sketch below.
Thanks Orbbec for open-sourcing it!
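
To illustrate the caching point, using your variable names: fetch the calibration once outside the loop, then back-project with plain floats. A sketch only; it omits the distortion inversion, which you would inline from CoordinateUtil.cpp:

auto K = depth_stream_profile->getIntrinsic(); //fetched once, outside the hot loop

std::vector<pcl::PointXYZ> cloud;
cloud.reserve(size_t(s_dw)*s_dh);
for(uint32_t v = 0; v < s_dh; ++v) {
  for(uint32_t u = 0; u < s_dw; ++u) {
    float d = float(depth_data[v*s_dw + u]);
    if(d == 0.0f) continue; //skip invalid depth
    //undistort (u,v) here with the saved OBCameraDistortion, then back-project
    pcl::PointXYZ p;
    p.x = (u - K.cx)/K.fx * d * 0.001f; //mm -> m
    p.y = (v - K.cy)/K.fy * d * 0.001f;
    p.z = d * 0.001f;
    cloud.push_back(p);
  }
}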

Suedocode commented Nov 21, 2024

Wait, why isn't src/shared/utils/CoordinateUtil.hpp simply moved to include/libobsensor/hpp so its interface gets exported as part of libOrbbecSDK.so? It looks like a better version of CTH.
