
Lidar to Camera Projection Example #4

Open

jgrnt opened this issue Aug 26, 2020 · 10 comments
@jgrnt

jgrnt commented Aug 26, 2020

I am currently trying to use the lidar data together with the cameras, but I got stuck figuring out the right transformations.

@adamek727 mentioned a lidar-to-camera projection example in #1. Do you still plan to release this?
It would be super helpful.

Thank you

@adamek727

Hi, thanks for your interest. Sorry for the delay, but we have some complications with the public licence for software created at the university. I hope we will be able to release it in a few weeks.

Regarding the calibrations, please take a look at https://github.com/Robotics-BUT/Brno-Urban-Dataset-Calibrations/tree/7d859584c7d8a0c2007285a1da04e1771b824544
There you can find the calibration parameters for each camera, and in the frames.yaml file there are the translations and rotations of all the sensors w.r.t. the IMU unit. So if you want to transform a point from the lidar frame to the camera frame, you have to apply the inverse lidar tf and the forward camera tf. Then you can use the common OpenCV method of projecting 3D points onto the 2D plane.

Please let me know if you run into any problems.
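
For illustration, here is a minimal Python sketch of that chain, following the recipe above (inverse lidar tf, forward camera tf, then OpenCV projection). The sensor poses, intrinsics, and distortion values below are placeholders rather than dataset calibration, and the direction convention of the stored transforms should be verified against a known point:

import numpy as np
import cv2
from scipy.spatial.transform import Rotation as R

def pose_to_matrix(trans, quat_xyzw):
    """Build a 4x4 transform from a frames.yaml-style translation + quaternion."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = trans
    return T

# Placeholder sensor poses w.r.t. the IMU (not real calibration values).
T_lidar = pose_to_matrix([0.0, 0.0, 0.3], [0.0, 0.0, 0.0, 1.0])
T_cam = pose_to_matrix([0.2, 0.5, 0.1], [0.0, 0.0, 0.0, 1.0])

# "Inverse lidar tf and forward camera tf": lidar -> IMU -> camera.
# Depending on how the stored transforms are defined, the inverse may need
# to sit on the camera side instead; check against a known point.
T_lidar_to_cam = T_cam @ np.linalg.inv(T_lidar)

points_lidar = np.random.uniform(1.0, 10.0, size=(100, 3))  # dummy lidar points
points_h = np.hstack([points_lidar, np.ones((100, 1))])     # homogeneous coordinates
points_cam = (T_lidar_to_cam @ points_h.T).T[:, :3]
points_cam = points_cam[points_cam[:, 2] > 0]               # keep points in front of the camera

K = np.array([[1400.0, 0.0, 960.0],                         # placeholder intrinsics
              [0.0, 1400.0, 600.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                           # k1, k2, p1, p2, k3
uv, _ = cv2.projectPoints(points_cam, np.zeros(3), np.zeros(3), K, dist)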

@hitdshu

hitdshu commented May 7, 2021

Hi, thanks for your excellent project. I could not figure out the intrinsic parameters of your camera calibration. Are the distortion coefficients k1/k2/p1/p2/k3 in the OpenCV format?

Thanks,
Deshun

@adamek727

Yes, as you mentioned, it is the common OpenCV format.
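
For reference, a small snippet showing how a coefficient vector in that k1, k2, p1, p2, k3 order plugs into OpenCV (the numbers below are placeholders, not values from the dataset calibration):

import numpy as np
import cv2

K = np.array([[1400.0, 0.0, 960.0],   # placeholder camera matrix
              [0.0, 1400.0, 600.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.1, 0.05, 0.001, -0.001, 0.0])  # k1, k2, p1, p2, k3 (placeholders)

img = np.zeros((1200, 1920, 3), dtype=np.uint8)    # stand-in for a camera frame
undistorted = cv2.undistort(img, K, dist)          # remove lens distortion from the image
# The same dist vector can be passed to cv2.projectPoints when projecting 3D
# points, so the distortion model is applied consistently in both directions.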

@hitdshu

hitdshu commented May 10, 2021

Thanks for your timely reply and kindness.

@qingzi02010

qingzi02010 commented Jun 28, 2021

Hello, and thanks for your great work! I have a couple of questions about fusing the lidar and the images.
1) In frames.yaml, lidar_left has trans: [0.17, 0.48, 0.15] and rot: [0.0012719, -0.0632113, -0.9977974, 0.0200754]. What do the values in "rot" mean: θ, sin(θ), or something else? And what does the fourth value "w" in "rot" mean?
2) I am a beginner in this area. Can you recommend some resources on fusing lidar and camera data using the parameter style provided in this project? Should I use a ROS package or Python with OpenCV?

@adamek727

Hi, thanks for your interest. You are welcome.

  1. The rotation is expressed as a quaternion, which is a kind of "3D complex number". Compared to rotation matrices and Euler angles, quaternions do not have the singularity problem. To see how they work, please watch https://www.youtube.com/watch?v=zjMuIxRvygQ or play with https://quaternions.online/ (see also the small sketch below).
  2. As a first step, I would recommend understanding how the lidar points are projected into the camera: http://www.cse.psu.edu/~rtc12/CSE486/lecture12.pdf
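
For illustration, a small sketch of reading one of those quaternions with SciPy, using the lidar_left values quoted above; SciPy's from_quat expects the scalar-last [x, y, z, w] order, which is assumed here to match frames.yaml:

import numpy as np
from scipy.spatial.transform import Rotation as R

# lidar_left entry from frames.yaml, assumed to be [x, y, z, w] (scalar last)
trans = np.array([0.17, 0.48, 0.15])
quat = np.array([0.0012719, -0.0632113, -0.9977974, 0.0200754])

rot = R.from_quat(quat)                    # unit quaternion -> rotation object
print(rot.as_matrix())                     # equivalent 3x3 rotation matrix
print(rot.as_euler("xyz", degrees=True))   # same rotation as Euler angles, for intuition

# 4x4 homogeneous pose of the sensor w.r.t. the IMU
T = np.eye(4)
T[:3, :3] = rot.as_matrix()
T[:3, 3] = trans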

@qingzi02010

Got it! Thanks for your reply and kindness!

@fferflo

fferflo commented Feb 7, 2022

Thanks for the great work!

Do you have an update on the lidar-to-image projection script?

@adamek727

Hi! Sorry, we currently have no implementation in Python, but we handled this topic, and many others, in a related project available here. It is a C++ project, but you can at least see how the math is used:
https://github.com/Robotics-BUT/Atlas-Fusion

@nilushacj

nilushacj commented Dec 30, 2024

Hi. I tried to project the lidar_center points into camera_left_front using the code given below. However, the resulting projection looks a bit strange, as you can see in the example image attached below. From the looks of it, there seems to be a problem with the coordinate systems of the two sensors (approximately a 90-degree rotation offset). Could you kindly take a quick look and let me know what the issue might be (especially in the math)? I would immensely appreciate it.
[attached image: lidar_to_cam]

import numpy as np
import cv2
import yaml
from scipy.spatial.transform import Rotation as R

def project_lidar_to_camera(pcd_points, image, extrinsic_matrix, intrinsic_matrix):
  """
  Projects LiDAR point cloud to the synchronized left-front camera (RGB) frame.

  Args:
      pcd_points (np.ndarray): Point cloud as a (N, 4) numpy array with [x, y, z, intensity].
      image (np.ndarray): RGB image as a numpy array.
      extrinsic_matrix (np.ndarray): 4x4 matrix to transform points from LiDAR to camera coordinates.
      intrinsic_matrix (np.ndarray): 3x3 camera intrinsic matrix.
      
  Returns:
      np.ndarray: Image with projected points overlaid.
      np.ndarray: 2D coordinates of valid points in the image plane.
      np.ndarray: Corresponding intensity values for valid points.
  """
  image_height = image.shape[0]
  image_width  = image.shape[1]
  # Step 1: Transform point cloud to camera coordinates using extrinsics
  points_homogeneous = np.hstack((pcd_points[:, :3], np.ones((pcd_points.shape[0], 1))))  # (N, 4)
  points_camera = (extrinsic_matrix @ points_homogeneous.T).T  # (N, 4)
  
  # Step 2: Filter out points behind the camera (z <= 0)
  valid_indices = points_camera[:, 2] > 0
  points_camera = points_camera[valid_indices]
  intensities = pcd_points[valid_indices, 3]
  
  # Step 3: Project to the image plane using intrinsics
  points_2d_homogeneous = intrinsic_matrix @ points_camera[:, :3].T  # (3, N)
  points_2d = (points_2d_homogeneous[:2] / points_2d_homogeneous[2]).T  # (N, 2)
  
  # Step 4: Filter out points outside the image bounds
  valid_indices = (
    (points_2d[:, 0] >= 0) & (points_2d[:, 0] < image_width) &
    (points_2d[:, 1] >= 0) & (points_2d[:, 1] < image_height)
  )
  points_2d = points_2d[valid_indices].astype(int)
  intensities = intensities[valid_indices]
  
  # Step 5: Overlay the points on the image
  image_with_points = image.copy()
  for (x, y), intensity in zip(points_2d, intensities):
    color = (0, 255, 0)  # Green for the overlay
    cv2.circle(image_with_points, (x, y), radius=2, color=color, thickness=-1)
  
  return image_with_points, points_2d, intensities
  
cam_conf = './brno_calibrations/camera_left_front.yaml'
with open(cam_conf, 'r') as file:
  data = yaml.safe_load(file)
  intrinsic = data['intrinsic']
    
K = np.array(intrinsic).reshape((3, 3))

# -- Get transformation --
frames_conf = './brno_calibrations/frames.yaml'
with open(frames_conf, 'r') as file:
  data = yaml.safe_load(file)
  t_cam = np.array(data['camera_left_front']['trans'], dtype=float)
  r_cam = np.array(data['camera_left_front']['rot'], dtype=float)
  t_lid = np.array(data['lidar_center']['trans'], dtype=float)
  r_lid = np.array(data['lidar_center']['rot'], dtype=float)

def create_transformation_matrix(translation, quaternion):
  """Create a 4x4 homogeneous transformation matrix."""
  rotation = R.from_quat(quaternion).as_matrix()
  T = np.eye(4)
  T[:3, :3] = rotation
  T[:3, 3] = translation
  return T

T_cam = create_transformation_matrix(t_cam, r_cam)
T_lidar = create_transformation_matrix(t_lid, r_lid)

T_lidar_to_cam = T_cam @ np.linalg.inv(T_lidar)
# ptcloud ((N, 4) lidar scan) and rgb_img (camera frame) are loaded elsewhere
img, points, intensity = project_lidar_to_camera(ptcloud, rgb_img, T_lidar_to_cam, K)
