Hello, we are currently using the hl2ss package to stream HoloLens 2 sensor data for one of our applications. We process the image stream to detect objects in the environment, and we would like to display the detected objects as AR bounding boxes in the HoloLens user's view. For this we need to transform the detected objects' image coordinates to 3D world coordinates. Is there a relevant function in the package that can handle this? If not, could you suggest how this transformation can be achieved, or which parameters would be relevant? Thanks!
Hello,
Try getting the depth value for the image points (for example, as in sample_pv_depth_lt.py), then use the PV intrinsics, extrinsics, and pose to convert the image points (u, v) and depth to world space:
[x, y, z] = depth[v, u] * [u, v, 1] @ color_intrinsics[:3,:3]^(-1)
[x, y, z, 1]_world = [x, y, z, 1] @ color_extrinsics^(-1) @ pv_pose
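As a rough NumPy sketch of those two formulas (the helper name `image_point_to_world` is hypothetical; `intrinsics`, `extrinsics`, and `pose` are assumed to be the 4x4 row-vector-convention matrices hl2ss provides for the PV stream, and `depth` the metric depth sampled at the pixel, e.g. `depth[v, u]`):

```python
import numpy as np

def image_point_to_world(u, v, depth, intrinsics, extrinsics, pose):
    # Hypothetical helper, not part of hl2ss. Unproject the pixel (u, v)
    # at the sampled depth into PV camera space. Row-vector convention:
    # points multiply matrices on the left.
    xyz = depth * (np.array([u, v, 1.0]) @ np.linalg.inv(intrinsics[:3, :3]))
    # Lift to homogeneous coordinates and map camera -> rig -> world.
    xyz1 = np.append(xyz, 1.0)
    world = xyz1 @ np.linalg.inv(extrinsics) @ pose
    return world[:3]
```

You would call this once per bounding-box corner (or just the box center) using the depth aligned to the PV frame, as in the sample script.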