
Integrating NVIDIA Isaac ROS's isaac_ros_visual_slam with rapyuta-robotics/rclUE #132

Open
YuminosukeSato opened this issue Jan 28, 2024 · 8 comments


@YuminosukeSato

Overview

  • I am interested in integrating NVIDIA Isaac ROS's isaac_ros_visual_slam package into rclUE, which connects Unreal Engine 5 with ROS 2.
  • isaac_ros_visual_slam provides SLAM (Simultaneous Localization and Mapping) capabilities, and I am particularly interested in its application within the real-time 3D environment of Unreal Engine.

Objective

  • My goal is to implement SLAM functionalities within the Unreal Engine simulation environment to achieve advanced robot navigation and environmental perception.
  • By integrating isaac_ros_visual_slam, I aim to seamlessly combine robot operations in Unreal Engine with ROS2 functionalities.

Questions

  1. What is the best approach to integrate isaac_ros_visual_slam into rclUE?
  2. Are there any specific dependencies or configurations that I should be aware of?
  3. How might isaac_ros_visual_slam impact data exchange between Unreal Engine and ROS2?

Additional Information

  • Please provide details of your current development environment, including the versions of Unreal Engine 5 and ROS2 you are using.
  • Insights from other users or developers who have experience with isaac_ros_visual_slam would be highly appreciated.
@yuokamoto
Contributor

I'm not very familiar with isaac_ros_visual_slam, but it seems to use a stereo camera and an IMU.

You can use the camera component to publish an image topic: https://rapyutasimulationplugins.readthedocs.io/en/devel/doxygen_generated/html/d9/d91/class_u_r_r_r_o_s2_camera_component.html
However, RapyutaSimulationPlugins currently doesn't have an IMU sensor, so we would need to add one.

rclUE/RapyutaSimulationPlugins are mainly tested on Ubuntu 20.04/22.04 with ROS 2 Foxy/Humble.
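
For a quick sanity check that the camera component is actually publishing, something like this minimal rclpy subscriber works. The topic name camera/image is just a placeholder, not necessarily the plugin's default; list the actual topics with ros2 topic list while the simulation is running:

```python
# Minimal sanity check that the UE camera component is publishing images.
# "camera/image" is a placeholder topic name -- find the real one with
# `ros2 topic list` while the simulation is running.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class ImageCheck(Node):
    def __init__(self):
        super().__init__('ue_image_check')
        self.create_subscription(Image, 'camera/image', self.on_image, 10)

    def on_image(self, msg: Image):
        # Resolution and encoding tell you which camera mode is active.
        self.get_logger().info(f'{msg.width}x{msg.height} encoding={msg.encoding}')


def main():
    rclpy.init()
    rclpy.spin(ImageCheck())


if __name__ == '__main__':
    main()
```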

@YuminosukeSato
Author

Is it possible to integrate not only isaac_ros_visual_slam but also other vision-based SLAM packages?
I want to drive a TurtleBot autonomously from camera images.
Specifically, I want to run SLAM on a monocular or depth image from UE and then do autonomous navigation with Nav2, but I don't know how.

@yuokamoto
Contributor

You can use a depth camera by setting URRROS2CameraComponent::CameraType to EROS2CameraType::DEPTH (the default is EROS2CameraType::RGB).
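
Once the type is switched, you can confirm what the depth topic actually publishes with a small subscriber. The topic name and encoding below are assumptions (RapyutaSimulationPlugins may use different defaults), so inspect the running topics first:

```python
# Hypothetical sanity check for the UE depth camera output.
# Topic name is an assumption -- inspect with `ros2 topic list` first.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthCheck(Node):
    def __init__(self):
        super().__init__('ue_depth_check')
        self.bridge = CvBridge()
        self.create_subscription(Image, 'camera/depth/image', self.on_depth, 10)

    def on_depth(self, msg: Image):
        # 'passthrough' keeps whatever encoding UE publishes (e.g. 32FC1).
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        h, w = depth.shape[:2]
        self.get_logger().info(f'encoding={msg.encoding} center={depth[h // 2, w // 2]}')


def main():
    rclpy.init()
    rclpy.spin(DepthCheck())


if __name__ == '__main__':
    main()
```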

@YuminosukeSato
Author

I want to use Rtabmap with a depth camera.

  • How should I configure the depth camera settings on the Unreal Engine side?
  • How can I feed the depth camera output to Rtabmap?
  • I would like to work through this tutorial. How should I proceed?

@yuokamoto
Contributor

  • You can attach a URRROS2CameraComponent and change its type with CameraType = DEPTH.
  • Since the camera image is published as a ROS topic, you can change the topic name to match the Rtabmap setting (see the launch-file sketch after this list).
  • You need to 1) identify the ROS interfaces Rtabmap requires, and 2) implement/configure UE to provide them.
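
Putting those pieces together, a launch file along these lines could wire the UE camera topics into Rtabmap. Treat it as a sketch, not a verified config: the UE-side topic names on the right of each remapping are assumptions (check with ros2 topic list), the package/executable names are the ROS 2 Humble ones (on Foxy the rtabmap node lives in the rtabmap_ros package), and Rtabmap additionally needs an odometry source (e.g. wheel odometry from the robot, or rtabmap_odom):

```python
# Sketch: wiring UE-published camera topics into Rtabmap (RGB-D mode).
# All UE-side topic names are assumptions -- verify with `ros2 topic list`.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_slam',        # 'rtabmap_ros' on Foxy
            executable='rtabmap',
            parameters=[{
                'frame_id': 'base_link',   # robot base frame published to TF
                'subscribe_depth': True,   # RGB-D input instead of stereo
                'approx_sync': True,       # UE topics may not share exact stamps
            }],
            remappings=[
                # left: what rtabmap subscribes to; right: assumed UE topic names
                ('rgb/image', '/camera/image'),
                ('depth/image', '/camera/depth/image'),
                ('rgb/camera_info', '/camera/camera_info'),
                ('odom', '/odom'),         # odometry must be provided separately
            ],
        ),
    ])
```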

@YuminosukeSato
Author

YuminosukeSato commented Feb 4, 2024

Would it be advisable to set this up on the Unreal Engine side?
[Screenshot from 2024-02-04: camera settings in the Unreal Engine editor, with Encoding changed from RGB to depth]

@YuminosukeSato
Author

ros2 launch rtabmap_demos turtlebot3_scan.launch.py
Is rtabmap running on the depth camera with this launch file?

@yuokamoto
Contributor

Here is the camera type setting:
[screenshot: the CameraType property in the component's details panel]
