- the Object Detector:
  Takes in a `sensor_msgs::PointCloud2` point cloud and publishes all detected object 3D bounding boxes (and their orientations) as `visualization_msgs::Marker` messages on the specified output topic.
  Detected object 3D bounding boxes are of type `visualization_msgs::Marker::CUBE` and their corresponding orientations are of type `visualization_msgs::Marker::ARROW`. Although both are published to the `/object_markers` topic, they can easily be distinguished by checking `marker.type`. The bounding box and the orientation of the same object share the same `marker.id`. (The bounding box's `marker.pose` already includes the orientation info; the arrows are mainly for visualization and debugging. See the subscriber sketch below.)
  For more details, check out `ObjectDetector::visualizeBoundingBox`.
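
A minimal subscriber sketch (assuming only the CUBE/ARROW and shared-`marker.id` conventions described above; the node name is arbitrary):

```python
#!/usr/bin/env python
# Minimal sketch: listen to /object_markers and pair each object's CUBE
# bounding box with its ARROW orientation marker via marker.id.
import rospy
from visualization_msgs.msg import Marker

boxes = {}   # marker.id -> CUBE marker (bounding box)
arrows = {}  # marker.id -> ARROW marker (orientation, for debugging only)

def marker_callback(marker):
    if marker.type == Marker.CUBE:
        boxes[marker.id] = marker
    elif marker.type == Marker.ARROW:
        arrows[marker.id] = marker
    if marker.id in boxes:
        # The CUBE's pose already carries the object's orientation.
        p = boxes[marker.id].pose.position
        rospy.loginfo('Object %d at (%.2f, %.2f, %.2f)', marker.id, p.x, p.y, p.z)

def main():
    rospy.init_node('object_marker_listener')
    rospy.Subscriber('/object_markers', Marker, marker_callback)
    rospy.spin()

if __name__ == '__main__':
    main()
```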
- the Object Detection Node:
  A constantly running node that utilizes the Object Detector class.
  Takes `/cloud_in` as the input point cloud topic and sets the Object Detector's output topic to `/object_markers`.
  - For simulation usage, see Arm Motion Planning Usage below.
  - For usage on the real robot, run
    `rosrun perception object_detection /cloud_in:=/head_camera/depth_registered/points`
- the Arm Motion Planner
  Initialization: `arm_planner = ArmMotionPlanner(arm, gripper)`
  Main function: `pick_up(self, obj_marker, obs_marker_list=None)`
  `obj_marker` is of type `visualization_msgs::Marker`, and `obs_marker_list` should be a list of `visualization_msgs::Marker` (it defaults to `None`). The function returns `False` if planning a grasp for the object fails, and `True` on success. (See the sketch after the run steps below.)
- Run `roscore`
- Run `roslaunch fetch_moveit_config move_group.launch`
- In simulation: run `roslaunch fetch_gazebo playground.launch` and `rosrun rviz rviz`.
- Run `rosrun applications publish_saved_cloud.py <bag file path, or empty on the real robot>`
  (e.g. `rosrun applications publish_saved_cloud.py /home/team6/catkin_ws/src/cse481wi19/saved_clouds/real_floor_0.bag`)
- Run perception: `rosrun perception object_detection /cloud_in:=/mock_point_cloud`, or `rosrun perception object_detection /cloud_in:=/head_camera_depth` on the real robot.
- Run `rosrun applications arm_motion_planner_demo.py`
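
A minimal pick-up sketch under stated assumptions: the `fetch_api` Arm/Gripper helpers and the `ArmMotionPlanner` import path below are placeholders (see `arm_motion_planner_demo.py` for the actual setup); only the `pick_up` behavior comes from the API description above.

```python
#!/usr/bin/env python
# Minimal sketch of calling the Arm Motion Planner on one detected object.
# The import paths and Arm/Gripper constructors are assumptions.
import rospy
from visualization_msgs.msg import Marker

import fetch_api                                  # assumed helper package
from arm_motion_planner import ArmMotionPlanner   # assumed import path

def main():
    rospy.init_node('pick_up_demo')
    arm_planner = ArmMotionPlanner(fetch_api.Arm(), fetch_api.Gripper())

    # Wait for one CUBE bounding box from the Object Detector.
    obj_marker = rospy.wait_for_message('/object_markers', Marker)
    while obj_marker.type != Marker.CUBE:
        obj_marker = rospy.wait_for_message('/object_markers', Marker)

    # pick_up returns True on success, False if grasp planning fails.
    if arm_planner.pick_up(obj_marker):
        rospy.loginfo('Pick up succeeded')
    else:
        rospy.logwarn('Pick up failed')

if __name__ == '__main__':
    main()
```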
- the Navigator
  API: `class Navigator()`
  - `current_pose()`: get the current pose of the robot in the map frame.
    return: a `PoseStamped` of the current pose, with `frame_id` `map`.
  - `goto(goal=PoseStamped, timeout=secs)`: go to the given target pose within the given timeout.
    params: a `PoseStamped` with a position and a header with a `frame_id`,
            and an integer number of seconds to wait.
    return: `True` if the robot reaches the given goal within the given time,
            `False` otherwise.
- Run `roslaunch applications/launch/nav_rviz.launch` (see Note)
- Create `Navigator()`
- Call `current_pose()` and `goto(goal=PoseStamped, timeout=secs)` (see the sketch below)
Note: `nav_rviz.launch` launches the map recorded in the real world.
If you want to test in the simulator, run `roslaunch applications/launch/nav_rviz_smi.launch` for the first step instead.
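
A minimal sketch of the Navigator API above (the import path and goal coordinates are assumptions):

```python
#!/usr/bin/env python
# Minimal sketch of Navigator usage: read the current pose, then drive to a goal.
# The import path and the goal coordinates are placeholders.
import rospy
from geometry_msgs.msg import PoseStamped

from navigator import Navigator  # assumed import path

def main():
    rospy.init_node('navigator_demo')
    nav = Navigator()

    rospy.loginfo('Current pose: %s', nav.current_pose())

    # Build a goal in the map frame (placeholder coordinates).
    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.pose.position.x = 1.0
    goal.pose.position.y = 0.5
    goal.pose.orientation.w = 1.0

    # Wait up to 60 seconds for the robot to reach the goal.
    if nav.goto(goal=goal, timeout=60):
        rospy.loginfo('Reached the goal')
    else:
        rospy.logwarn('Failed to reach the goal in time')

if __name__ == '__main__':
    main()
```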
...
- the Facial Detection Node:
  A constantly running node that uses two image topics (rgb & registered depth) plus one camera_info topic. It outputs a `visualization_msgs::Marker` (in the camera frame, whose pose is the 3D location of the center of the closest detected face) to `/face_marker`.
  `rosrun perception face_detection`
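
A minimal consumer sketch (assuming only the `/face_marker` topic and camera-frame convention above):

```python
#!/usr/bin/env python
# Minimal sketch: print the 3D position of the closest detected face.
import rospy
from visualization_msgs.msg import Marker

def face_callback(marker):
    p = marker.pose.position
    rospy.loginfo('Closest face at (%.2f, %.2f, %.2f) in frame %s',
                  p.x, p.y, p.z, marker.header.frame_id)

def main():
    rospy.init_node('face_marker_listener')
    rospy.Subscriber('/face_marker', Marker, face_callback)
    rospy.spin()

if __name__ == '__main__':
    main()
```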
...