
Robocar Project Summary #1

Open
nairakhilendra opened this issue Feb 18, 2024 · 12 comments
@nairakhilendra (Collaborator) commented Feb 18, 2024

Due date is February 26 (~1 week)

  1. Test on the RoboCar
  2. Try to integrate into ROS2 for the car (Multicam OakD Lite Driver #2)
  3. Follow-the-gap algorithm (Ad-Hoc Follow The Gap Implementation for Visual-Based Navigation #4)
    • Generally used for LiDAR, but since this is ad hoc, you can use the camera as a placeholder.
    • Use two grayscale cameras for depth and try to add depth data.
    • Make a ROS2 node, define a message, create a topic, create a subscriber to the topic, and process the data (see the sketch below).

https://navigation.ros.org/setup_guides/odom/setup_odom.html
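
For the last bullet of item 3, a minimal ROS2 node sketch (Python/rclpy) could look like the following; the topic names, message types, and the empty gap-finding step are placeholder assumptions, not our actual OAK-D driver interface:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import Float32


class FollowTheGapNode(Node):
    """Subscribe to a camera/depth topic and publish a steering command.

    The topic names and the gap-finding step are placeholders; the real node
    would consume the OAK-D Lite driver output from issue #2.
    """

    def __init__(self):
        super().__init__('follow_the_gap')
        self.subscription = self.create_subscription(
            Image, '/camera/depth/image_raw', self.on_image, 10)
        self.steer_pub = self.create_publisher(Float32, '/steering_angle', 10)

    def on_image(self, msg: Image):
        # Placeholder: a real implementation would turn the depth image into a
        # 1D range profile, find the widest gap, and steer toward its center.
        steering = Float32()
        steering.data = 0.0
        self.steer_pub.publish(steering)


def main():
    rclpy.init()
    node = FollowTheGapNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```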

@chenyenru added the documentation label on Feb 18, 2024
@chenyenru (Collaborator)

Our Current Workplan for the simple-line-follower

[image: current work plan diagram]

@chenyenru will work on the ROS2 Camera Interface and the data pipeline to feed into the midpoint finder.
@nairakhilendra will work on the midpoint finder and the controls part with the VESC.

@sisaha9 (Member) commented Feb 20, 2024

The issue with using Follow the Gap is that it's such an easy and popular algorithm that the F1Tenth organizers have placed traps on the track where Follow the Gap will go in the wrong direction. For ICRA you would still have to use a localization-based approach or a behavior-cloning-based approach.

@chenyenru (Collaborator)

Thanks Sid. You saved us from going down the wrong track.

The team is currently split into two parts: (1) Behavioral Cloning with donkeycar and (2) Alternative approach with ROS2 to see if there's a better way to race.

Within "Alternative approach with ROS2", we're divided into two subparts: (1) LiDAR only and (2) Camera only. https://discord.com/channels/974049852382126150/1174597043898040391/1206820066214150174.

@nairakhilendra and I are working on camera only.

However, I currently cannot find a camera-only approach with performance comparable to the LiDAR-based approaches, and I am not sure whether pursuing a camera-only approach will be worth it, given that we want to reach a competitive level of performance for the ICRA F1Tenth race in May.

Given these constraints, what's your recommendation on where to start with the non-donkey approach for the ICRA F1Tenth indoor race?

Thanks!

@sisaha9 (Member) commented Feb 20, 2024

Personally, for our team the challenge has always been localizing at speed. Since it's just a two-car race, the planning is actually pretty simple: you could choose between 2-3 racing lines and have the car drive toward those areas. For controls, the winning team has mostly just used pure pursuit and tuned it. Not saying these areas can't be improved, but without accurate localization you would waste your time on them, since they would need to be constantly retuned and it's hard to get them to work.
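
As an aside, the pure pursuit control mentioned above reduces to a very small computation once you have an accurate pose and a lookahead point on the chosen line; here is a minimal sketch (bicycle model; the wheelbase and helper names are placeholders, not our car's parameters):

```python
import math


def pure_pursuit_steering(x, y, yaw, lookahead_point, wheelbase=0.33):
    """Compute a steering angle toward a lookahead point on the racing line.

    x, y, yaw: current pose in the map frame (needs accurate localization).
    lookahead_point: (x, y) of a point on the line roughly one lookahead ahead.
    wheelbase: front-to-rear axle distance in meters (placeholder value).
    """
    # Express the lookahead point in the vehicle frame.
    dx = lookahead_point[0] - x
    dy = lookahead_point[1] - y
    local_x = math.cos(-yaw) * dx - math.sin(-yaw) * dy
    local_y = math.sin(-yaw) * dx + math.cos(-yaw) * dy

    dist = math.hypot(local_x, local_y)
    if dist < 1e-6:
        return 0.0

    # Pure pursuit: curvature = 2 * lateral offset / lookahead distance^2.
    curvature = 2.0 * local_y / (dist ** 2)
    return math.atan(wheelbase * curvature)
```

The formula itself is easy to tune; the hard part, as noted, is getting the pose (x, y, yaw) accurately and at speed.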

So LiDAR / camera / LiDAR + camera localization at speed is of importance. The LiDARs we have at UCSD traditionally run between 10-20 Hz, while the UPenn teams come with 40 Hz LiDARs, which are super important for high-speed localization to reduce the effect of distortion. For that, Autoware has traditionally used https://github.com/SteveMacenski/slam_toolbox. But the F1Tenth community has used that to make maps and then used https://github.com/f1tenth/particle_filter to localize. You can follow along with the Autoware Racing WG meeting notes here: https://github.com/orgs/autowarefoundation/discussions?discussions_q=label%3Ameeting%3Aracing-wg. Again, I think this is a good area to research, but we are limited by the LiDAR frequency unless additional work is done to integrate the IMU for short-term position and distortion correction. Also, since you do have the OAK-D depth as a substitute for LiDAR at a higher frequency, it would be interesting to see if the localization can be done on that.
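
To make the IMU idea concrete, a rough planar dead-reckoning step for bridging the gap between 10-20 Hz LiDAR scans could look like the sketch below; it assumes a bias- and gravity-compensated IMU, and none of the names come from the linked repos:

```python
import math
from dataclasses import dataclass


@dataclass
class PlanarState:
    x: float = 0.0    # position in the odom frame (m)
    y: float = 0.0
    yaw: float = 0.0  # heading (rad)
    vx: float = 0.0   # body-frame forward speed (m/s)


def propagate_with_imu(state, accel_x, yaw_rate, dt):
    """Dead-reckon the state forward by dt from body-frame acceleration and gyro yaw rate.

    Only meant for short-term prediction between scans (and for de-skewing a
    scan); it drifts quickly, so it must be corrected by the scan matcher or
    particle filter at every update.
    """
    vx = state.vx + accel_x * dt
    yaw = state.yaw + yaw_rate * dt
    x = state.x + vx * math.cos(yaw) * dt
    y = state.y + vx * math.sin(yaw) * dt
    return PlanarState(x=x, y=y, yaw=yaw, vx=vx)
```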

The camera is another interesting area. While there is not much work done in the F1Tenth community, there is tons of research on camera + IMU solutions for high-speed drone racing. I don't have any papers / repos I am actively following for them, but I would look at drone solutions to start with and see.

@chenyenru (Collaborator) commented Feb 21, 2024

Hi Sid, thank you for pointing us to these sources!

I have summarized your points; could you confirm whether I understood them correctly?

  • What is less important: Planning (though it can still be improved)

    • The team's primary challenge is localizing at speed in a two-car race scenario.
    • Planning is simplified with a choice between 2-3 lines for the car to follow.
    • We don't need complex planning.
      • Pure pursuit controls are commonly used by winning teams but can be improved.
  • What is more important: Accurate Localization

    • Accurate localization is essential to avoid constant retuning and to optimize performance.
    • LiDAR, camera, or combined localization methods are critical for speed.
    • UCSD typically uses LiDARs with frequencies of 10-20 Hz, while UPenn teams employ 40 Hz LiDARs for reduced distortion at high speeds.
  • Examples to look at for localization: slam_toolbox for mapping, then particle_filter to localize against the map

  • Ideas for doing accurate localization with our constraints on LiDAR

    • Integrating IMU data for short-term position tracking and distortion correction could enhance localization.
    • Despite limited exploration in the F1Tenth community, cameras show potential for localization.
      • Explore how the OAK-D camera's depth capability could help.
      • Research in drone racing suggests Camera + IMU solutions as a promising starting point for further investigation.

@sisaha9 (Member) commented Feb 21, 2024

Looks good

@sisaha9 (Member) commented Feb 22, 2024

I did actually omit perception, but that is also an important challenge. Not sure what approaches are currently being pursued for that on the AW end. We use their Euclidean Clustering in IAC, but that is 3D LiDAR specific.
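
For context, the Euclidean clustering idea is just grouping points that sit within a distance tolerance of each other; a rough Python stand-in (DBSCAN with min_samples=1 reduces to exactly that grouping) might look like this sketch, which is not the Autoware node itself:

```python
import numpy as np
from sklearn.cluster import DBSCAN


def euclidean_clusters(points, tolerance=0.3, min_cluster_size=5):
    """Group an (N, 3) point cloud into clusters of mutually nearby points.

    With min_samples=1, DBSCAN returns the connected components of the
    tolerance-radius neighborhood graph, i.e. Euclidean cluster extraction.
    """
    labels = DBSCAN(eps=tolerance, min_samples=1).fit_predict(points)
    clusters = [points[labels == k] for k in np.unique(labels)]
    # Drop tiny clusters that are likely noise or single stray returns.
    return [c for c in clusters if len(c) >= min_cluster_size]
```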

@chenyenru (Collaborator)

Thank you Sid.

I'll look into Autoware in a bit.

Yeah, Euclidean Clustering might be more for high-frequency 3D LiDAR. For research, I'll generally look into papers with one of the following keywords: "low latency LiDAR", "visual SLAM", "RGB-D SLAM", "visual odometry".

This information piece from NVIDIA looks like a good starter.

F1Tenth listed publications related to F1Tenth Car: Link

  • Kinematics-Based Trajectory Tracking

    • This one looks promising in terms of reducing computing power and tracking the race track?
    • I'll have to look more into it
  • BevFusion

    • This one has more state-of-the-art multi-sensor fusion for autonomous cars. I think it is less related to F1Tenth, but it has a camera-only baseline, and some parts of it might inspire more advanced implementations on our end.

@sisaha9 (Member) commented Feb 29, 2024

A recently released paper that might be helpful: https://arxiv.org/abs/2402.18558

@chenyenru (Collaborator)

Thank you so much, Sid. I can't believe there's a published paper discussing exactly what we want to know. We'll read into it and see what it suggests!
