
Computer vision for LED panel #1267

Open
cbrxyz opened this issue Sep 4, 2024 · 8 comments

cbrxyz commented Sep 4, 2024

What needs to change?

We will need to create a computer vision model for detecting the different phases of the light tower object. Previously, we attempted this with a classical approach that detected the tower's color, but that approach had inconsistent success.

How would this task be tested?

  1. Ensure that the model is able to detect the phases of the light tower in a variety of environmental conditions.
@Josht8601 Josht8601 self-assigned this Sep 7, 2024
@cbrxyz cbrxyz changed the title Computer vision for light tower Computer vision for LED panel Sep 10, 2024

cbrxyz commented Sep 10, 2024

I think a good plan for getting training data for this could include:

  • Scraping frames from the videos of the 2022/2018 RobotX finals
  • Getting data from simulations
  • Getting data from real life (via the mechanical part that has been built)

At that point, it can be added to our Label Studio!

@alexoj46

So far, I have used yt-dlp and ffmpeg to extract JPEGs of frames showing the light tower from the 2018 and 2022 RobotX finals competition videos on YouTube. I created a Label Studio project with these images (http://10.245.80.197:8431/projects/16) and am labeling them as containing a red, blue, green, or black light tower.
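The yt-dlp/ffmpeg extraction step above could be sketched roughly as follows. This is a hypothetical reconstruction, not the exact commands used: the sampling rate, file paths, and output naming pattern are all assumptions.

```python
import subprocess


def ffmpeg_frame_cmd(video_path: str, out_dir: str, fps: float = 0.5) -> list[str]:
    """Build an ffmpeg command that samples frames from a video as JPEGs.

    The input would typically be a video downloaded with yt-dlp; the
    sampling rate and output pattern here are illustrative defaults.
    """
    return [
        "ffmpeg",
        "-i", video_path,              # input video (e.g. fetched with yt-dlp)
        "-vf", f"fps={fps}",           # sample frames at the given rate
        "-q:v", "2",                   # high JPEG quality (lower is better)
        f"{out_dir}/frame_%05d.jpg",   # numbered JPEG output files
    ]


# To actually run it (assumes ffmpeg is on PATH):
# subprocess.run(ffmpeg_frame_cmd("robotx_finals.mp4", "frames"), check=True)
```

Sampling at a fraction of a frame per second keeps the dataset small while still covering each phase of the tower.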


cbrxyz commented Sep 17, 2024

@alexoj46

Here are the results of your trained model!

(attached images: confusion matrix and training-results plots)

It's not bad, but it could be a little better (it would be good to see mAP@0.5 above 0.75)! I think a good next step would be changing some of the training hyperparameters to try to encourage better learning. If that still doesn't work, we can then get more data, balance the class distribution, etc. Adjusting the hyperparameters should be an easy first change to try to improve the model's performance.
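For context on the mAP@0.5 target mentioned above: a detection counts as correct at that threshold when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch, assuming `(x1, y1, x2, y2)` corner-format boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

So a predicted box covering half of a ground-truth box of equal size (IoU = 1/3) would be a miss at the 0.5 threshold.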

Can you send me the training command you used over Discord?


cbrxyz commented Sep 18, 2024

@alexoj46

It looks like another reason the training may have underperformed is unbalanced data. The class distribution is as follows (found by searching for "Annotation results contain 'your_color Light Tower'" in Label Studio):

  • Black: 105
  • Green: 8
  • Red: 80
  • Blue: 56
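One common mitigation for an imbalance like this (besides collecting more green-tower images) is inverse-frequency class weighting in the loss. A sketch of the weight computation from the counts above; the exact weighting scheme and how it is fed into training are assumptions, not something the pipeline currently does:

```python
def class_weights(counts: dict[str, int]) -> dict[str, float]:
    """Inverse-frequency weights: total / (n_classes * count).

    Rare classes (here, Green) receive proportionally larger weights,
    so mistakes on them cost more during training.
    """
    total = sum(counts.values())
    n = len(counts)
    return {cls: total / (n * c) for cls, c in counts.items()}


weights = class_weights({"Black": 105, "Green": 8, "Red": 80, "Blue": 56})
```

With 249 images total, Green ends up weighted roughly 13x heavier than Black.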

If you're having trouble finding more data, just let us know and we can try to help! Mechanical has the structure of the real-life light tower ready, and we're now working with electrical to develop the tower's color-changing panel. Hopefully it will be done by the middle of next week at the latest!


cbrxyz commented Sep 20, 2024

Blocked by uf-mil-electrical/NaviGator#1 and uf-mil-mechanical/tasks#11

@cbrxyz cbrxyz added the blocked label Sep 20, 2024
@alexoj46

This week, I finished labeling images in Label Studio and used YOLOv7-tiny to train a model on this data. This required first modifying a script provided by Daniel to split the images into training, testing, and validation folders, then modifying and running the relevant training commands and scripts in the yolov7 directory. Because the class distribution was unbalanced (for example, only 8 of 250 images show a green tower), the results were not as good as I'd hoped (see results above). While I added a few more images from the YouTube videos I could find, there are not enough clips available to balance the data, so I am now waiting for the mechanical light tower to be constructed so I can gather more images. In the meantime, I installed and set up Ubuntu through UTM to be able to access the simulation for future testing, and I will continue researching next steps for testing and validating a trained CV model in preparation for when we have more data.
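The train/test/validation split script mentioned above might look roughly like this. This is a hypothetical sketch, not Daniel's actual script: the 70/20/10 ratios, the flat directory layout, and the `.jpg`-only glob are all assumptions.

```python
import random
import shutil
from pathlib import Path


def split_dataset(image_dir, out_dir, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle images and copy them into train/test/val subdirectories.

    A fixed seed keeps the split reproducible across runs.
    """
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_train = int(len(images) * ratios[0])
    n_test = int(len(images) * ratios[1])
    splits = {
        "train": images[:n_train],
        "test": images[n_train:n_train + n_test],
        "val": images[n_train + n_test:],   # remainder goes to validation
    }
    for name, files in splits.items():
        dest = Path(out_dir) / name
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dest / f.name)
    return {name: len(files) for name, files in splits.items()}
```

Note that YOLOv7 also expects the matching label files to move with their images, which this sketch omits.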

@alexoj46

This week, while waiting for the mechanical tower so I can gather more image data, I set up Ubuntu Desktop, cloned the GitHub repository, and added the relevant developer tools, including pre-commit and Neovim. I was able to successfully access the simulator this way. I attended today's testing session and, for the first time on my laptop, connected to and moved the boat through SSH and tmux, as well as viewed the nviz visualizers through Ubuntu. I also learned how to collect bagged data from the boat's camera in preparation for gathering bags of data on the LED tower once it is constructed.


cbrxyz commented Nov 5, 2024

@alexoj46 let's try to get some more data for this model tomorrow! We can get data from the boat or from the shore!

@cbrxyz cbrxyz removed the blocked label Nov 5, 2024