Benchmark Saliency model on Jetson with CUDA #131

Open
atar13 opened this issue Apr 4, 2024 · 0 comments
Labels: mission-critical (MUST HAVE BEFORE COMPETITION), testing (Needs testing/benchmarking)

Comments

atar13 (Member) commented Apr 4, 2024

We should run the saliency model on the Jetson to get an idea of how long this stage of the pipeline takes and to make sure it works fine with CUDA.

This will involve running the cv_saliency integration test and making sure that the model and its inputs are loaded onto the Jetson's GPU with CUDA.

See here for details on loading the model to the GPU with CUDA. It should just be the code shown here:
[screenshot of the model-loading code]

I think we should add an option to the Saliency constructor to specify whether CUDA should be used. The constructor can check whether CUDA is actually available (torch::cuda::is_available()) and fall back to the CPU if not.
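A rough sketch of what that constructor option could look like with libtorch. The class shape and member names here are illustrative, not the actual codebase API; the only confirmed pieces are `torch::cuda::is_available()` and loading the TorchScript model onto a device:

```cpp
#include <torch/script.h>
#include <torch/cuda.h>
#include <string>

// Illustrative sketch: Saliency with an opt-in CUDA flag that falls back to CPU.
class Saliency {
public:
    Saliency(const std::string& model_path, bool use_cuda)
        : device_(torch::kCPU) {
        // Use the GPU only if the caller requested it AND CUDA is present;
        // otherwise stay on the CPU.
        if (use_cuda && torch::cuda::is_available()) {
            device_ = torch::Device(torch::kCUDA);
        }
        // torch::jit::load accepts a device, so the weights land there directly.
        model_ = torch::jit::load(model_path, device_);
        model_.eval();
    }

    torch::Tensor run(torch::Tensor input) {
        // The inputs must live on the same device as the model.
        input = input.to(device_);
        return model_.forward({input}).toTensor();
    }

private:
    torch::Device device_;
    torch::jit::Module model_;
};
```

With this shape, the cv_saliency integration test could construct `Saliency(path, true)` on the Jetson and the same call would still work on a CUDA-less dev machine.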

@Tyler-Lentz Tyler-Lentz added testing Needs testing/benchmarking mission-critical MUST HAVE BEFORE COMPETITION labels Apr 13, 2024