We should run the saliency model on the Jetson to get an idea of how long this stage of the pipeline takes and to make sure it works fine with CUDA.
This will involve running the cv_saliency integration test and making sure that the model and its inputs are loaded onto the Jetson's GPU with CUDA.
See here for details on loading the model to the GPU with CUDA. It should just be the code here:
I think we should have an option in the Saliency constructor to specify whether CUDA should be used. There it can check if CUDA is available (torch::cuda::is_available()) and fall back to the CPU if not.