(I'm opening this thread to discuss the TensorRT support in 18.6.)
Among the new features in this 18.6 release is the following:
• Up to 2x faster Neural Engine performance with Nvidia TensorRT.
"Up to 2x faster" sounds like a big win!
When starting up 18.6 for the first time in the container, a dialog popped up that said:
Optimize DaVinci Neural Engines
There are one or more new DaVinci Neural Engines that need to be optimized for optimal performance on NVIDIA GPUs. This process might take several minutes.
(the options are Disable/Skip/Optimize)
I don't think TensorRT is installed currently, so I tried adding it manually.
./resolve.sh /bin/bash
This starts the container and puts you in the shell. I then did this:
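I went the pip wheel route, roughly along these lines (exact flags may have differed):
# Install the TensorRT Python wheel; it pulls in the CUDA and cuDNN wheels it needs as dependencies.
pip3 install tensorrt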
This seemed to install the tensorrt library as well as cudnn. The install persisted between reloads of the container.
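You can double-check it's still there after a reload with something like:
# Print the installed wheel's metadata; Version should show 8.6.1.
pip3 show tensorrt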
Not sure if this made any difference during the optimization step, but for what it's worth you can verify that GPU optimization is turned on by selecting:
DaVinci Resolve (top menu) -> Preferences -> Memory and GPU -> Use neural optimization on NVIDIA (checked)
For fun I also added a bit more, at which point I could access tensorrt in python:
bash-4.4$ python3
Python 3.9.16 (main, Jul 3 2023, 20:07:32)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt
>>> print(tensorrt.__version__)
8.6.1
Did a quick Magic Mask and the neural stuff still works. I can't tell whether it's any faster or whether it's actually using TensorRT, though...
Questions: Should TensorRT be added automatically by build.sh, or only behind a $RESOLVE_FLAG (something like the sketch below)? Does it make any difference whether it's installed at all? And was pip3 the right way to install it, or would an .rpm have been the better route? Your thoughts are much appreciated!
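To make the build.sh question concrete, I'm imagining an opt-in hook roughly like this (purely a sketch; the RESOLVE_TENSORRT variable and the version pin are placeholders I made up, not anything that exists in the repo today):
# Only bake TensorRT into the image when the user asks for it.
if [ "${RESOLVE_TENSORRT:-0}" = "1" ]; then
    pip3 install "tensorrt==8.6.1"
fi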