build: Update ffmpeg config for CUDA #1208
Conversation
Looks good, but CI gave `ERROR: cuda_llvm requested but not found`
install_ffmpeg.sh (outdated)
@@ -103,7 +103,7 @@ EXTRA_LDFLAGS=""
if [ $(uname) == "Linux" ]; then
  if [ -e /usr/local/cuda/include ]; then
This isn't a good check anymore, as clang doesn't require CUDA as a build-time dependency. I went with `if which clang > /dev/null; then`. Really what we want to check for is clang >= 8.0, but I don't know how to do that in shell.
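For what it's worth, here is one possible sketch of such a version check, assuming clang's usual `clang version X.Y.Z` banner format (the variable names and threshold are illustrative, not from the actual script):

```shell
#!/bin/sh
# Hypothetical sketch: only enable cuda_llvm when clang >= 8 is on PATH.
if command -v clang > /dev/null; then
  # Pull the major version out of e.g. "clang version 10.0.0-4ubuntu1".
  CLANG_MAJOR=$(clang --version | sed -n 's/.*clang version \([0-9]*\).*/\1/p' | head -n1)
  if [ "${CLANG_MAJOR:-0}" -ge 8 ]; then
    echo "clang ${CLANG_MAJOR} found; cuda_llvm should be usable"
  else
    echo "clang too old (need >= 8) or version not parsed" >&2
  fi
fi
```

This relies on `sed` extracting the first number after the literal `clang version`, which also tolerates prefixes like `Apple clang version ...`; a stricter parse would be needed if a toolchain prints something else entirely.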
Both concerns should be addressed with the merge here of #1207 - let's see if CI passes.
It did! 🎉
Adds the NotifySegment.Job field.
This is needed to identify a persistent session across segments.
* Update ffmpeg to use the Livepeer forked version
* Add h264_cuvid to the ffmpeg configure
* Use LLVM instead of CUDA nvcc
* Update unit tests that break as a result of the ffmpeg update
* Misc configure flag cleanup
Switched to the LLVM method of building CUDA support. Apparently we don't even need CUDA as a build-time dependency now? That's pretty neat.
Rebased and merged, thanks!
The recent change from `nvcc` to `clang` for the nvidia-flavored parts of our ffmpeg build chain [1] basically gave us this feature for free. All that was necessary was to add the mingw64 versions of the clang compiler and it started happily producing binaries that interface with CUDA on Windows.

The rest was just refactoring: I removed the Windows Docker build entirely. It was terrible, frequently taking close to 90 minutes per build, and it frequently crashed. This change implies that `docker/Dockerfile.build` and `docker/Dockerfile.build-linux` can be combined back into one Dockerfile.build process; I'll open a tech debt ticket for that, but I don't think it's major enough to block this merge.

Instead, we have a `.\windows-build.ps1` PowerShell script that takes care of downloading MSYS2 using [Chocolatey](https://chocolatey.org/), installing the necessary MSYS2/mingw64 packages, and running through the build. It unpacks everything into a local .gitignored directory, so it should be able to produce workspace-local binaries without mucking with any system-level packages installed on the host.

I've tested this locally at my desk with a Windows T, Linux O, and macOS B just to prove that it's possible. I have not done any kind of extensive benchmarking work, but I did confirm that the `ja/lb` changes functioned as expected with the appropriate Nvidia drivers.

This is the conclusion of a 10-month side project and I'm pretty stoked 😃

[1]: #1208
What does this pull request do? Explain your changes. (required)
Splits out the ffmpeg-related updates from #1124 in order to make that PR less unwieldy.
Specific updates (required)
How did you test each of these updates? (required)
Existing unit tests
Does this pull request close any open issues?
Checklist:
- ./test.sh passes