
Ffmpeg - Failed to create hwaccel device. Invalid argument #3063

Closed

TechNovation01 opened this issue Oct 23, 2020 · 40 comments

Comments

@TechNovation01
Environment:

  • Running docker container - dlandon/zoneminder
  • Zoneminder v1.34.22
  • ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
    built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04).
  • Intel CPU with onboard GPU: Intel HD Graphics 400

Describe the bug
Ffmpeg hardware acceleration settings for monitor decoding are not applied correctly, so hardware acceleration remains disabled.

To Reproduce
Steps to reproduce the behavior:

  1. Go to Monitor > Source and set the following parameters: DecoderHWAccelName = vaapi and DecoderHWAccelDevice = /dev/dri/renderD128
  2. Close the Monitor settings window so that the new hardware-acceleration settings for Ffmpeg are applied
  3. Error logging shows: ERR | Failed to create hwaccel device. Invalid argument | zm_ffmpeg_camera.cpp | 539
  4. ffmpeg falls back to software decoding without hardware acceleration

Other Steps taken to identify issue

  • Made device /dev/dri/renderD128 available in the docker container.
  • Worked around the container issue that in newer versions of Ubuntu /dev/dri/renderD128 is owned by the render group, and added this group to the default ZoneMinder user www-data.
  • Tested inside the docker container itself while logged in as www-data:
    • log in to container: docker run -it --user www-data zoneminder /bin/bash
    • test if ffmpeg hardware acceleration settings works inside this container by using an earlier recorded mp4 event movie (1000-video.mp4):
      ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i 1000-video.mp4 -f null -
      and this runs successfully without an error, reporting a speed of 207x, while the same command without the hardware acceleration settings only achieves 66x and finishes quite a bit slower.
  • Looking at the code in zm_ffmpeg_camera.cpp where the error is reported, av_hwdevice_ctx_create is apparently returning Invalid argument:
    ret = av_hwdevice_ctx_create(&hw_device_ctx, type,
        (hwaccel_device != "" ? hwaccel_device.c_str() : nullptr), nullptr, 0);
    if ( ret < 0 ) {
      Error("Failed to create hwaccel device. %s", av_make_error_string(ret).c_str());
      hw_pix_fmt = AV_PIX_FMT_NONE;
    } else {

Expected behavior
Hardware acceleration settings, applied in line with the ZoneMinder help pop-up, should enable hardware acceleration without an error message.

Debug Logs

ERR | Failed to create hwaccel device. Invalid argument | zm_ffmpeg_camera.cpp | 539

So ffmpeg hardware acceleration is confirmed to work correctly within the container, but I'm having problems with an Invalid Argument being passed to av_hwdevice_ctx_create in zm_ffmpeg_camera.cpp.
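The speed= figures from the two ffmpeg runs above can be compared programmatically rather than by eye. A minimal sketch; the parse_speed helper is hypothetical and the sample progress lines below are illustrative, not taken from ZoneMinder:

```python
import re

def parse_speed(ffmpeg_stderr: str) -> float:
    """Extract the last 'speed=NNNx' figure from ffmpeg's progress output."""
    matches = re.findall(r"speed=\s*([\d.]+)x", ffmpeg_stderr)
    if not matches:
        raise ValueError("no speed= figure found in ffmpeg output")
    return float(matches[-1])

# Illustrative figures matching the runs described above: 207x vs 66x.
hw = parse_speed("frame= 3006 fps=1029 time=00:10:01.20 speed= 207x")
sw = parse_speed("frame= 3006 fps= 330 time=00:10:01.20 speed= 66x")
print(f"hwaccel is {hw / sw:.1f}x faster")  # → hwaccel is 3.1x faster
```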

I hope this helps sufficiently to support in diagnosing the issue.

@welcome

welcome bot commented Oct 23, 2020

Thanks for opening your first issue here! Just a reminder, this forum is for Bug Reports only. Be sure to follow the issue template!

@connortechnology
Member

Logs at debug level 1 should provide more information.
It should actually not be necessary to specify the device, but shouldn't matter unless you've made a typo.

One thing to look for is permissions. /dev/dri/renderD128 in the old days was owned by root:video but in newer systems is root:render and I have no idea who owns it in a docker container. Please ensure that www-data is in whatever group has access to the device.
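The group check described above can be sketched in a few lines. can_access_render_node is a hypothetical helper, and it deliberately ignores ACLs and the file-owner case:

```python
import grp
import os
import pwd
import stat

def can_access_render_node(path: str, username: str) -> bool:
    """Rough sketch of the check described above: can `username` open
    `path` read/write via the group or world permission bits?
    Deliberately ignores the file-owner case and ACLs."""
    st = os.stat(path)
    mode = st.st_mode
    # World-readable and -writable (e.g. after chmod 666) always passes.
    if mode & stat.S_IROTH and mode & stat.S_IWOTH:
        return True
    # Otherwise the user must belong to the device's owning group
    # (video on older systems, render on newer ones) and the group
    # bits must allow read/write.
    primary_gid = pwd.getpwnam(username).pw_gid
    user_groups = {g.gr_name for g in grp.getgrall() if username in g.gr_mem}
    user_groups.add(grp.getgrgid(primary_gid).gr_name)
    owning_group = grp.getgrgid(st.st_gid).gr_name
    return (owning_group in user_groups
            and bool(mode & stat.S_IRGRP)
            and bool(mode & stat.S_IWGRP))

# Hypothetical usage inside the container:
# can_access_render_node("/dev/dri/renderD128", "www-data")
```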

@TechNovation01
Author

@connortechnology Thanks for your response.

  1. I was aware of the permissions "issue" in docker containers, and I tried to describe the steps I took under the heading Other Steps taken to identify issue to make sure it was not the culprit. So indeed, inside the container the www-data user now belongs to the render group (it is no longer video), and running ffmpeg with hardware acceleration actually works inside the container for this user. So I don't think it is a permission issue (anymore) - or did I miss something here?

  2. If I don't specify the device as you suggest (so I only fill in DecoderHWAccelName = vaapi and leave the DecoderHWAccelDevice field empty), I get the same error message. Of course, if I leave out vaapi too, the error message disappears, but then I suppose ffmpeg will also not be using hardware acceleration.

  3. I also tried to extend the logging by:

  • LOG_FFMPEG: checked
  • LOG_DEBUG: checked
  • LOG_DEBUG_TARGET: _zmc_m1
  • LOG_DEBUG_LEVEL: 1
    (and did a restart to make sure these settings were activated).

I have not yet found any additional info in the logs about what the problem could be, but strangely I do see the message:
Not enabling ffmpeg logs, as LOG_FFMPEG and/or LOG_DEBUG is disabled in options, or this monitor is not part of your debug targets

I find this a bit strange because all those settings have been enabled in the logging. So what did I do wrong or forget to enable/check in the logging settings?

@danlsgiga

Probably related to #2836 ?

@jmccoy555

Appears to be the same with v1.35.16 with an AMD GPU (running on XCP-ng, Debian 10 VM with Docker and dlandon/zoneminder.master)... some nice updates though btw :)

@kabadisha

I'm having exactly the same issue, also using the dlandon container (latest) on an ubuntu host.

I suspected permissions issues like OP so I tried creating the render group on the guest container with the same id as the host and then adding the www-data user to that group. No change.

I'm stumped at what else to try. Any ideas welcome.

@kabadisha

kabadisha commented Dec 30, 2020

Update:
Just restarted the container and found that I am now getting a different error (with debug enabled):

WAR [zmc_m1] [libva: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed]
WAR [zmc_m1] [Failed to initialise VAAPI connection: -1 (unknown libva error).]
ERR [zmc_m1] [Failed to create hwaccel device. Input/output error]

As such, it looks like adding the render group to the container, matching the host, has made some impact.
chmod'ing /dev/dri/renderD128 to 666 didn't get me any further sadly.

I installed vainfo on the container and it outputs the following:

vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.1.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_1
libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit

Still digging...

@kabadisha

kabadisha commented Dec 30, 2020

Ok, so I think I figured it out and found a fix.

It seems that the VAAPI driver does not support newer Intel chipsets, and I am running an Intel 10th Gen or "Comet Lake" processor:
intel/intel-vaapi-driver#499

Additionally, the new flavour from Intel seems to only be available on Ubuntu 20, not 18.

As such, in addition to adding the render group (see above), in the container I had to:

  1. Install Intel drivers from here: https://dgpu-docs.intel.com/installation-guides/ubuntu/ubuntu-bionic.html
  2. Install a backport of the Intel Media Driver: https://launchpad.net/~morphis/+archive/ubuntu/intel-media
    sudo add-apt-repository ppa:morphis/intel-media
    apt install va-driver-all
    

With that done, I ran vainfo and confirmed it ran with no more errors, then restarted the container and checked the logs. No more errors!

It's like 3.30am here now, so gonna get some sleep. Tomorrow I will repeat with a brand new container and check that I can repeat the fix, plus will check which steps are actually necessary.

@kabadisha

@dlandon based on my findings, I think this is an issue with your (much appreciated) containerisation of ZoneMinder rather than ZM itself. As such, I thought I would tag you as a potentially interested party.

Not sure if an upgrade to Ubuntu 20 base would resolve this and also not sure how much of a PITA that will be. I'd give it a go if I was more of a whizz with containerisation.

@kabadisha

Damn. I just tried to reproduce the fix on a fresh container and I can't seem to repeat it :-(
I can't seem to get vainfo to work this morning:

root@ddd0fc11fd3b:/# LIBVA_DRIVER_NAME=iHD-va-driver vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.5.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD-va-driver'
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD-va-driver_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit

@kabadisha

Ok, so I figured out the workaround.

The issue seems to be that newer Intel chipsets, like the ones in my 10th Gen i7, are not supported by the drivers and VA-API shipped with the version of Ubuntu that dlandon's docker image is based on (18.04 bionic).
A workaround is to patch in the updated & backported components.

In addition, the render group is not set up by default, and the user ZoneMinder runs as (www-data) does not have permission to use /dev/dri/renderD128.

To work around these issues, open a shell in your running container with docker exec -it zoneminder /bin/bash and run the following commands:

# Fix permissions:
groupadd -g 109 render
usermod -a -G render www-data

# Install updated Intel drivers (from Ubuntu 19 eoan) - all three lines below should be executed at once.
cat << EOF | tee /etc/apt/sources.list.d/intel-graphics.list
deb [trusted=yes arch=amd64] https://repositories.intel.com/graphics/ubuntu eoan main
EOF

# Install bionic backport:
add-apt-repository --yes ppa:morphis/intel-media

apt update && apt upgrade -y

You can check that it worked by running the following also:

#Test:
apt install vainfo
vainfo

All being well, the vainfo command should spit out something that looks like this:

error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.7.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_5
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.7 (libva 2.7.1)
vainfo: Driver version: Intel iHD driver - 1.0.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            :	VAEntrypointVLD
      VAProfileMPEG2Main              :	VAEntrypointVLD
      VAProfileH264Main               :	VAEntrypointVLD
      VAProfileH264Main               :	VAEntrypointEncSliceLP
      VAProfileH264High               :	VAEntrypointVLD
      VAProfileH264High               :	VAEntrypointEncSliceLP
      VAProfileJPEGBaseline           :	VAEntrypointVLD
      VAProfileJPEGBaseline           :	VAEntrypointEncPicture
      VAProfileH264ConstrainedBaseline:	VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:	VAEntrypointEncSliceLP
      VAProfileVP8Version0_3          :	VAEntrypointVLD
      VAProfileHEVCMain               :	VAEntrypointVLD
      VAProfileHEVCMain10             :	VAEntrypointVLD
      VAProfileVP9Profile0            :	VAEntrypointVLD
      VAProfileVP9Profile2            :	VAEntrypointVLD

Re-boot your container and you should be good to go!
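For anyone scripting this check rather than eyeballing vainfo output, a rough sketch. vainfo_ok is a hypothetical helper, and the heuristic only looks at the va_openDriver() result and init failed lines (the XDG_RUNTIME_DIR / X server errors are harmless here, since the DRM path is used rather than X):

```python
def vainfo_ok(output: str) -> bool:
    """Heuristic: did vainfo manage to open a VA-API driver?"""
    return ("va_openDriver() returns 0" in output
            and "init failed" not in output)

# Condensed examples modelled on the outputs pasted in this thread:
good = """libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_5
libva info: va_openDriver() returns 0"""
bad = """libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
libva info: va_openDriver() returns -1"""
print(vainfo_ok(good), vainfo_ok(bad))  # → True False
```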

While the above does solve the problem, I think it would probably be better to update dlandon's container to use a more up-to-date base Ubuntu version.

I've never built a container myself, but if I find the energy tomorrow I may fork his repo and give it a go. I can't imagine that ZoneMinder will care if we change the underlying Ubuntu version to 20. Will see how many errors the log throws out I guess!

Hope this is useful.

@TechNovation01
Author

@kabadisha Good work and thanks for looking into this issue.
Similarly to you (as mentioned in my initial post under Other Steps taken to identify issue) I fixed the issue of the permissions inside the container for the renderD128:

  • Made device /dev/dri/renderD128 available in the docker container.
  • Fixed container issue that in newer versions of Ubuntu /dev/dri/renderD128 is owned by render and added this group to the default zoneminder user www-data.

At that time I could even use ffmpeg from the command line with hwaccel enabled inside the ZoneMinder docker container and it functioned correctly (actual speed increase), but ZoneMinder itself (running in the same container) still gave errors.

Unfortunately I'm using an N3150 Intel processor, so a bit older generation than yours. So I'm curious whether this approach will also fix my original issue. Currently a bit tight on time, but I hope to give it a try early in the new year.

@kabadisha

FYI, I created a PR for an elegant fix to the permissions issue, inspired by how the guys at linuxserver solved the problem for their Plex image.

I have also tried upgrading to the alpha focal release of phusion/baseimage. Looks like it works fine and fixes the support for newer chipsets. Will edit this comment with a link when I figure out how to get dockerhub to build it.

@jmccoy555

jmccoy555 commented Dec 31, 2020

@kabadisha Good work! Fingers crossed this solves my GPU issue too...... If you've built it locally you can just push the image to Docker Hub. Quicker than waiting for it to build.

@dlandon

dlandon commented Dec 31, 2020

The phusion/baseimage for Focal is alpha. I may try to implement it in my master docker and see if it presents any problems. I do appreciate all the enthusiasm for the latest version of Ubuntu, but I can't risk support issues for the thousands of Zoneminder users that this might break. Since this is done in my spare time, I don't have a lot of time for support, especially since very few users feel my efforts are valuable enough to contribute. I'll do what I can when I can.

If you go off and fork my work and do your own thing and then make it available to the general public, be prepared for all the support that will ensue. Trust me, it's a lot of thankless work.

If someone is willing to fund this effort, I'd be more than happy to put more time into it.

@kabadisha

kabadisha commented Dec 31, 2020

No problemo my friend and thanks for the advice. I will bear it in mind.

I'm actually not that fussed about running the latest and greatest flavour of Ubuntu. If it ain't broke, don't fix it.
The reason I was giving it a go was because 18.04 doesn't seem to have native support for more modern Intel (and I assume other vendor) chipsets.
I worked around that (see above) with a really quite dirty 'hack' by slurping updates from eoan repos and using some backported stuff, however that didn't feel exactly neat.

As such, I tried updating to the alpha Focal release of phusion/baseimage, partly to check my hypothesis that an updated dist would remove the problem, but mostly for the intellectual exercise.

It's been a brain-stretching couple of days, but fun. I taught myself the basics of how to build Docker containers today by doing this. Even got dockerhub to auto-bobulate from my forked repo.

If anyone is interested in trying my forked version running on the alpha Focal release of phusion/baseimage, you can find it here.

To use it, pull the tag: kabadisha/zoneminder:release-v1.35-alpha

Source here: https://github.com/kabadisha/zoneminder

UPDATE: @dlandon has very kindly now updated the development/alpha release of the ZoneMinder image to use the alpha Focal release of phusion/baseimage and has also included my fix for the permissions issue, which is fab. As such, if you want to try out these updates, please use that version (not my forked one, which I will now remove to avoid confusion).
You can use it with: docker pull dlandon/zoneminder.master

Source here: https://github.com/dlandon/zoneminder.master-docker

I have fired it up and have no issues so far. I can confirm that it does also fix the chipset support issue I was having.

HOWEVER I have not tried any of the fancy stuff like YOLO. All environment variables for those features are currently set to 0. I will try them later.

@dlandon having now spent some decent time with the code myself, I can see how much effort went into it. You can expect a little late Christmas gift as soon as I locate the donate button that I'm sure I saw somewhere. Thanks for creating and maintaining this image - really appreciate it :-)

Happy new year everyone!

@dlandon

dlandon commented Dec 31, 2020

I'm going to update my master docker to Focal and put it out there for some testing.

@kabadisha

Cool - no pressure btw.

Maybe just tag it as an alpha pre-release in github so that people who are pulling from :latest don't get it automatically.
Apologies if I'm teaching you to suck eggs BTW - this is all pretty new to me.

For those with the problem in this specific issue - you will still have permissions problems until @dlandon has had a chance to review my PR (again, no pressure intended), but you can try my fix for the permissions yourself by following the instructions in my comment on a related issue here: dlandon/zoneminder.machine.learning#105 (comment)

@kabadisha

Oh, BTW @dlandon the only change I had to make, other than changing the base image, was to remove the ffmpeg backport, since it isn't needed any more. Focal has v4 available natively.
Let me know if you want me to test anything - I don't mind being a guinea pig.

@dlandon

dlandon commented Dec 31, 2020

I appreciate your efforts here, but give me some time to get the master docker together with Focal and apply some fixes that have come up. I will then put that out there for testing so we can control the feedback and changes that will come up. You should then mark your docker as private or remove it, as it will cause a lot of confusion. The two dockers I maintain are official, and yours will lead to a lot of confusion and requests for my support, because my name is in yours as the maintainer. Let's keep this effort in one place and under control.

@dlandon

dlandon commented Dec 31, 2020

Yes, that makes sense. I am working on a new build and will do some initial testing. I'll also include your PR for the 1.34 docker.

@kabadisha

kabadisha commented Dec 31, 2020

Fabulous and sure thing about making my repo private. I don't want to cause any chaos :-)
Sorry if I should have changed the maintainer - I didn't spot that. Will do it when I get a sec and also add a warning at the top of the readme.md on my repo.

@dlandon

dlandon commented Jan 1, 2021

The master docker is building right now. It is updated to Focal (Ubuntu 20.04). I don't have any way to test hook processing, so hopefully someone here can give it a spin and let me know if any issues come up.

@kabadisha

Noice!
It's just gone midnight here and I'm pretty drunk right now, so probably gonna wait till the morning. Actually, better make that the afternoon...

Cheers!

@dlandon

dlandon commented Jan 1, 2021

Probably a good idea. Wait until you can think straight.

@kabadisha

Ok, so I managed to avoid too bad a hangover thank goodness!
As such, I just tried the updated master and initial indications are good - the permissions fix seems to be working fine and decoder hardware acceleration seems to be working great, which is excellent. All the ZoneMinder updates seem to have applied without issue also. Bonus.

I've also closed down my fork, removed it from DockerHub and updated my comment above to avoid confusion for people.

Maybe just tag it as an alpha pre-release in github so that people who are pulling from :latest don't get it automatically.

@dlandon Apologies, I only just realised that you are maintaining a separate repo for 'master'. Not seen that before, so threw me a little. I guess it's an effective way to make sure people don't accidentally end up on an unstable release.

If I have any future changes to propose, should I raise PRs against that repo instead?
Also, what's the best forum for discussing changes with the community? Is it via a PR comment thread? - I'd rather adopt your conventions so as not to be disruptive.

Cheers

@multijohn

Hello,
So, for my system with a Skylake i7 and Ubuntu 16.04, will updating the drivers make ZoneMinder work with vaapi?
Ffmpeg alone works great!
What can't ZoneMinder see with the old drivers?
Thank you!

@dlandon

dlandon commented Jan 1, 2021

@kabadisha The master docker was built to try out the newest under development Zoneminder version. If it's bleeding edge stuff or new ideas, that would probably be the best place to discuss.

I am preparing a new master docker that has the pre-release version 6.1.0 of ES. I'm in the final stages of testing and will release a new docker within a few hours. Give that a spin and let me know how it goes. I have no way of testing hook processing.

The Focal version of Ubuntu looks like it will not present any problems. One minor issue is that the syslog-ng doesn't seem to be the latest version and I haven't been able to find an appropriate package. Shouldn't be a show stopper though. I suspect phusion will update it in the next release.

@multijohn It's my understanding that Ubuntu 20.04 is needed for the latest hardware drivers.

@kabadisha

kabadisha commented Jan 1, 2021

Hey @multijohn,
The discussion above primarily concerns those of us using dlandon's excellent docker image, so probably not directly relevant to you if you are just running ZoneMinder directly on your Ubuntu box.

However, you may find that the version of Ubuntu you are using does not ship Intel driver and vaapi versions recent enough to play nice with your processor.
You can test pretty easily by running vainfo on the command line.
If you get errors like I was seeing, check the permissions on your /dev/dri/* devices. If that isn't the issue, then you may well be facing an upgrade to a newer version of Ubuntu.

HOWEVER any performance increase is likely to be marginal at best, so I wouldn't bother if your system is coping fine at the moment.

@kabadisha

@dlandon will do!
Full disclosure: I also don't have a way to test hook processing yet - I plan on playing with that next now that I have all the basics working. ZoneMinder is rather more complicated than I anticipated.

Initially I just want to send a notification to discord with the video of the event, but later I will probably play with the ML stuff for funsies.

@kabadisha

@TechNovation01 maybe we should close this issue now that we know why it is happening and how to fix it?
It doesn't seem to be a defect in ZoneMinder itself.

Let us know if the fix worked for you too.

@jmccoy555

Morning... all appears to be working with the latest master. Object detection is detecting and not complaining.

Going to check a fresh install now and also try with my AMD GPU.

@dlandon

dlandon commented Jan 1, 2021

Just built a new docker that has the ES 6.1.0 pre-release installed. Give it a try.

@jmccoy555

With the version built 14 hours ago, AMD GPU acceleration is working.

Verified by watching watch -n 0.5 cat /sys/kernel/debug/dri/1/amdgpu_pm_info before and after adding vaapi to the source setting.

This is a VM with the GPU passed through, then passed through again to Docker, so it's the only thing using it (/dri/1 will probably be 0 in most cases).

This issue appears resolved. Will try the latest build now and update on the other issues in the relative threads.

@kabadisha

Hey Guys,
So I have been doing a bit of testing and may have found an issue, but I'm not sure - my Google-fu isn't giving me much, and reading the relevant code reminded me how long it's been since I was a dev.

When I enable hardware decoding on an rtsp (h264) stream, I sporadically get errors like this:
ERR [zmc_m1] [Unable to transfer frame at frame 2620: Input/output error, continuing]

Looking at the code itself, it seems to be an issue when transferring data from the GPU to the CPU.
It calls the underlying ffmpeg function av_hwframe_transfer_data, and reading the docs suggests that function must be returning an AVERROR code, which gets translated to the rather unhelpful Input/output error.

Any ideas for how I might diagnose further?
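For plain system errors, FFmpeg's AVERROR codes are just negated POSIX errno values (AVERROR(EIO) is -5), which is why the log line reads Input/output error. A sketch of decoding them, assuming only errno-wrapped codes; FFmpeg-specific tagged codes such as AVERROR_EOF are not covered:

```python
import errno
import os

def averror_to_message(code: int) -> str:
    """Translate a negative AVERROR code back into its errno text.
    FFmpeg defines AVERROR(e) as -e for POSIX errors, so the -EIO (-5)
    returned by av_hwframe_transfer_data surfaces as 'Input/output error'."""
    if code >= 0:
        return "not an error"
    return os.strerror(-code)

print(averror_to_message(-errno.EIO))     # "Input/output error" on Linux
print(averror_to_message(-errno.EINVAL))  # "Invalid argument" on Linux
```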

@kabadisha

Foolishly, I just realised I should have enabled debug to get more juicy logs. Just did that now:

19:06:20 e1aff10c2b72 zmwatch[996]: WAR [Restarting capture daemon for Lounge, time since last capture 54 seconds (1609614380-1609614326)]
19:06:20 e1aff10c2b72 zmdc[946]: INF ['zmc -m 1' sending stop to pid 975 at 21/01/02 19:06:20]
19:06:20 e1aff10c2b72 zmc_m1[975]: WAR [zmc_m1] [Failed to sync surface 0x13: 23 (internal decoding error).]
19:06:20 e1aff10c2b72 zmc_m1[975]: ERR [zmc_m1] [Unable to transfer frame at frame 1262: Input/output error, continuing]
19:06:20 e1aff10c2b72 zmdc[946]: INF ['zmc -m 1' exited normally]
19:06:21 e1aff10c2b72 zmdc[946]: INF [Starting pending process, zmc -m 1]
19:06:21 e1aff10c2b72 zmdc[946]: INF ['zmc -m 1' starting at 21/01/02 19:06:21, pid = 1063]
19:06:21 e1aff10c2b72 zmdc[1063]: INF ['zmc -m 1' started at 21/01/02 19:06:21]
19:06:22 e1aff10c2b72 zmc_m1[1063]: INF [zmc_m1] [Enabling ffmpeg logs, as LOG_DEBUG+LOG_FFMPEG are enabled in options]
19:06:22 e1aff10c2b72 zmc_m1[1063]: INF [zmc_m1] [Starting Capture version 1.35.16]

Based on the above, I am beginning to suspect that these interruptions are caused by the source camera having some kind of wobble, which causes the capture daemon to restart. Unless anyone has any better ideas, I'll chalk it up to that.

For context, the source cameras are Raspberry Pis with camera modules running motionEyeOS in 'Fast Network Camera' mode, which basically just takes the native 1080p h.264 stream output by the camera module and spits it out on an rtsp stream. Works rather well for such cheap devices.

It was also a hell of a lot easier and more reliable than trying to stream directly using ffmpeg and some other libraries. Burned a full day down that particular rabbit hole...

@kabadisha

I would also be interested if anyone on this thread saw any performance benefit from getting the hardware acceleration working.
For my use case, enabling hardware decoding actually seems to measurably increase cpu utilisation and load average (as measured by htop) rather than decrease it.
I have a feeling that's because the feeds are encoded as h.264 anyway, so don't actually need to be transcoded in any way...
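One way to quantify this is ffmpeg's -benchmark flag, which prints a summary line like bench: utime=...s stime=...s rtime=...s. A sketch that parses that line so a software run and a VAAPI run of the same clip can be compared; the parse_bench_utime helper and the two sample lines below are illustrative, not real measurements:

```python
import re

def parse_bench_utime(ffmpeg_stderr: str) -> float:
    """Pull the user-CPU seconds from ffmpeg's `-benchmark` summary line,
    e.g. 'bench: utime=4.123s stime=0.456s rtime=2.789s'."""
    m = re.search(r"bench:\s+utime=([\d.]+)s", ffmpeg_stderr)
    if m is None:
        raise ValueError("no bench line; was ffmpeg run with -benchmark?")
    return float(m.group(1))

# Made-up figures for illustration (matching the observation above that
# hw decode can cost MORE cpu than software decode):
sw = parse_bench_utime("bench: utime=9.480s stime=0.210s rtime=5.102s")
hw = parse_bench_utime("bench: utime=11.020s stime=0.450s rtime=4.990s")
print("hw decode used MORE cpu" if hw > sw else "hw decode used less cpu")
```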

@jmccoy555

@kabadisha yeah, similar story here. Which is good in a way, because I'm thinking of moving my ZoneMinder into Kubernetes (which could be fun!) and not having it tied to a GPU will make much more sense.

@TechNovation01
Author

@kabadisha

@TechNovation01 maybe we should close this issue now that we know why it is happening and how to fix it?
It doesn't seem to be a defect with ZoneMinder itself..

Let us know if the fix worked for you too.

Been doing some testing again on the zoneminder.master-docker with my configuration.
So far no success unfortunately:

  • when I run vainfo within the docker container, I do get:
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.7.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_7
libva error: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
libva info: va_openDriver() returns 1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_6
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.7 (libva 2.7.1)
vainfo: Driver version: Intel i965 driver for Intel(R) CherryView - 2.4.0
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Simple            : VAEntrypointEncSlice
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointEncSlice
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileH264MultiviewHigh      : VAEntrypointVLD
      VAProfileH264MultiviewHigh      : VAEntrypointEncSlice
      VAProfileH264StereoHigh         : VAEntrypointVLD
      VAProfileH264StereoHigh         : VAEntrypointEncSlice
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointEncPicture
      VAProfileVP8Version0_3          : VAEntrypointVLD
      VAProfileVP8Version0_3          : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
  • When I run a hwaccelerated ffmpeg test
    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i 1000-video.mp4 -f null -
    from within the docker container, it also appears to work:
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '1000-video.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1iso6mp41
    title           : Zoneminder Security Recording
    encoder         : Lavf57.83.100
  Duration: 00:10:01.20, start: 0.000000, bitrate: 333 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1280x800, 332 kb/s, 5 fps, 5 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      handler_name    : VideoHandler
[AVHWDeviceContext @ 0x558a58085d40] libva: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> wrapped_avframe (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1iso6mp41
    title           : Zoneminder Security Recording
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: wrapped_avframe, vaapi_vld, 1280x800, q=2-31, 200 kb/s, 5 fps, 5 tbn, 5 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      encoder         : Lavc58.54.100 wrapped_avframe
frame= 3006 fps=1029 q=-0.0 Lsize=N/A time=00:10:01.20 bitrate=N/A speed= 206x    
video:1573kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
  • However in Zoneminder I still get the following log messages when setting vaapi (the last line, No VA display found for device, from zm_ffmpeg.cpp is new to me):
 zmc_m1| Failed to create hwaccel device. Invalid argument | zm_ffmpeg_camera.cpp | 545
 zmc_m1 | No VA display found for device /dev/dri/renderD128. | zm_ffmpeg.cpp | 70

Still trying some other checks/changes, but my CPU is a bit slow at re-building the docker container (opencv takes ages), which is why I thought some hwaccel could help my system (though from this thread it may not bring the boost I'm hoping for).
So I'm still open to hints based on the above results, but if this just appears to be an issue with only my "config" (or my lack of sufficient knowledge on the topic), I'm OK with closing the issue if that is preferred.

@TechNovation01
Author

I no longer have the error messages, so it appears to have been fixed.
Unfortunately it does not really appear to provide any substantial reduction in CPU usage.

I'm however happy to see that release v1.35 introduces the option to disable decoding (Decoding Enabled on the General Monitor tab) when in Record or Nodect mode with H264 passthrough and no jpegs being saved. This really does reduce CPU usage.
