Running two pipelines using rpicamsrc (one for each camera) causes internal data stream error #77
I wonder if it's the timeout thing again - 2 streams contending enough to occasionally cause a timeout requesting a frame?
Also note, I don't have a Compute Module - I can't test a 2 camera setup.
I've tested with various timeouts, and increasing to 1000ms does make it better, but I still get the error, albeit less often. I tried at the extremes, 100ms and 5000ms, and both fail immediately with the error, even with a single stream running. What does this timeout actually do? And is there a portion of the raspivid code I could look at to see why raspivid works with both cameras simultaneously?
I don't know what those results mean. They could mean that the timeout is unrelated and the failure is random, or that the 2 calls interfere with each other. rpicamsrc is a port of the raspivid code into a GStreamer plugin, so the code is largely the same. There's no global state I can think of in the plugin that implies 2 instances can't coexist. My instinct is that the main difference is likely to be that 2 raspivid instances run in separate address spaces, and that there may be some global state inside the (closed-source) MMAL layer that makes them interfere when running 2 copies inside the same process. I'm not sure how to prove that, though.
Do you think the Raspberry Pi devs on the Raspberry Pi forum might know? I'll ask and check back in here.
Wait - what I said doesn't make sense. Your failing example is running 2 separate gst-launch-1.0 instances, each one accessing a camera, not a single instance with 2 rpicamsrc elements inside. That should be pretty much the same as running 2 raspivid tasks, so I'm not sure how they're interfering with each other.
Can you think of any relevant differences between the operation of the raspivid and rpicamsrc code? I know you replicated the code, but even a small difference may have caused this.
There are a fair few differences, but I deliberately kept the code as unchanged as possible to ease future merges from raspivid. The only significant change that I think could really cause an error at runtime is using a call with a timeout when requesting a frame - hence me asking what effect adjusting that timeout has.
It seems like the timeout definitely has something to do with it. What method does raspivid use to capture frames? Does it not use a timeout?
raspivid uses mmal_queue_wait() instead of mmal_queue_timedwait(). Perhaps the timedwait can get interrupted before it actually reaches the timeout sometimes?
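For readers following along, here is a minimal sketch of the difference between the two approaches, assuming a `MMAL_QUEUE_T` fed by the encoder's output-port callback. The function names are illustrative, not the actual rpicamsrc source:

```c
#include "interface/mmal/mmal.h"
#include "interface/mmal/mmal_queue.h"

/* raspivid-style: block indefinitely until the encoder delivers a buffer. */
static MMAL_BUFFER_HEADER_T *get_frame_blocking(MMAL_QUEUE_T *q)
{
    return mmal_queue_wait(q);
}

/* rpicamsrc-style: wait at most timeout_ms; returns NULL if the timeout
 * expires first, which is what eventually surfaces on the GStreamer bus
 * as the "internal data stream error" in this issue's title. */
static MMAL_BUFFER_HEADER_T *get_frame_timed(MMAL_QUEUE_T *q,
                                             VCOS_UNSIGNED timeout_ms)
{
    return mmal_queue_timedwait(q, timeout_ms);
}
```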
I just pushed 6025129 to improve the error message when it's the timeout that fails, so the failure is easier to identify.
I've spent the last few hours testing the new code, running multiple trials to confirm my results. I am now getting a new error, and it is as random as the last one. I had one trial where both streams ran successfully for 20 minutes without error. Most trials ended around the 5 minute mark, when one stream gives this error:
However, I had one trial where both streams failed with this error at the same time. Thanks for trying to solve this timeout issue - let me know if there are any tests I can do to help, or anything else.
Did you try the suggestion above? Replace the call to mmal_queue_timedwait() with a call to mmal_queue_wait()?
I have replaced mmal_queue_timedwait() with mmal_queue_wait() as you suggested above. Now I don't get an error at all when one of the streams fails - it just crashes the pipeline, and I am not able to exit the pipeline properly (set its state to READY/NULL). I have to interrupt the program with a SIGINT.
OK - so it seems like the underlying MMAL layer really does fail to deliver a buffer then. I don't know of any other difference in the rpicamsrc port of raspivid that could account for that.
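One way to probe the "timedwait wakes up early" hypothesis without hanging forever like the plain mmal_queue_wait() build does would be a retry loop with an overall deadline. This is only a diagnostic sketch, not proposed plugin code; `wait_for_frame` and the 100ms slice are made up for illustration (`vcos_getmicrosecs()` is the clock in the VCOS layer MMAL is built on):

```c
#include <stdint.h>
#include <stdio.h>

#include "interface/vcos/vcos.h"
#include "interface/mmal/mmal.h"
#include "interface/mmal/mmal_queue.h"

/* Retry short timedwaits until a total deadline expires. If this still
 * returns NULL after total_ms, the queue genuinely stalled; if a buffer
 * arrives after one or more retries, the individual timedwait calls were
 * returning early. */
static MMAL_BUFFER_HEADER_T *wait_for_frame(MMAL_QUEUE_T *q, uint32_t total_ms)
{
    uint32_t start_ms = vcos_getmicrosecs() / 1000;
    unsigned retries = 0;

    for (;;) {
        /* Short slice so an early wakeup costs at most 100ms. */
        MMAL_BUFFER_HEADER_T *buf = mmal_queue_timedwait(q, 100);
        if (buf != NULL) {
            if (retries > 0)
                fprintf(stderr, "frame arrived after %u early wakeups\n",
                        retries);
            return buf;
        }
        if ((vcos_getmicrosecs() / 1000) - start_ms >= total_ms)
            return NULL; /* genuine stall - report the error upstream */
        retries++;
    }
}
```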
Interesting - well, I'm not going to scour both code bases to find the problem (at least not yet). I think for now I'm just going to restart the pipeline application-side when it fails. I might make a Raspberry Pi forum post asking the devs why this problem exists; if I find anything I will report back here.
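For anyone else hitting this, a minimal sketch of that application-side restart using GStreamer's standard bus watch; the pipeline string is condensed from the gst-launch-1.0 commands later in the thread, and error handling is kept to the essentials:

```c
#include <gst/gst.h>

/* Restart the pipeline whenever the bus reports an error. */
static gboolean on_bus_message(GstBus *bus, GstMessage *msg, gpointer data)
{
    GstElement *pipeline = GST_ELEMENT(data);

    if (GST_MESSAGE_TYPE(msg) == GST_MESSAGE_ERROR) {
        GError *err = NULL;
        gchar *debug = NULL;

        gst_message_parse_error(msg, &err, &debug);
        g_printerr("Stream failed (%s), restarting\n", err->message);
        g_clear_error(&err);
        g_free(debug);

        /* Tear the pipeline down and bring it straight back up. */
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
    }
    return TRUE; /* keep the watch installed */
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch(
        "rpicamsrc bitrate=2000000 preview=0 camera-number=0 ! "
        "video/x-h264,width=896,height=900,framerate=40/1 ! "
        "h264parse ! queue ! rtph264pay config-interval=1 pt=96 ! "
        "udpsink host=192.168.1.174 port=5002 sync=false", NULL);
    GstBus *bus = gst_element_get_bus(pipeline);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    gst_bus_add_watch(bus, on_bus_message, pipeline);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(loop);
    return 0;
}
```

Note this relies on the timedwait build, where the failure is actually reported on the bus; with the plain mmal_queue_wait() change the pipeline hangs instead of erroring, as noted above.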
@andrewloomis Try using the queue chain in GStreamer with the following options:
I have no problem streaming with one camera, using a command like:
gst-launch-1.0 -v rpicamsrc name=rpi0 bitrate=2000000 preview=0 hflip=1 vflip=1 camera-number=1 ! video/x-h264, width=896, height=900, framerate=40/1 ! h264parse ! queue ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.174 port=5003 sync=false
But when I try to run another pipeline with the other camera at the same time:
gst-launch-1.0 -v rpicamsrc name=rpi2 bitrate=2000000 preview=0 hflip=1 vflip=1 camera-number=0 ! video/x-h264, width=896, height=900, framerate=40/1 ! h264parse ! queue ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.174 port=5002 sync=false
The pipelines both work for a few seconds, but then one stream crashes with the error:
And the other stream goes on just fine. What is causing this error? For reference, the raspivid equivalents of these commands work just fine when run at the same time:
raspivid -v -cs 1 -ih -vf -hf -t 0 -h 900 -w 896 -b 2000000 -fps 40 -n -lev 4.1 -o - | gst-launch-1.0 -v fdsrc ! "video/x-h264, width=(int)896, height=(int)900, framerate=(fraction)40/1" ! h264parse ! queue ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.174 port=5002 sync=false
raspivid -v -cs 0 -ih -vf -hf -t 0 -h 900 -w 896 -b 2000000 -fps 40 -n -lev 4.1 -o - | gst-launch-1.0 -v fdsrc ! "video/x-h264, width=(int)896, height=(int)900, framerate=(fraction)40/1" ! h264parse ! queue ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.174 port=5003 sync=false