As identified in #70, using --frame-skip also results in the skipped frames being omitted from the resulting video output. This is undesirable, and will require some changes so that frame skipping is avoided while we're in a motion event.
To accomplish this correctly, proper seeking needs to be added, since we need to go back and re-process all the frames we skipped so that --time-before can be respected. Another issue arises when using --bounding-box, since we would need to interpolate the box between frames.
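A rough sketch of what that seek-and-reprocess step might look like, assuming OpenCV's VideoCapture is the decode backend; the helper names and callback here are hypothetical, not DVR-Scan's actual API:

```python
import cv2

def reprocess_skipped_frames(cap, event_start_frame, skipped, process_frame):
    """Seek back and re-decode the frames that frame-skip dropped.

    cap: an opened cv2.VideoCapture
    event_start_frame: index of the frame where motion was detected
    skipped: number of frames skipped before detection
    process_frame: callback that scores/encodes a single frame
    """
    # Rewind to the first skipped frame. Note that CAP_PROP_POS_FRAMES
    # can be imprecise on some codecs (it may snap to a keyframe).
    first = max(0, event_start_frame - skipped)
    cap.set(cv2.CAP_PROP_POS_FRAMES, first)
    for _ in range(skipped):
        ret, frame = cap.read()
        if not ret:
            break
        process_frame(frame)

def lerp_bounding_box(box_a, box_b, t):
    """Linearly interpolate two (x, y, w, h) boxes for in-between frames."""
    return tuple(int(a + (b - a) * t) for a, b in zip(box_a, box_b))
```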
A better and more robust solution might be to start processing every frame once motion is detected, and only use frame skipping while searching for events. Although this incurs a slight performance hit, encoding the output video is more computationally expensive than the background subtraction itself, so this approach has merit (see the sketch below).
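For reference, that adaptive approach could look roughly like this; a minimal sketch where detect_motion and write_frame are stand-ins for DVR-Scan's actual detector and encoder, not real APIs:

```python
import cv2

def scan(cap, detect_motion, write_frame, frame_skip=4):
    """Skip frames only while idle; process every frame during an event."""
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if detect_motion(frame):     # background subtraction score check
            write_frame(frame)       # encode every frame of the event
        else:
            # No motion: cheap to skip ahead, since nothing is encoded.
            # grab() advances the stream without the costly retrieve step.
            for _ in range(frame_skip):
                if not cap.grab():
                    return
```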
This might actually be easier if video output is first moved to a separate process as part of #52, and integration with PySceneDetect's VideoManager (pin to v0.5.6.1 for now).
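As a purely illustrative shape for that separation (not #52's actual design), the scan loop could hand frames to a dedicated encoder process over a queue:

```python
import multiprocessing as mp
import cv2

def encoder_proc(queue, path, fourcc, fps, size):
    """Drain frames from the queue and encode them, off the main process."""
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*fourcc), fps, size)
    while True:
        frame = queue.get()
        if frame is None:        # sentinel: no more frames
            break
        writer.write(frame)
    writer.release()

# Usage sketch: detection stays responsive while encoding happens elsewhere.
# queue = mp.Queue(maxsize=64)
# proc = mp.Process(target=encoder_proc,
#                   args=(queue, "event.avi", "XVID", 30.0, (640, 480)))
# proc.start()
# ... queue.put(frame) for each event frame ...
# queue.put(None)
# proc.join()
```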
Breakthrough changed the title from "Video output when using --frame-skip should include all source frames" to "Video output when using --frame-skip should include all frames" on Feb 15, 2022.
To go about this, when we seek backwards, all frame mask computations should use a learning rate of 0 so we don't update the background model. From that point forward, everything proceeds as normal.
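In OpenCV terms, the apply() call on a background subtractor takes a learningRate argument, so the re-processing pass could look like this (a sketch; the choice of MOG2 here is an assumption, not necessarily DVR-Scan's configured subtractor):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

def rescore_frame(frame):
    # learningRate=0 computes the foreground mask against the existing
    # background model without updating it, so re-reading skipped frames
    # doesn't pollute the model with out-of-order data.
    return subtractor.apply(frame, learningRate=0)
```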
This isn't fully supported yet using the default OpenCV output mode, but as of v1.5 it should function correctly if you use the -m ffmpeg or -m copy flags.