This repository has been archived by the owner on Mar 19, 2023. It is now read-only.
I'm sometimes getting high-confidence false positives when detecting humans; they can be caused by a shadow, a dog, or other objects. I'm wondering if there's an easy way to filter those out. For example, a configuration parameter indicating how many frames must be sent to the API before a detection can be trusted. If my dog looks 80% like a human in one frame but the detection is missing in the next, it was probably a false positive. Can we implement something like that?
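For reference, a minimal sketch of what that filter could look like, assuming a hypothetical `MIN_CONSECUTIVE_FRAMES` config parameter and per-frame detection results already parsed into dicts (the field names are illustrative, not the project's actual schema):

```python
from collections import deque

MIN_CONSECUTIVE_FRAMES = 2   # hypothetical config: frames a label must persist
CONFIDENCE_THRESHOLD = 0.8   # per-frame confidence cutoff

# Rolling window of "was a person seen in this frame?" flags.
history = deque(maxlen=MIN_CONSECUTIVE_FRAMES)

def is_trusted(frame_detections):
    """Return True only once 'person' has been seen in N consecutive frames."""
    person_seen = any(
        d["label"] == "person" and d["confidence"] >= CONFIDENCE_THRESHOLD
        for d in frame_detections
    )
    history.append(person_seen)
    # A dog that looks 80% like a person for a single frame fails this check,
    # because the detection disappears from the next frame's window.
    return len(history) == history.maxlen and all(history)
```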
I'm for this as well. I'm currently using a combination of Frigate, DeepStack, CompreFace, and Double Take, and I still get false positives. With only three models trained, DeepStack usually has the wrong entry compared to CompreFace, which is why I added another detector to check against. That way both detectors have to come back positive before the rest of my automations run.
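For what it's worth, the agreement gate itself is simple once both results are in hand. A sketch, assuming each detector's response has already been parsed into a `{"name": ..., "confidence": ...}` dict (the field names and the `consensus_match` helper are mine, not part of Double Take or either detector):

```python
def consensus_match(ds_result, cf_result, threshold=0.8):
    """Accept an identity only when both detectors report the same name
    with sufficient confidence; otherwise treat it as a false positive."""
    if not ds_result or not cf_result:
        return None
    if (ds_result["name"] == cf_result["name"]
            and min(ds_result["confidence"], cf_result["confidence"]) >= threshold):
        return ds_result["name"]
    return None

# Example: the two detectors disagree, so no automation fires.
print(consensus_match({"name": "alice", "confidence": 0.91},
                      {"name": "bob", "confidence": 0.87}))   # -> None
```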
In general, you need to raise the confidence threshold, say to 90%.
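The same cutoff is easy to apply client-side regardless of which detector you use. A generic sketch (the detection dict layout is an assumption):

```python
CONFIDENCE_THRESHOLD = 0.90  # raised from the default; tune per camera

def filter_detections(detections):
    """Drop anything the model is less than 90% sure about."""
    return [d for d in detections if d["confidence"] >= CONFIDENCE_THRESHOLD]

# Example: the 80%-confident "dog that looks like a person" is discarded.
print(filter_detections([{"label": "person", "confidence": 0.80},
                         {"label": "person", "confidence": 0.95}]))
```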
The ensemble approach is interesting, and it makes sense if some models perform better under different conditions, e.g. at night. However, managing the ensemble can be complex.
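One lightweight way to manage it is to route frames to whichever model suits the current conditions rather than running every model on every frame. A hypothetical sketch (the `day_model`/`night_model` callables and the hour cutoffs are assumptions, not features of any tool in this thread):

```python
from datetime import datetime

def pick_detector(day_model, night_model, now=None):
    """Route frames to the detector expected to perform better right now,
    e.g. a model tuned for IR imagery after dark."""
    hour = (now or datetime.now()).hour
    return day_model if 7 <= hour < 19 else night_model
```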