Currently we are not using the error on the posterior probability while determining whether a candidate is rejected. Can we check if the errors on the false negatives from the known lenses are large?
However, most of the known lenses in stage 2 that were rejected received the full 50 classifications that any subject was to receive in stage 2. (There were a couple that had fewer.)
So that makes me wonder -- how do we know that 50 observations are, on average, 'good enough' to reject a subject? We ought to be able to derive a relation for the error in the posterior probability as a function of the expected values and variances of PL and PD. That way, with several high-skilled viewers you should be able to show that fewer observations are needed. (And conversely, with many low-skilled viewers you might need more before you definitively reject an image.)
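For concreteness, here is a minimal sketch of how that could be probed numerically. It is not the SWAP implementation: the Beta parameterisation of each classifier's PL and PD from their training counts, the prior value, and the helper names are all assumptions, but it shows how the scatter in a subject's posterior follows from the uncertainty in the classifiers' skills.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_samples(history, prior=2e-4, n_samples=1000):
    """Propagate uncertainty in each classifier's (PL, PD) into the subject's
    posterior probability of being a lens.

    `history` is a list of tuples, one per classification:
      (said_lens, k_lens, n_lens, k_dud, n_dud)
    where k/n are that classifier's correct/total counts on training lenses
    and training duds, so PL = P("LENS" | lens) and PD = P("NOT" | dud).
    """
    p = np.full(n_samples, prior)
    for said_lens, k_lens, n_lens, k_dud, n_dud in history:
        # Beta posteriors on the skills, given the training counts (assumed form)
        PL = rng.beta(k_lens + 1, n_lens - k_lens + 1, size=n_samples)
        PD = rng.beta(k_dud + 1, n_dud - k_dud + 1, size=n_samples)
        if said_lens:
            p = PL * p / (PL * p + (1 - PD) * (1 - p))
        else:
            p = (1 - PL) * p / ((1 - PL) * p + PD * (1 - p))
    return p

# Ten skilled classifiers rejecting a subject give a much tighter (and lower)
# posterior than ten near-random ones, so fewer classifications may suffice.
skilled = [(False, 45, 50, 47, 50)] * 10
noisy = [(False, 11, 20, 12, 20)] * 10
for name, hist in [("skilled", skilled), ("noisy", noisy)]:
    s = posterior_samples(hist)
    print(f"{name}: mean={s.mean():.2e}, std={s.std():.2e}")
```

The same machinery could be used to ask how many classifications at a given skill level are needed before the upper tail of the samples drops below the rejection threshold.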
@cpadavis In reality, the missed known lenses were all missed at stage 1, so what you are saying is correct; this should be tested at stage 1, not at stage 2.
Yes, the idea was to do this at stage 1 itself. For every subject we have samples from the current posterior probability distribution, which give an idea of the scatter. In principle we could delay retirement until a significant fraction of those samples fall below the rejection threshold (sketched below).
I was hoping this might also be used to implement @anupreeta27's finding that relying on high-power users improves the false-negative rate.
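As a hedged sketch of what that retirement rule could look like, building on the sampler above (the threshold and the required fraction here are placeholders, not the project's actual settings):

```python
def ready_to_reject(samples, threshold=1e-7, fraction=0.95):
    """Retire (reject) a subject only once at least `fraction` of its
    posterior samples lie below the rejection `threshold`, rather than
    retiring as soon as the mean posterior crosses the line."""
    return float(np.mean(samples < threshold)) >= fraction
```

With higher-skilled (high-power) classifiers the samples tighten and cross the threshold together, so subjects can retire after fewer classifications; with noisier crowds the samples straddle the threshold for longer, which delays rejection and should help cut the false negatives among the known lenses.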