
Use of the error on the posterior probability while retiring #158

Open
surhudm opened this issue Apr 24, 2015 · 4 comments

Comments

@surhudm
Collaborator

surhudm commented Apr 24, 2015

Currently we do not use the error on the posterior probability when deciding whether a candidate is rejected. Can we check whether the errors on the posterior probabilities of the false negatives among the known lenses are large?

@cpadavis
Collaborator

I like this idea!

However, most of the known lenses in stage 2 that were rejected received the full 50 classifications that any subject was to receive in stage 2. (There were a couple that had fewer.)

So that makes me wonder -- how do we know that 50 observations is, on average, 'good enough' to reject a subject? We ought to be able to derive some relation for the error in the probability as a function of the expected values and variances of PL and PD. That way, if you have several high-skilled viewers, you should be able to show that you need fewer observations. (And conversely, if you have many low-skilled viewers, you might need more before you definitively reject an image.)
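
A minimal Monte Carlo sketch of that relation (hypothetical names, not SWAP's actual API; the flat Beta(1, 1) skill priors and the prior value are assumptions made here for illustration): treat each volunteer's PL and PD as Beta-distributed given their training record, push each draw through the Bayesian update, and read the error off the spread of the resulting posterior samples.

```python
import numpy as np

def posterior_samples(classifications, prior=2e-4, ndraws=1000, rng=None):
    """Monte Carlo draws of a subject's posterior lens probability.

    classifications -- one tuple per volunteer who classified the subject:
        (said_lens, nL_correct, nL_seen, nD_correct, nD_seen)
        said_lens  -- True if the volunteer marked the subject "LENS"
        nL_correct -- training lenses this volunteer got right
        nL_seen    -- training lenses this volunteer has seen
        nD_correct, nD_seen -- the same for training duds
    prior -- prior lens probability (value illustrative)
    """
    rng = rng or np.random.default_rng()
    p = np.full(ndraws, prior)
    for said_lens, nLc, nL, nDc, nD in classifications:
        # Skill posteriors: flat Beta(1, 1) priors updated by the
        # volunteer's training record (an assumption).
        PL = rng.beta(nLc + 1, nL - nLc + 1, size=ndraws)  # P("LENS" | LENS)
        PD = rng.beta(nDc + 1, nD - nDc + 1, size=ndraws)  # P("NOT"  | NOT)
        if said_lens:
            like_lens, like_dud = PL, 1.0 - PD
        else:
            like_lens, like_dud = 1.0 - PL, PD
        # Bayes update of P(LENS) for each skill draw.
        p = like_lens * p / (like_lens * p + like_dud * (1.0 - p))
    return p
```

The spread of `posterior_samples(...)` then behaves exactly as described above: a handful of volunteers with long, accurate training records collapses the scatter quickly, while many low-skill classifications leave it broad, so more of them are needed before a rejection is definitive.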

@anupreeta27
Collaborator

@cpadavis In reality, the missed known lenses were all missed at stage 1. So what you are saying is correct, and this should be tested at stage 1, not at stage 2.

@cpadavis
Collaborator

@anupreeta27 Ah yes, of course you're right. I think I was just thinking of the ones that made it to stage 2 and were inconclusive :)

@surhudm
Collaborator Author

surhudm commented Apr 24, 2015

Yes, the idea was to do this at stage 1 itself. For every subject we have samples from the current posterior probability distribution, which give an idea of the scatter. In principle we can delay retirement until a significant fraction of the samples are themselves rejected.

I was hoping this might be used to implement @anupreeta27's finding that using high-power users improves the false-negative rate.
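
A hedged sketch of that retirement rule, reusing the `posterior_samples` idea from above (the threshold and fraction values are illustrative, not the pipeline's actual settings):

```python
import numpy as np

def should_retire(samples, threshold=1e-5, fraction=0.95):
    """Retire a subject only once a significant fraction of its
    posterior samples fall below the rejection threshold.

    samples   -- posterior-probability draws for one subject
                 (e.g. from posterior_samples above)
    threshold -- rejection threshold on P(LENS) (illustrative)
    fraction  -- required fraction of rejected samples (illustrative)
    """
    return np.mean(samples < threshold) >= fraction
```

Weighting toward high-power users then falls out naturally: their tighter skill posteriors shrink the sample scatter, so their rejections clear the fraction test with fewer classifications.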
