Comparison with other baseline methods (random, entropy) #86
Comments
For random sampling, you can select images at random. For entropy sampling, you can calculate the information entropy based on the confidence scores output by the model and use it as the uncertainty.
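The entropy computation described above can be sketched as follows. This is a minimal NumPy illustration, not the repo's code: the function name `entropy_uncertainty` and the `(num_boxes, num_classes)` shape are my assumptions, and in MI-AOD itself you would apply the equivalent torch ops to the model's softmax confidence scores.

```python
import numpy as np

def entropy_uncertainty(probs, eps=1e-12):
    """Shannon entropy of per-class confidence scores.

    probs: array of shape (num_boxes, num_classes); each row is
    assumed to be a probability distribution (e.g. softmax output).
    Returns one entropy value per box; higher = more uncertain.
    """
    p = np.clip(probs, eps, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=1)

# A confident prediction has low entropy; a uniform one has the maximum.
confident = np.array([[0.98, 0.01, 0.01]])
uniform = np.array([[1/3, 1/3, 1/3]])
print(entropy_uncertainty(confident))  # low entropy
print(entropy_uncertainty(uniform))    # ≈ ln(3) ≈ 1.0986, the maximum for 3 classes
```

Images whose detections have the highest average (or top-k) entropy would then be selected for labeling.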
Dear Author, for the entropy-sampling-based method, it would be great if you could give some insight into the confidence score predicted by the model.
Should we follow step 1 or step 2 for entropy-based sampling? Thank you for your time and consideration.
The confidence score is y_head_cls, which has been described sufficiently and clearly in the paper and code.
Ok, thank you.
Dear Author,
I want to generate some results for baseline methods, such as random and entropy. I saw your response to issue #43, quoted below:
Hi,
The code with 2 baselines (random and entropy) is easy to implement. You only need to modify the function calculate_uncertainty in mmdet/apis/test.py.
For 2 other approaches (Core-set and CDAL), please refer to here (Core-set) and here (CDAL).
It should be noted that all other methods do not use the two adversarial classifiers and the MIL classifier.
Hope this is useful for you :)
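Following the suggestion above to modify `calculate_uncertainty`, the random baseline can be sketched by assigning every unlabeled image a random score, so that sorting and taking the top-k amounts to uniform random selection. This is a hedged NumPy sketch with a hypothetical helper name; the real `calculate_uncertainty` in mmdet/apis/test.py takes the model and data loader as arguments.

```python
import numpy as np

def calculate_uncertainty_random(num_images, seed=None):
    """Random-baseline stand-in: one random 'uncertainty' per image.

    Sorting these scores and keeping the top-k is equivalent to
    drawing k images uniformly at random, which is exactly the
    random-sampling baseline.
    """
    rng = np.random.default_rng(seed)
    return rng.random(num_images)

scores = calculate_uncertainty_random(1000, seed=0)
top_k = np.argsort(scores)[-100:]  # indices of the 100 "most uncertain" images
```

Because the scores are independent of the model, no adversarial classifiers or MIL head are involved, consistent with the note above that the baselines do not use them.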
My query regarding the issue:
I have a query regarding the modification of the function `calculate_uncertainty` (https://github.com/yuantn/MI-AOD/blob/master/mmdet/apis/test.py#L15) in mmdet/apis/test.py.
Currently, for MI-AOD, it estimates uncertainty as:

```python
# Squared discrepancy between the two adversarial classifier heads
loss_l2_p = (y_head_f_1 - y_head_f_2).pow(2)
# Average over classes: one uncertainty score per anchor
uncertainty_all_N = loss_l2_p.mean(dim=1)
# Average the k anchors with the highest uncertainty
arg = uncertainty_all_N.argsort()
uncertainty_single = uncertainty_all_N[arg[-cfg.k:]].mean()
uncertainty[i] = uncertainty_single
```
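For the entropy baseline, one possible adaptation of the same per-image aggregation is to replace the two-classifier discrepancy with the entropy of y_head_cls, which the maintainer identifies as the confidence score. This is only a sketch in NumPy under stated assumptions: the function name is hypothetical, y_head_cls is assumed to hold per-anchor class probabilities of shape (num_anchors, num_classes), and `k` mirrors cfg.k in the original snippet.

```python
import numpy as np

def entropy_image_uncertainty(y_head_cls, k, eps=1e-12):
    """Entropy-baseline sketch mirroring MI-AOD's top-k aggregation.

    y_head_cls: assumed per-anchor class probabilities,
    shape (num_anchors, num_classes).
    """
    p = np.clip(y_head_cls, eps, 1.0)                  # avoid log(0)
    uncertainty_all_N = -(p * np.log(p)).sum(axis=1)   # per-anchor entropy
    arg = uncertainty_all_N.argsort()
    return uncertainty_all_N[arg[-k:]].mean()          # mean of k most uncertain anchors
```

Whether to keep the top-k aggregation or simply average over all anchors is a design choice the baseline papers do not fix; the top-k form is used here only to stay close to the snippet above.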
Can you suggest a way to modify this function? Will using uncertainty_all_N without the uncertainty_all_N.argsort() step work for the random baseline method?
If so, won't the minimization and maximization of uncertainty be the same as in MI-AOD?
It would be great if you could suggest a proper modification for the entropy-based method as well.
Thank you for your time and consideration.