
comparison with other baseline methods (random, entropy) #86

Closed · chiran7 opened this issue Feb 6, 2023 · 4 comments
Labels: good first issue (Good for newcomers)

@chiran7 commented Feb 6, 2023

#43

Dear Author,

I want to generate some results for baseline methods, such as random and entropy sampling. I saw your following response in issue #43:
Hi,

The code with 2 baselines (random and entropy) is easy to implement. You only need to modify the function calculate_uncertainty in mmdet/apis/test.py.

For 2 other approaches (Core-set and CDAL), please refer to here (Core-set) and here (CDAL).

It should be noted that all other methods do not use the two adversarial classifiers and the MIL classifier.

Hope this is useful for you :)

My query regarding the issue:

I have a question about modifying the function calculate_uncertainty (https://github.com/yuantn/MI-AOD/blob/master/mmdet/apis/test.py#L15) in mmdet/apis/test.py.

Currently, for MI-AOD, the uncertainty is estimated as:

loss_l2_p = (y_head_f_1 - y_head_f_2).pow(2)                 # squared discrepancy between the two adversarial classifiers
uncertainty_all_N = loss_l2_p.mean(dim=1)                    # per-anchor uncertainty
arg = uncertainty_all_N.argsort()                            # rank anchors by uncertainty
uncertainty_single = uncertainty_all_N[arg[-cfg.k:]].mean()  # mean over the top-k most uncertain anchors
uncertainty[i] = uncertainty_single                          # image-level uncertainty score

Can you suggest a way to modify this function? Will using uncertainty_all_N without uncertainty_all_N.argsort() work for the random baseline method?

If so, won't all the minimization and maximization of uncertainty be the same as in MI-AOD?

It would be great if you could suggest a proper modification for the entropy-based method as well.

Thank you for your time and consideration.

@yuantn (Owner) commented Feb 20, 2023

For random sampling, you can use torch.rand() to create a random uncertainty tensor.

For entropy sampling, you can calculate the information entropy based on the confidence score output by the model, and use it as the uncertainty.
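A minimal sketch of the random baseline, reusing the per-image uncertainty[i] assignment from the snippet above (how exactly it slots into the existing loop in calculate_uncertainty is an assumption, not code from the repository):

import torch

# Random baseline: ignore the model outputs entirely and give each image a
# uniform random score; argsort() over these scores then picks images at random.
uncertainty[i] = torch.rand(1)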

@chiran7 (Author) commented Mar 7, 2023

Dear Author,

For the entropy sampling based method, it would be great if you could give some insight into the confidence score predicted by the model. The output of the model in test.py is:
y_head_f_1, y_head_f_2, y_head_cls = model(return_loss=False, rescale=True, return_box=return_box, **data)

  1. Can we use loss_l2_p, estimated as below, in place of the confidence score?
     from torch.distributions import Categorical
     loss_l2_p = (y_head_f_1 - y_head_f_2).pow(2)
     Then estimate the entropy-based uncertainty as:
     uncertainty_all_N = Categorical(probs=loss_l2_p).entropy()

  2. Or should y_head_cls be used as the confidence score of the model, estimating the entropy from this prediction?
     uncertainty_all_N = Categorical(probs=y_head_cls).entropy()

Should we follow option 1 or option 2 for entropy-based sampling?

Thank you for your time and consideration.

@yuantn (Owner) commented Apr 20, 2023

The confidence score is y_head_cls, which has been described sufficiently and clearly in the paper and code.
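That is, option 2 above. A minimal sketch of that entropy scoring, keeping the top-k aggregation from the MI-AOD snippet; the assumption that each row of y_head_cls is a vector of non-negative per-class scores that Categorical can normalize is mine, not from the paper:

import torch
from torch.distributions import Categorical

# y_head_cls is assumed to be a [num_anchors, num_classes] tensor of
# non-negative confidence scores; Categorical normalizes each row to sum to 1
# along the last dimension before computing the entropy.
entropy_all_N = Categorical(probs=y_head_cls).entropy()   # per-anchor entropy
arg = entropy_all_N.argsort()                             # rank anchors by entropy
uncertainty[i] = entropy_all_N[arg[-cfg.k:]].mean()       # mean of the top-k, as in MI-AOD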

yuantn added the good first issue (Good for newcomers) label on Apr 20, 2023
@chiran7 (Author) commented Apr 20, 2023 via email

yuantn closed this as completed Apr 21, 2023