
how to run the eval part of the program #2

Open
libingDY opened this issue May 20, 2021 · 5 comments

@libingDY

There are two eval programs in your project. How can I run and debug them?

@ajtejankar
Collaborator

I didn't understand what you mean by "debug them", but here is the command to run eval_linear.py:

python eval_linear.py \
  -j 16 \
  -b 256 \
  --arch resnet50 \
  --weights <path to the checkpoint> \
  --save <path to the directory where experiment output will be saved> \
  <path to the imagenet dataset root>

Command to run eval_knn.py:

python eval_knn.py \
  -j 16 \
  -b 256 \
  --arch resnet50 \
  --weights <path to the checkpoint> \
  --save <path to the directory where experiment output will be saved> \
  <path to the imagenet dataset root>

These are example commands. To see all command-line parameters and their default values, run the above commands with --help.

@libingDY
Author

Thank you. I found a problem when running eval_knn.py and eval_linear.py: the eval_linear.py accuracy is very low, but eval_knn.py gets 0.95. Why is this? I am running on my own data (10 classes).

@ajtejankar
Collaborator

Hi @18456432930,

Sorry, I didn't understand the problem. Can you be more specific? For instance, could you reproduce the numbers in our paper with our models? What augmentation does your model use? Some TensorFlow models do not require input normalization, but our code uses input normalization. Did you mean 0.95% or 95% with eval_knn.py? We use the standard PyTorch model definitions, so can you confirm that the model definition used for the checkpoint is compatible?
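
For reference, by input normalization I mean roughly the standard torchvision ImageNet preprocessing. This is only a sketch of the kind of transform involved, not necessarily the exact code in this repo:

import torchvision.transforms as transforms

# Standard ImageNet eval preprocessing with the usual torchvision
# mean/std. Some TensorFlow checkpoints instead expect raw [0, 1] or
# [-1, 1] inputs, so a mismatch here alone can hurt accuracy.
val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])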

Thank You
Ajinkya Tejankar

@libingDY
Author

First, I mean 95%. I only changed the data input in the code (from ImageNet to my own data) and ran it; no other changes were made to your code. The results: eval_linear.py gets 30% accuracy, while eval_knn.py gets 95%. I'm very sorry about my English.

@ajtejankar
Collaborator

ajtejankar commented May 25, 2021

I see. Our linear evaluation code uses a trick: the features are normalized by subtracting the mean and dividing by the standard deviation computed over the entire dataset, where the mean and std are calculated on the L2-normalized features. This could be a problem for your model, though I am not sure. Perhaps you can try the linear evaluation code from the official MoCo repository.
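
In code, the trick looks roughly like this (a sketch only, not the repo's actual implementation; the function name and the eps term are made up):

import torch
import torch.nn.functional as F

def normalize_features(train_feats, test_feats, eps=1e-6):
    # 1) L2-normalize each feature vector
    train_feats = F.normalize(train_feats, dim=1)
    test_feats = F.normalize(test_feats, dim=1)
    # 2) standardize with the mean/std of the L2-normalized features
    #    (computed here on the train split; the exact split the repo
    #    uses is an assumption)
    mean = train_feats.mean(dim=0, keepdim=True)
    std = train_feats.std(dim=0, keepdim=True)
    train_feats = (train_feats - mean) / (std + eps)
    test_feats = (test_feats - mean) / (std + eps)
    return train_feats, test_feats

If your features have a very different scale or distribution than ours, this standardization may behave unexpectedly, which could contribute to a low linear-probe accuracy.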

Don't worry about the English :)
