Dense Captioning Evaluation on VG Dataset #6
To evaluate the results, you can use the Docker image provided in jcjohnson/densecap#95.
Hi, Jialian. I have built the densecap environment using the Docker image you provided. Could you tell me more specifically how you use output/grit_b_densecap/vg_instances_results.json (or the checkpoint file) for evaluation to get the mAP result? I also found a Python evaluator for VG.
In their Lua evaluation code, there is a place that obtains the model's inference results. We replace it with our results read from the json.
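The substitution described above needs the flat results json regrouped per image before it can stand in for the model's predictions. Below is a minimal, hypothetical Python sketch of that step: it assumes a detectron2-style dump (a flat list of records with `image_id`, `bbox` as `[x, y, w, h]`, and `score`), and the caption field name `caption` is a guess about GRiT's output format, not confirmed by this thread.

```python
import json
import tempfile
from collections import defaultdict

def load_results_by_image(path):
    """Group flat detection records by image id.

    Assumed schema (hypothetical): each record has 'image_id',
    'bbox' ([x, y, w, h]), 'score', and a 'caption' string.
    """
    with open(path) as f:
        records = json.load(f)
    per_image = defaultdict(lambda: {"boxes": [], "captions": [], "scores": []})
    for rec in records:
        entry = per_image[rec["image_id"]]
        entry["boxes"].append(rec["bbox"])
        entry["captions"].append(rec.get("caption", ""))
        entry["scores"].append(rec["score"])
    return dict(per_image)

# Tiny demo with fabricated records in the assumed schema.
demo = [
    {"image_id": 63, "bbox": [10, 20, 50, 40], "score": 0.9, "caption": "a red bus"},
    {"image_id": 63, "bbox": [0, 0, 30, 30], "score": 0.4, "caption": "green grass"},
    {"image_id": 64, "bbox": [5, 5, 15, 15], "score": 0.7, "caption": "a dog"},
]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(demo, f)
    demo_path = f.name

grouped = load_results_by_image(demo_path)
print(len(grouped[63]["boxes"]))  # -> 2
```

The per-image dict mirrors what the Lua evaluator expects from a forward pass (boxes, captions, confidence scores for one image at a time), which is what makes the in-place replacement possible.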
Thanks a lot.
Hi, Jialian! Have you ever met this problem: "attempt to concatenate a nil value" in eval_utils.lua? I hit it while reading the ground truth when evaluating the model on image 63.jpg. It seems to be caused by nn.LanguageModel.idx_to_token: when idx=10579 the token is nil, which raises an error when it is concatenated.
I only changed the code to read the results json file, nothing else, and I didn't hit any errors in the evaluation code.
Could you please provide the evaluation code you edited so I can drop it into my Docker setup? Thanks.
Can you show more details on how you hit the issue? For example, share the full error stack.
I found the error: I didn't replace vocab_size and idx_to_token in the LM model. Thank you all so much.
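The failure mode above can be illustrated with a small, self-contained Python sketch (all names and the toy vocabularies are hypothetical): decoding token ids against a vocabulary that lacks one of the ids yields a missing token, the Python analogue of Lua's "attempt to concatenate a nil value", while decoding against the vocabulary the results were actually produced with succeeds.

```python
def decode(ids, idx_to_token):
    """Join token ids into a caption; fail loudly if an id has no token."""
    words = []
    for i in ids:
        tok = idx_to_token.get(i)
        if tok is None:
            # Mirrors the Lua error: concatenating nil is not allowed.
            raise KeyError(f"id {i} missing from vocab of size {len(idx_to_token)}")
        words.append(tok)
    return " ".join(words)

# Toy vocabularies (hypothetical): the one baked into the old checkpoint
# is smaller than the one the results json was generated with.
old_vocab = {1: "a", 2: "dog"}
new_vocab = {1: "a", 2: "dog", 3: "red"}

ids = [1, 3, 2]
try:
    decode(ids, old_vocab)        # id 3 is absent -> the "nil value" failure
except KeyError as e:
    print("decode failed:", e)
print(decode(ids, new_vocab))     # -> "a red dog"
```

This is why swapping in the results json also requires swapping in the matching vocab_size and idx_to_token, as noted above.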
@Evenyyy Hello, could you please share the edited code you used to evaluate the vg_instances_results.json file with DenseCap? Thank you very much! I still didn't quite follow the discussion between you two above.
Hello,
I am currently trying to reproduce GRiT's dense captioning results. I trained the model with the default settings and obtained a checkpoint, then ran inference on the VG test set and got the json results with
python train_net.py --num-gpus-per-machine 8 --config-file configs/GRiT_B_DenseCap.yaml --output-dir-name ./output/grit_b_densecap --eval-only MODEL.WEIGHTS models/grit_b_densecap.pth
However, when setting up the DenseCap environment, I got stuck installing (Lua) Torch on my GPU machine, which runs CUDA 12.0. I kept hitting this error:
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_cublas_device_LIBRARY (ADVANCED)
linked by target "THC" in directory /root/torch/extra/cutorch/lib/THC
Could you tell me what platform you use to install DenseCap and perform evaluation?
Thanks a lot!