Proper settings for LineMOD Occlusion #2
Hi, I re-implemented the framework using mmcv and ran experiments on LineMOD, training one model for each object.
Hi! Thanks for sharing the results! Actually, I was asking about Occluded-LineMOD in this issue, a subset of LineMOD in which only some of the occluded objects are annotated. Have you run experiments on Occluded-LineMOD? What are the results like? And could you please share your training settings (training dataset, number of epochs, etc.)? I can hardly reproduce the results in the paper.
Sorry, I have not tested on Occluded-LineMOD yet, but I think the data augmentation matters. Also, it seems you train on the PBR synthetic data, which may also explain the poor results. Anyway, I will test on Occluded-LineMOD; please stay tuned.
Thanks for your work!
Hello, thank you for your work!
Your paper reports experiments on LineMOD-Occlusion, but the corresponding configs and scripts are not included in this repo, so I implemented a baseline for it myself (see my fork). The results are below expectation: for example, on ape, ADI-0.10d reaches only about 8% when trained for 10 / 30 epochs. I suspect improper configuration is to blame (my config was merely adapted from the SwissCube dataset one). Could you please provide the corresponding scripts and configs so that I can reproduce the results in the paper?
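For reference, the ADD and ADI (ADD-S) pose errors behind the 0.10d criterion can be sketched as below. This is a minimal numpy sketch of the standard metric definitions, not code from this repo; the function names are mine:

```python
import numpy as np

def add_metric(pts, R_gt, t_gt, R_pred, t_pred):
    """ADD: mean distance between corresponding model points
    transformed by the ground-truth and predicted poses."""
    gt = pts @ R_gt.T + t_gt
    pred = pts @ R_pred.T + t_pred
    return float(np.linalg.norm(gt - pred, axis=1).mean())

def adi_metric(pts, R_gt, t_gt, R_pred, t_pred):
    """ADI (ADD-S): for symmetric objects, mean distance from each
    ground-truth point to its closest predicted point."""
    gt = pts @ R_gt.T + t_gt
    pred = pts @ R_pred.T + t_pred
    # Pairwise distances via broadcasting; fine for small point sets.
    dists = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

def pose_correct(err, diameter, thresh=0.10):
    """The 0.10d criterion: a pose counts as correct if the error
    is below 10% of the object's model diameter."""
    return err < thresh * diameter
```

The reported ADI-0.10d accuracy is then the fraction of test images whose `adi_metric` error passes `pose_correct` for the object's diameter.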