
Question about the pre-trained weights #2

Open
LZL-CS opened this issue Sep 15, 2023 · 9 comments

Comments

@LZL-CS

LZL-CS commented Sep 15, 2023

Hi @Tao-11-chen,

Thanks for your excellent work!
Can you share your trained weights for the pose regression model and the pose optimization model so that I can evaluate the results?

@Tao-11-chen
Collaborator

Sure, the pre-trained weights will be published soon. I'll also test this method on public datasets like Cambridge and 7Scenes.

@LZL-CS
Author

LZL-CS commented Sep 19, 2023

@Tao-11-chen Thanks for your reply. I trained all the models following the README file, but I have some questions:

  1. When I was training the "Pose Regressor" module, I visualized the loss with TensorBoard and got the final best rotation and translation results. The loss curve is quite volatile. Do you think something is wrong with this result?
[attached: TensorBoard loss-curve screenshots]
  2. In the "Pose Optimization" module, it seems that your code doesn't use the init_poses trained by the "Pose Regressor" module. Actually, you just perturb the ground-truth poses and use them as the init_poses input to the trained mega_nerf, which is the same approach as iNeRF. So I am confused about why this differs from your paper's architecture.
[attached: screenshot]

Maybe I'm missing something important; I appreciate any help you can provide!
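The iNeRF-style initialization mentioned above (randomly perturbing a ground-truth pose to get init_poses) can be sketched roughly as follows. The function name, noise magnitudes, and 4x4 camera-to-world pose convention are illustrative assumptions, not taken from this repo:

```python
import numpy as np

def perturb_pose(pose_gt, rot_deg=5.0, trans_sigma=0.1, rng=None):
    """Perturb a 4x4 ground-truth pose with random rotation (degrees)
    and Gaussian translation noise, iNeRF-style. All names and noise
    magnitudes here are illustrative, not from the repo."""
    rng = np.random.default_rng() if rng is None else rng
    # Random unit rotation axis and fixed-magnitude angle.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rot_deg)
    # Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R_noise = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    init = pose_gt.copy()
    init[:3, :3] = R_noise @ pose_gt[:3, :3]
    init[:3, 3] += rng.normal(scale=trans_sigma, size=3)
    return init
```

The perturbed pose then serves as the starting point that the NeRF-based optimization refines back toward the ground truth.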

@jike5
Owner

jike5 commented Sep 19, 2023

@LZL-CS Hello, I can answer your second question.
In the paper, there are two experiments related to pose optimization: one in the Experiment C section and one in the Experiment D section. The code we provide runs with the Experiment D settings by default, which is to 'just perturb ground truth poses', but of course you can set the parameter pose_regressor_input to run with the PlaceRecognition output as the initial value.
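A minimal sketch of the branching being described (all names below are illustrative stand-ins; the real code selects the mode via the pose_regressor_input parameter):

```python
# Sketch of the two initialization modes for pose optimization described
# above (Experiment C vs. Experiment D). Every name here is hypothetical.
def select_init_poses(gt_poses, regressed_poses, use_regressor, perturb_fn):
    """Return initial poses for the NeRF-based pose optimization."""
    if use_regressor:
        # Experiment C: start from the pose regressor's predictions.
        return regressed_poses
    # Experiment D (default): start from perturbed ground-truth poses.
    return [perturb_fn(p) for p in gt_poses]

# Toy usage with scalar stand-ins for poses:
gt = [1.0, 2.0]
regressed = [1.25, 1.75]
print(select_init_poses(gt, regressed, use_regressor=False,
                        perturb_fn=lambda p: p + 0.5))  # → [1.5, 2.5]
```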

@LZL-CS
Author

LZL-CS commented Sep 19, 2023

> @LZL-CS Hello, I can answer your second question. In the paper, there are two experiments related to pose optimization: one in the Experiment C section and one in the Experiment D section. The code we provide runs with the Experiment D settings by default, which is to 'just perturb ground truth poses', but of course you can set the parameter pose_regressor_input to run with the PlaceRecognition output as the initial value.

@jike5 Hi, thanks for your reply, I will check it now.

@Tao-11-chen
Collaborator

@LZL-CS The images generated by NeRF for data augmentation are random for the pose regressor, which makes the loss volatile. However, your result is a bit strange; I'll check the code later.

@LZL-CS
Author

LZL-CS commented Sep 21, 2023

> @LZL-CS The images generated by NeRF for data augmentation are random for the pose regressor, which makes the loss volatile. However, your result is a bit strange; I'll check the code later.

@Tao-11-chen Hi, thanks for your reply.

  1. Firstly, I visualized the rendered results from the trained mega_nerf model, and they look good:
[attached: rendered-result screenshots]
  2. Then, in "Pose Optimization", I set the --pose_regressor_input option to use the "Pose Regressor" module's output as the initial value. I re-trained the model, but the results look as strange as before:
[attached: result screenshots]

Maybe I'm missing some key points; I appreciate any help you can provide to resolve this issue!

@Tao-11-chen
Collaborator

@LZL-CS Sorry, it's a bug in our code; I'll update the code in the next few days.

@LZL-CS
Author

LZL-CS commented Oct 7, 2023

> @LZL-CS Sorry, it's a bug in our code; I'll update the code in the next few days.

Hi @Tao-11-chen, may I know when you can update this code? Thanks!

@Tao-11-chen
Collaborator

@LZL-CS
We sincerely apologize that our method is not as robust as we initially thought. We overlooked that it requires the regressor to provide more stable initial poses. After refactoring the entire codebase, the regressor's performance has declined. It appears to be a parameter issue, but the randomness of the NeRF-generated data augmentation inside the regressor makes the previous results hard to reproduce. It might help to replace the regressor with a more powerful one. Once again, we apologize for any inconvenience.
