Hello,
Thanks for your great work. I ran the train.py file and have the following questions:
Training uses pairs of images: the front-end network predicts the relative pose, and the back-end runs PVGO (based on pypose). That is fine for training on image pairs, but it looks like the system only ever operates on pairs of images, which makes it more like VO, and PVGO is not applied to the past nodes. Am I right?
For testing, should we use the same train.py code? How do we extend the code to a pose graph over all the past nodes?
Thanks
Hi, thank you for the questions. We refer to it as iSLAM to emphasize its front-end/back-end structure. The front-end component, TartanVO, is a VO network.
During training, the VO network processes each pair of adjacent images, while PVGO optimizes multiple frames together, constrained by their visual and inertial observations. After the back-end optimization, we backpropagate the remaining residuals to fine-tune the models. Therefore, only the frames in the current batch need to be included in PVGO.
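To make that flow concrete, here is a heavily simplified sketch in plain PyTorch. Everything in it (ToyVONet, toy_backend, the random data) is a placeholder standing in for the actual modules in this repository; only the loop structure mirrors what is described above.

```python
import torch
import torch.nn as nn

class ToyVONet(nn.Module):
    """Placeholder front-end: maps an image pair to a 6-DoF relative pose."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(2 * 3 * 32 * 32, 6)

    def forward(self, img1, img2):
        return self.head(torch.cat([img1.flatten(1), img2.flatten(1)], dim=1))

def toy_backend(vo_rel, imu_rel):
    """Placeholder back-end: residual between visual and (pseudo) inertial
    relative motions. The real PVGO solves a pose-velocity graph; what matters
    for training is that its residual stays differentiable w.r.t. the VO output."""
    return vo_rel - imu_rel

vo = ToyVONet()
optimizer = torch.optim.Adam(vo.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 32, 32)      # one training window of 8 frames
imu_rel = torch.randn(7, 6) * 0.01      # fake inertial relative motions

# Front-end: one prediction per adjacent image pair in the window.
vo_rel = torch.cat([vo(frames[i:i+1], frames[i+1:i+2]) for i in range(7)])

# Back-end over the whole window, then back-propagate the remaining residual
# to fine-tune the front-end (self-supervised, no ground-truth poses needed).
loss = toy_backend(vo_rel, imu_rel).pow(2).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```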
I indeed used train.py for testing before (with a different set of arguments). As this is a bit confusing, I wrote test.py and uploaded it to the dev branch. It contains the code for applying PVGO to all frames.
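For intuition about optimizing all past nodes at once, below is a minimal pose-graph sketch built on pypose's LieTensor and its Levenberg-Marquardt optimizer (pp.optim.LM). The graph here is just an odometry chain with random measurements, purely for illustration; it is not the PVGO formulation in test.py, which also involves velocity nodes and inertial constraints.

```python
import torch
import pypose as pp

class PoseGraph(torch.nn.Module):
    """Toy pose graph: every frame pose is an optimizable node."""
    def __init__(self, num_nodes):
        super().__init__()
        self.nodes = pp.Parameter(pp.identity_SE3(num_nodes))

    def forward(self, edges, measurements):
        # Residual between each measured relative pose (i -> j) and the
        # relative pose implied by the current node estimates.
        node_i = self.nodes[edges[..., 0]]
        node_j = self.nodes[edges[..., 1]]
        error = measurements.Inv() @ node_i.Inv() @ node_j
        return error.Log().tensor()

num_frames = 100
# Chain of odometry edges (frame i -> frame i+1); loop closures would add more rows.
edges = torch.stack([torch.arange(num_frames - 1),
                     torch.arange(1, num_frames)], dim=-1)
measurements = pp.randn_SE3(num_frames - 1, sigma=0.1)  # stand-in for VO predictions

graph = PoseGraph(num_frames)
optimizer = pp.optim.LM(graph)
for _ in range(10):
    loss = optimizer.step(input=(edges, measurements))
```

Extending this to all past frames at test time then just means growing the node set and the edge list as new frames arrive.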