Hi all,
When I use COLMAP to generate the transforms.json file for my images, I've noticed that it seems to drop around 50% of the images I give it.
I have looked into COLMAP and the colmap2nerf.py script, but I cannot figure out why transforms.json contains fewer images than the original set.
Could someone shed some light on the process COLMAP carries out to choose which images to keep?
I am aware that there is a lower limit on image sharpness, but I am not sure whether that is the only criterion.
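For context, the sharpness score that colmap2nerf.py records per frame is a variance-of-Laplacian metric (it uses OpenCV internally). Below is a minimal NumPy-only sketch of that metric so others can reproduce the score for their own images; the function name and the interpretation of "low score = blurry" are illustrative, not taken from the script itself:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian of a grayscale image.

    Low values indicate a blurry (low-contrast-edge) image; this is
    the same idea colmap2nerf.py computes via cv2.Laplacian(...).var().
    """
    # 3x3 Laplacian kernel.
    k = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=np.float64)
    h, w = gray.shape
    # Valid-mode convolution written out explicitly to avoid SciPy/OpenCV deps.
    out = np.zeros((h - 2, w - 2), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())
```

Note, though, that a low sharpness score is not the only way an image can disappear from transforms.json: images that COLMAP fails to register (too few feature matches to the rest of the set) are also absent from the reconstruction, so a sharpness check alone may not explain the full drop.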
Thank you in advance.