Would the unit cubes of two different models of the same object be the same if scales are the same and both datasets good enough? #757
-
I have been trying to figure out whether NeRF ends up computing the same unit cube and "center" when run on two different sets of images of the same object, as long as both datasets are adequately prepared, even though the specific angles & positions of the cameras with respect to the object may differ. In my own tests it certainly seems like NeRF computes (at least approximately) the same unit cube for the same object, but I don't know if this happened by chance or if NeRF actually behaves deterministically in this regard. Also, if the computed unit cubes are considerably different, is there already a way to transform one model into the other, i.e. calculate a transform matrix that maps the coordinates of one model (the cube, cameras, etc.) into the coordinate system of the second model?
Replies: 1 comment 1 reply
-
Hi there, this question would be better asked over at the COLMAP repository, assuming you are using the `colmap2nerf.py` script to determine camera poses. I could speculate about the details of the COLMAP pose reconstruction (my hunch is that the relative scale is the same -- assuming the focal length is the same across image sets -- and that the center point and rotation are arbitrary), but I can't tell you for certain.
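If the two coordinate frames do turn out to differ, one common way to relate them is to estimate a similarity transform (scale, rotation, translation) from corresponding 3D points in the two reconstructions -- for example, landmarks picked on the object in both models, or camera centers of images shared between the two sets. Below is a minimal sketch of the standard Umeyama/Procrustes estimation in NumPy; the helper name and the way correspondences are obtained are assumptions on my part, not anything instant-ngp or `colmap2nerf.py` provides.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Estimate s, R, t such that dst ≈ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. landmarks
    or camera centers of the same physical views in two reconstructions.
    (Hypothetical helper -- not part of instant-ngp.)
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    assert src.shape == dst.shape and src.shape[1] == 3

    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance between the centered point sets
    cov = dst_c.T @ src_c / src.shape[0]
    U, D, Vt = np.linalg.svd(cov)

    # Reflection correction so R is a proper rotation (det = +1)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

With the returned `s`, `R`, `t`, a point `p_A` in the first model's coordinates maps to `p_B = s * R @ p_A + t` in the second model's frame, and the same transform can be applied to the camera positions in the first model's `transforms.json` if you want to carry the cameras over as well.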