Metashape equirectangular issue #1634
I didn't have a dataset to test it, so this was just guessing from the doc. Can you share a sample dataset?
Hey Frédéric, sure, I will send you a DM on Discord with the dataset. Thanks!
OK! My ID on Discord is the same as on GitHub (f-dy).
I sent it yesterday, but it's probably gone into your message requests folder. Thanks!
I still don't see it, so maybe I don't know how to use Discord. Can you send me a link by email (frederic dot devernay at m4x dot org), or maybe post a link here to a public equirectangular dataset, if such a thing exists (I couldn't find one)? My guess is that there's a 90° or 180° rotation issue between nerfstudio and Metashape. It would be nice to have a freely available multi-view panoramic dataset for nerfstudio 😉
I just emailed it to you, thanks!
Yeah, I can't get this to work. I am able to convert the Metashape file, but it's almost as if it doesn't know it's a 360° image.
Bumping this to see if we can get an update.
I'm looking at this. I have to replicate the way it is done for COLMAP (ns-process-data creates multiple perspective views per equirectangular image). |
OK, here's the solution. TL;DR: nerfstudio doesn't support the equirectangular sensor, but knows how to make planar projections from these images. Here's my recipe, tell me if you can reproduce it. I'll submit a PR with documentation for that.
Thanks for looking into this. If I understand correctly, this wouldn't use the equirectangular alignment from Metashape at all, and would instead just use ns-process-data to slice the equirectangular images into 8 or 14 perspective views, and align those in Metashape with the regular "frame" camera type?
Nerfstudio does support equirectangular sensors: see `nerfstudio/cameras/cameras.py`, line 47 (at commit 7887b14).
How can we get this to work? Is there a command to let nerfstudio know it's EQUIRECTANGULAR when we are processing it?
The camera type can be specified in the json - https://docs.nerf.studio/en/latest/quickstart/data_conventions.html |
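For reference, here is a minimal transforms.json sketch following the nerfstudio data conventions linked above. The file name, resolution, and pose are made-up placeholders, and whether "EQUIRECTANGULAR" is accepted as a camera_model value depends on the nerfstudio version (see the CameraModel enum discussion below):

```json
{
  "camera_model": "EQUIRECTANGULAR",
  "w": 4096,
  "h": 2048,
  "frames": [
    {
      "file_path": "images/frame_00001.jpg",
      "transform_matrix": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]
      ]
    }
  ]
}
```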
Yes, but when we run ns-train it doesn't know the camera type.
Assuming you are using the nerfstudio dataset format,
Do you have any nerfstudio examples? I've attached an XML exported from Metashape using the spherical camera model below.
Attaching as a .txt file, as I can't upload XML here.
@tancik setting the camera_model is exactly what I did previously, here:
But unfortunately... Should we just add EQUIRECTANGULAR to the CameraModel enum?
I'm testing this. |
Yeah, the reason it was like that before is that the COLMAP scripts use a perspective model when you specify
OK, so I updated #1841 with the solution for Metashape, but it probably broke the COLMAP solution for equirectangular, so I converted it to a draft in the meantime.
@tancik what if I simply add |
OK, this works. COLMAP works again. Updated #1841 - ready to review.
Is the camera path what you expect? |
Yes, the visualization in the viewer is correct as are the images loaded. |
Hmm, I'm at a bit of a loss then. |
What version of Metashape are you using? If you are using 2.0 or above, you need to remove some lines from the XML, I believe lines 14-17, then process the XML.
Yes, I'm using Metashape 2.0. Wow! I'm only at 250 steps, but I can already see some shapes. Also, this should be fixed in the ns-process-data script.
gradeeterna actually figured it out, because all his stuff was for 1.8 and I was using 2.0, so we knew something was different.
Can you share the cameras.xml file and the edit you did? That's weird, because I wrote and tested the code with Metashape 2.0.
Hey, it turned out to not be related to Metashape versions, as I had the issue with both 1.8 and 2.0. The issue is that some spherical camera.xml files contain these "calibration" lines 14-17, where the "f" number messes things up. I'm not sure why some have this, and some don't. The sample data I sent you didn't have this.
For now, I'm just manually deleting these lines before running |
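The manual edit described above (deleting the spurious `<calibration>` lines from a spherical sensor before running ns-process-data) can be automated. This is just a sketch of that workaround, not part of nerfstudio; the element name `calibration` matches the Metashape cameras.xml format, but check your own file first:

```python
import xml.etree.ElementTree as ET

def strip_calibration(xml_in: str, xml_out: str) -> int:
    """Remove every <calibration> element from a Metashape cameras.xml.

    Mirrors the manual fix above: spherical sensors should not carry a
    calibration with an "f" value. Returns the number of elements removed.
    """
    tree = ET.parse(xml_in)
    removed = 0
    # Iterate over all elements so we can remove direct children in place.
    for parent in tree.getroot().iter():
        for calib in list(parent.findall("calibration")):
            parent.remove(calib)
            removed += 1
    tree.write(xml_out, encoding="utf-8", xml_declaration=True)
    return removed
```

Run it on a copy of cameras.xml, then point ns-process-data at the cleaned file.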
Just a follow-up on that equirectangular export from Metashape: ns-process-data doesn't work correctly. Images are parsed, but nothing (extrinsics, intrinsic camera model) is derived from the XML into the transforms.json file, which is empty.
Can you share cameras.xml and transforms.json? Spherical sensors don't have a calibration, and this is already handled here, so the error must be somewhere else. Note that I don't recommend building a NeRF from omni images unless your scene is far from the camera: those cameras do not have a single optical center, and this creates blurry radiance fields if the scene is too close.
Hi @devernay, please find the transforms.json attached. As a result of the command, it's an empty JSON. Please also find the XML generated by Metashape (here with version 2.0.2, so with the most recent format, not version 1.8 or earlier). You will notice there is no longer a calibration element in the XML. The source project comes from Metashape, as you mentioned.
But this should be handled either by https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/process_data/metashape_utils.py#L62 or, alternatively, when using Gaussian Splatting, by this script: https://github.com/agisoft-llc/metashape-scripts/blob/master/src/export_for_gaussian_splatting.py
Here is what the nerfstudio command that converts Metashape XML info into COLMAP format gives: the first part of the script (downscaling images by 2, 4, 8) is working. However, from my understanding, neither script (the one used in nerfstudio nor the one used to prepare datasets for Gaussian Splatting) handles the equirectangular (spherical) camera model, so both methods generate intermediate cube-map models to get pinhole images. This is time-consuming and also leads to more images to train on (x6 with the cube-map model). So when one wants to work with a 360° camera such as the INSTA360 RS ONE, there are 2 solutions to use such imagery. B/ use equirectangular source imagery from the 360° camera and use the generic nerfstudio command that converts the equirectangular model into a cube-map model, then launch COLMAP for the SfM preprocessing (initial alignment). This is longer to execute (data preparation, 6x intermediate images), and because of that last step, SfM takes a while vs. Metashape: `ns-process-data images --camera-type equirectangular --images-per-equirect {8, or 14} --crop-factor {top bottom left right} --data {data directory} --output-dir {output directory}`
OK, this happens because all your cameras are in a group called "insta360", and groups are not handled. Just remove the line
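The group workaround above (editing the `<group>` wrapper out of cameras.xml by hand) could be sketched as a small script. This is a hypothetical helper, not part of nerfstudio; it assumes the Metashape layout where `<camera>` elements sit inside `<group>` elements under `<cameras>`:

```python
import xml.etree.ElementTree as ET

def flatten_groups(xml_in: str, xml_out: str) -> int:
    """Move <camera> elements out of <group> wrappers in a Metashape
    cameras.xml, so every camera is a direct child of <cameras>.

    Returns the number of groups flattened.
    """
    tree = ET.parse(xml_in)
    flattened = 0
    for cameras in tree.getroot().iter("cameras"):
        for group in list(cameras.findall("group")):
            for camera in list(group):
                cameras.append(camera)  # re-attach directly under <cameras>
            cameras.remove(group)       # drop the now-redundant group wrapper
            flattened += 1
    tree.write(xml_out, encoding="utf-8", xml_declaration=True)
    return flattened
```

Note this discards the group label itself, which ns-process-data does not use anyway.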
I will test it.
@f-dy
And if I apply your solution, it doesn't work either. The XML is available here
It works for me. My guess is that the images are not available in the images directory you're pointing to? Verbose execution will tell you more. In the meantime, I opened PR #2626.
If you're trying to calibrate the raw fisheye images, you should check out: |
I'm not sure I understand why we should absolutely use the camera-rig method. Is it because the camera group concept from Metashape is not well handled by the ns-process-data metashape method?
still.... |
The camera rig means that the pose of each slave camera within the rig is fixed with respect to the master camera, so that if you have a rig with 6 cameras and take pictures from n rig positions, there are only 5+n sets of extrinsics to compute instead of 6n. Please execute ns-process-data with the
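The pose-count arithmetic above can be sketched as a tiny helper (the function name is made up, purely for illustration):

```python
def extrinsics_to_solve(num_cameras: int, num_positions: int, rig: bool) -> int:
    """Number of camera poses SfM must estimate.

    Without a rig, every shot is an independent pose: num_cameras * num_positions.
    With a rig, the slave-to-master offsets are shared across all positions:
    (num_cameras - 1) fixed offsets plus one rig pose per position.
    """
    if rig:
        return (num_cameras - 1) + num_positions
    return num_cameras * num_positions

# A 6-camera rig shot from 100 positions:
# free cameras -> 600 poses, rig -> 105 poses
```

Fewer unknowns means a better-constrained, faster bundle adjustment.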
Which means a faster SfM processing step with COLMAP? What is the benefit for the NeRF of having fewer extrinsics to compute?
You mean Metashape?
in metashape_utils.py, after the line
add:
You can also print image_filename_map to see what it contains. My guess is that it can't find the images.
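A hypothetical sketch of the kind of debug check being suggested here; the name `image_filename_map` comes from metashape_utils.py, but this exact helper is not in nerfstudio, and the surrounding variable names are assumptions:

```python
def report_missing_images(camera_labels, image_filename_map):
    """Print a line for each camera label that has no matching image file.

    `image_filename_map` is assumed to map camera labels (from cameras.xml)
    to image paths on disk, as in nerfstudio's metashape_utils.py.
    Returns the list of unmatched labels, so an empty list means every
    camera found its image.
    """
    missing = [label for label in camera_labels if label not in image_filename_map]
    for label in missing:
        print(f"Missing image for camera: {label}")
    return missing
```

An empty transforms.json usually means this list would contain every camera, i.e. the image directory does not match the labels in the XML.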
This is true. In outdoor conditions you can also geotag each camera (with a GNSS device); that was our scenario here. SfM is also sped up when you give pre-positions during the initial alignment. Here again, you have to take into account the lever arm between the GNSS antenna and the optical center or entrance pupil.
You don't have to know the exact geometry: Metashape optimizes the intra-rig camera poses too |
BTW @antoinebio, I noticed that in your Metashape project you used the same sensor for the "AV" and "AR" groups, but from the photos above, I guess these are actually different sensors. You should have as many sensors as there are physical cameras in your Metashape project.
Correct. I will also try your strategy.
Without the group label in the XML (as suggested), it copies the source images to the destination folder, but at the end of the process... transforms.json is still empty...
@f-dy Hello, I have the same issue, and by adding this, I get a "Missing image" line for every camera. I don't get how I should fix it, though. For more info regarding my issue, see #3573.
Hey,
I was just testing the updated Metashape support with equirectangular images, but it doesn't seem to be working correctly.
#1605 @f-dy
Here is the point cloud in Metashape, camera type set to spherical:
And the NeRF from the same viewpoint after around 5K steps (I also trained for 30K but looked similar):