-
Thanks for this great work, and congratulations! I'm trying to train NeRF models on scenes with dynamic objects, and I noticed the flag. So I tried masking out the dynamic objects with an alpha value of 0, but the corresponding area in the rendered scene is sometimes black and sometimes gray (partly transparent). Am I doing this right? And why not simply skip the background area during training instead of training it with a random color? Wouldn't that be better, since it saves computation?
-
Why random background colors?

Hi there! `alpha == 0` supervises (enforces) transparency, which conflicts with your goal of simply ignoring regions of some of the training images.

Using random colors rather than a fixed color improves this process because the model can't just predict a solid color. It is forced to learn transparency in order to let the randomized colors "shine through".

How to mask away training data?

This codebase does allow specifying masks for ignoring training pixels, but this is distinct from marking background regions, i.e. it serves a different goal. The format is as follows: for any training image xyz.* with moving objects, you can provide a mask by including a file dyna… (You don't need to include blank masks for training images that don't require any masking.)

I will add this to the FAQ section of the readme.
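To make the mechanism concrete, here is a minimal NumPy sketch of random-background supervision. The function names are hypothetical and this is a simplification of the idea described above, not the codebase's actual implementation:

```python
import numpy as np

def composite_over_background(rgb, alpha, background):
    # Standard alpha compositing of an RGBA sample over a background color.
    return rgb * alpha + background * (1.0 - alpha)

def random_bg_loss(pred_rgb, pred_alpha, target_rgb, target_alpha, rng):
    # Draw a fresh random background color for every pixel, every step.
    bg = rng.random(pred_rgb.shape)
    # Composite BOTH the prediction and the ground truth over the same
    # random background. Where target_alpha == 0 the target becomes the
    # background itself, so the model can only match it by also predicting
    # alpha == 0 (full transparency) there -- it cannot cheat with a
    # single solid color, because the background keeps changing.
    pred = composite_over_background(pred_rgb, pred_alpha, bg)
    target = composite_over_background(target_rgb, target_alpha, bg)
    return float(np.mean((pred - target) ** 2))
```

Note that when both alphas are 0, the loss is exactly zero regardless of the predicted RGB: transparent pixels are supervised purely through alpha, which is the behavior the random backgrounds enforce.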
-
@Tom94 Why can't masking out the dynamic objects with 0 alpha make the objects transparent? Why does it only work for the background? I'm asking because I found that if I simply mask away the training pixels, the rendered area behind the masked object can have artifacts, especially when the masked object is very close to the camera. I guess the reason is that the corresponding feature grid cells cannot be optimized?
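A toy illustration of why masked-away pixels can leave artifacts: masking simply zeroes those pixels' contribution to the loss, so any density or feature-grid cells observed only through masked rays receive no gradient and keep their arbitrary initial values. This is a sketch with assumed names, not the repo's code:

```python
import numpy as np

def masked_mse(pred, target, mask):
    # mask == 1 marks pixels to ignore: their rays contribute nothing to
    # the loss, so no gradient flows to whatever geometry those rays pass
    # through. Scene regions seen *only* by masked rays are therefore
    # never optimized and can render as arbitrary-looking artifacts --
    # which is worst when a near-camera object occludes a large volume.
    keep = mask == 0
    if not np.any(keep):
        return 0.0
    return float(np.mean((pred[keep] - target[keep]) ** 2))
```

By contrast, the alpha-0 supervision discussed above does send a gradient through those rays (pushing density toward zero), which is why it behaves differently from simply dropping the pixels.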