Describe the bug
I tested SHEncoding with the 'tcnn' and 'torch' implementations, and they produce different results.
To Reproduce
Run the following Python script:

```python
import torch

from nerfstudio.field_components import encodings


def test_tensor_sh_encoder():
    """Test Spherical Harmonic encoder"""
    levels = 3
    out_dim = levels**2
    encoder_torch = encodings.SHEncoding(levels=levels, implementation="torch")
    encoder_tcnn = encodings.SHEncoding(levels=levels, implementation="tcnn")
    in_tensor = torch.ones((1, 3)).cuda() * 2
    encoded_tcnn = encoder_tcnn(in_tensor)
    encoded_torch = encoder_torch(in_tensor)
    print(encoded_tcnn.float())
    print(encoded_torch)
```
Expected behavior
You will get the following output:
```
tensor([[ 2.8198e-01, -1.4658e+00,  1.4658e+00, -1.4658e+00,  9.8359e+00,
         -9.8359e+00,  8.2031e+00, -9.8359e+00,  5.9605e-08]], device='cuda:0',
       grad_fn=<ToCopyBackward0>)
tensor([[0.2821, 0.9772, 0.9772, 0.9772, 4.3702, 4.3702, 3.4693, 4.3702,
         0.0000]], device='cuda:0')
```
Additional context
I think the torch implementation doesn't align with the description in Appendix A2, "Polynomial Forms of SH Basis".
The torch implementation is (https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/utils/math.py#L57):

```python
components[..., 1] = 0.4886025119029199 * y
components[..., 2] = 0.4886025119029199 * z
components[..., 3] = 0.4886025119029199 * x
```
but the tcnn implementation is (https://github.com/NVlabs/tiny-cuda-nn/blob/master/include/tiny-cuda-nn/common_device.h#L348):

```cpp
data_out(1) = (T)(-0.48860251190291987f*y); // -sqrt(3)*y/(2*sqrt(pi))
data_out(2) = (T)(0.48860251190291987f*z);  // sqrt(3)*z/(2*sqrt(pi))
data_out(3) = (T)(-0.48860251190291987f*x); // -sqrt(3)*x/(2*sqrt(pi))
```
The tcnn implementation is aligned with the description in Appendix A2, "Polynomial Forms of SH Basis": `components[..., 1]` and `components[..., 3]` should be negative.
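To make the sign discrepancy concrete, here is a minimal sketch of the degree-0/degree-1 basis with the signs the tcnn kernel uses. This is a hypothetical standalone helper for illustration, not the actual nerfstudio code:

```python
import torch


def sh_degree1(directions: torch.Tensor) -> torch.Tensor:
    """Degree-0 and degree-1 real SH basis, using the signs from the
    tcnn kernel (hypothetical helper, not nerfstudio's implementation)."""
    x = directions[..., 0]
    y = directions[..., 1]
    z = directions[..., 2]
    components = torch.empty((*directions.shape[:-1], 4))
    components[..., 0] = 0.28209479177387814       # 1/(2*sqrt(pi))
    components[..., 1] = -0.4886025119029199 * y   # -sqrt(3)*y/(2*sqrt(pi))
    components[..., 2] = 0.4886025119029199 * z    #  sqrt(3)*z/(2*sqrt(pi))
    components[..., 3] = -0.4886025119029199 * x   # -sqrt(3)*x/(2*sqrt(pi))
    return components
```

With this sign convention, feeding an all-positive direction yields negative values at indices 1 and 3, matching the signs in the tcnn output above.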
Another difference is that tcnn preprocesses the directions input.
The torch implementation:

```python
x = directions[..., 0]
y = directions[..., 1]
z = directions[..., 2]
```
but the tcnn implementation:

```cpp
x = data_in(0, i) * 2.f - 1.f,
y = data_in(1, i) * 2.f - 1.f,
z = data_in(2, i) * 2.f - 1.f,
```
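Since the tcnn kernel computes `x = in * 2 - 1` internally, it effectively expects inputs in [0, 1] rather than raw unit directions in [-1, 1]. A minimal sketch of the remapping that would compensate for this (hypothetical helper name, for illustration only):

```python
import torch


def remap_for_tcnn(directions: torch.Tensor) -> torch.Tensor:
    """Map direction components from [-1, 1] into [0, 1], so that the
    tcnn kernel's internal x = in * 2 - 1 recovers the original values.
    Hypothetical helper; check how nerfstudio feeds directions to tcnn."""
    return (directions + 1.0) / 2.0
```

The round trip `remap_for_tcnn(d) * 2 - 1` returns the original `d`, so comparing the two encoders fairly would require applying this remap on one side (or undoing it on the other).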
The torch version is known to be wrong. I was planning to send a fix but got directed to focus on other things.