The input image is loaded from KITTI, and the intrinsics are a 3x3 matrix. Why is there a tensor size mismatch?
```python
from PIL import Image
import numpy as np
import torch

# Load the KITTI image and convert it to a normalized [1, 3, H, W] float tensor
img = Image.open(img_path).convert('RGB')
rgb = torch.tensor(np.asarray(img), device=self.device).permute(2, 0, 1).unsqueeze(0) / 255.

# Batch the 3x3 intrinsics to [1, 3, 3] and run the depth model
depth = self.depth_model(rgb, self.intrinsics.unsqueeze(0))
```
```
Traceback (most recent call last):
  File "depth_predictor.py", line 50, in predict_depth
    depth = self.depth_model(rgb, self.intrinsics.unsqueeze(0))
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "anaconda3/envs/cfgs/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".cache/torch/hub/TRI-ML_vidar_main/vidar/arch/networks/perceiver/ZeroDepthNet.py", line 765, in forward
    encoded_data = self.encode(
                   ^^^^^^^^^^^^
  File ".cache/torch/hub/TRI-ML_vidar_main/vidar/arch/networks/perceiver/ZeroDepthNet.py", line 465, in encode
    embeddings = [self.encode_embeddings(data, embeddings, scene, idx=i)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".cache/torch/hub/TRI-ML_vidar_main/vidar/arch/networks/perceiver/ZeroDepthNet.py", line 465, in <listcomp>
    embeddings = [self.encode_embeddings(data, embeddings, scene, idx=i)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".cache/torch/hub/TRI-ML_vidar_main/vidar/arch/networks/perceiver/ZeroDepthNet.py", line 498, in encode_embeddings
    embeddings = self.merge_embeddings(embeddings, sources)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".cache/torch/hub/TRI-ML_vidar_main/vidar/arch/networks/perceiver/ZeroDepthNet.py", line 305, in merge_embeddings
    cat = torch.cat(cat, -1)
          ^^^^^^^^^^^^^^^^^^
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 28830 but got size 29234 for tensor number 1 in the list.
```
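Not a confirmed answer, but a possible lead from the numbers themselves: 28830 = 310 × 93 and 29234 = 311 × 94, which is consistent with a 1242 × 375 KITTI image being downsampled by a factor of 4 (1242 / 4 = 310.5, 375 / 4 = 93.75), with one embedding branch rounding the grid size down and another rounding it up. If that is the cause, resizing the image so both dimensions are divisible by the downsampling factor, and rescaling the intrinsics to match, should make the two token counts agree. Below is a minimal sketch; the factor of 4 and the helper name `resize_to_multiple` are my assumptions, not taken from the vidar code:

```python
import torch
import torch.nn.functional as F

def resize_to_multiple(rgb: torch.Tensor, intrinsics: torch.Tensor, multiple: int = 4):
    """Resize a [1, 3, H, W] image so H and W are divisible by `multiple`,
    and rescale the [1, 3, 3] intrinsics accordingly.

    NOTE: `multiple=4` is an assumption based on the factorization of the
    mismatched token counts above; it is not confirmed from ZeroDepthNet.
    """
    _, _, h, w = rgb.shape
    new_h = (h // multiple) * multiple
    new_w = (w // multiple) * multiple
    if (new_h, new_w) == (h, w):
        return rgb, intrinsics
    resized = F.interpolate(rgb, size=(new_h, new_w), mode='bilinear', align_corners=False)
    k = intrinsics.clone()
    k[:, 0] *= new_w / w  # first row holds fx and cx, scale by the width ratio
    k[:, 1] *= new_h / h  # second row holds fy and cy, scale by the height ratio
    return resized, k
```

With this helper, the call site would become `rgb, k = resize_to_multiple(rgb, self.intrinsics.unsqueeze(0))` followed by `depth = self.depth_model(rgb, k)`. Cropping instead of interpolating would also work (then only cx and cy change), if interpolation artifacts are a concern.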