Failed to compile the inference code in gapartnet and some issues about demo.ipynb #19
Comments
Hi Mingfei, thanks for your interest and your questions. About the first issue related to 'expand_csr': could you provide more details about your compilation, e.g. environment, package versions, and any modifications you made? As for the second issue, it seems you just need to split the processed data into separate train and test splits.
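A minimal sketch of what such a split could look like, assuming the processed samples are stored as individual .pth files in one flat directory; the paths, the *.pth pattern, and the 80/20 ratio below are illustrative assumptions, not the official GAPartNet split:

```python
# Hypothetical helper: divide processed samples into train/test folders.
# Source/destination paths, the *.pth pattern, and the ratio are placeholders.
import random
import shutil
from pathlib import Path

def split_processed_data(src_dir: str, dst_dir: str,
                         train_ratio: float = 0.8, seed: int = 0) -> None:
    files = sorted(Path(src_dir).glob("*.pth"))           # processed samples
    random.Random(seed).shuffle(files)                    # reproducible shuffle
    n_train = int(len(files) * train_ratio)
    for split, subset in (("train", files[:n_train]), ("test", files[n_train:])):
        out = Path(dst_dir) / split
        out.mkdir(parents=True, exist_ok=True)
        for f in subset:
            shutil.copy2(f, out / f.name)                 # keep originals intact

# e.g. split_processed_data("processed_akb48", "gapartnet_data")  # placeholder names
```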
Thank you for your kind reply!

Package    Version    Editable project location
absl-py    2.1.0

I think the problem is in epic_ops, which is installed from https://github.com/geng-haoran/epic_ops as instructed in the GAPartNet README.md. In epic_ops/epic_ops/expand.py I get
AttributeError: '_OpNamespace' 'epic_ops' object has no attribute 'expand_csr'
and a similar problem occurs in epic_ops/epic_ops/reduce.py:
AttributeError: '_OpNamespace' 'epic_ops' object has no attribute 'segmented_maxpool' / 'segmented_reduce'
I guess these functions were overwritten by the authors, which would be unrelated to the PyTorch version. I am looking forward to your reply. Best Regards
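For reference, one way to narrow this down is to check whether any of epic_ops' custom ops are registered with torch at all, rather than one specific op being missing. A rough diagnostic sketch; it only assumes the op names from the tracebacks above, and whether importing epic_ops loads the compiled library depends on how the package was built:

```python
# Check whether epic_ops' custom ops are visible to torch.ops.
# Importing epic_ops may or may not load the compiled extension,
# depending on how it was built/installed.
import torch
import epic_ops  # noqa: F401

print("torch:", torch.__version__, "CUDA:", torch.version.cuda)

for name in ("expand_csr", "segmented_maxpool", "segmented_reduce"):
    try:
        getattr(torch.ops.epic_ops, name)
        print(f"{name}: registered")
    except (AttributeError, RuntimeError):
        print(f"{name}: NOT registered")

# If none are registered, the C++/CUDA extension was probably never built for
# the current torch/CUDA combination; torch.ops.load_library("<path to .so>")
# can be used to load an already-built library explicitly.
```

If all three ops report as not registered, the problem is most likely the extension build itself rather than anything in expand.py or reduce.py.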
Dear Haoran,
I am reproducing GAPartNet. After a long run of data rendering and processing, I obtained the full GAPartNet dataset.
Then I tried running the inference code with "sh gapartnet/train.sh", and got the following error:
Traceback (most recent call last):
File "/home/junhuan/anaconda3/envs/gapartnet/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/home/junhuan/anaconda3/envs/gapartnet/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/junhuan/anaconda3/envs/gapartnet/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 51, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/shimingfei/GAPartNet/gapartnet/dataset/gapartnet.py", line 86, in getitem
file = apply_voxelization(file, voxel_size=self.voxel_size)
File "/home/shimingfei/GAPartNet/gapartnet/dataset/gapartnet.py", line 193, in apply_voxelization
voxel_features, voxel_coords, _, pc_voxel_id = voxelize(
File "/home/shimingfei/GAPartNet/epic_ops/epic_ops/voxelize.py", line 75, in voxelize
batch_indices, _ = expand_csr(voxel_batch_splits, voxel_coords.shape[0])
File "/home/junhuan/anaconda3/envs/gapartnet/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/shimingfei/GAPartNet/epic_ops/epic_ops/expand.py", line 13, in expand_csr
return torch.ops.epic_ops.expand_csr(
File "/home/junhuan/anaconda3/envs/gapartnet/lib/python3.8/site-packages/torch/_ops.py", line 569, in getattr
raise AttributeError(
AttributeError: '_OpNamespace' 'epic_ops' object has no attribute 'expand_csr'
It seems there may be a PyTorch version mismatch in epic_ops; I installed torch==2.0.0 with CUDA 11.8.
I also tried a different version, torch==1.11.0, but that does not work either.
The second problem is about the data structure. After data processing, the resulting directory structure is
akb48
but in practice, the required data structure is
data_test (just a name)
I wonder if there is corresponding code or logic for this conversion. Thanks.
The last question is about demo.ipynb. I can run the code, but I wonder whether it is related to the training/inference code "gapartnet/train.py", or whether it only concerns data processing.
I am looking forward to your reply.
Best Regards