the code for dense object generation #2

Open · KaiLong1 opened this issue Jan 15, 2023 · 3 comments

Comments

@KaiLong1

Hello, I am very interested in your work Sparse2Dense, especially the idea of generating dense point clouds. I want to learn more about the whole dense point cloud generation process. Will you release the code for dense object generation?

@stevewongv
Owner

Hi, thanks for your interest.

I don’t have time to clean the code, but it is not hard to implement by following the paper. I only use a few APIs from Det3D, NumPy, and Open3D (for denoising), so it should be easy to reproduce. Note that the mirror step is implemented in the dataloader, and the pose-transformation code is also in the dataloader's preprocessing part:

gt_point[:, :3] = box_np_ops.rotation_points_single_angle(gt_point[:, :3], (np.pi / 2 + box[-1]), axis=2)
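
For anyone reproducing this before the code is released, a minimal sketch of those steps might look like the following. The helper densify_object, its arguments, and the mirroring axis are assumptions based on my reading of the reply and the paper; only the rotation call above (Det3D's box_np_ops), NumPy, and Open3D's statistical outlier removal are taken as given.

import numpy as np
import open3d as o3d
from det3d.core.bbox import box_np_ops  # Det3D

def densify_object(per_frame_points, per_frame_boxes):
    """Aggregate one object's points over frames, mirror, and denoise.

    per_frame_points: list of (N_i, 3) arrays, the points inside the object's
        GT box in each frame (LiDAR coordinates).
    per_frame_boxes: list of the matching GT boxes; box[:3] is the center and
        box[-1] the yaw, as in Det3D.
    """
    canonical = []
    for gt_point, box in zip(per_frame_points, per_frame_boxes):
        pts = gt_point[:, :3].copy()
        # Shift to the box center, then undo the yaw with the rotation call
        # quoted above, so every frame lands in the same object-centric frame.
        pts -= box[:3]
        pts = box_np_ops.rotation_points_single_angle(
            pts, (np.pi / 2 + box[-1]), axis=2
        )
        canonical.append(pts)
    dense = np.concatenate(canonical, axis=0)

    # Mirror step: reflect across the object's symmetry plane and keep both
    # halves (which coordinate is the symmetry axis depends on the canonical
    # frame; axis 1 is an assumption here).
    mirrored = dense.copy()
    mirrored[:, 1] *= -1.0
    dense = np.concatenate([dense, mirrored], axis=0)

    # Denoise with Open3D's statistical outlier removal; the parameters are
    # placeholders and would need tuning.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(dense)
    _, keep = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return dense[np.asarray(keep)]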

@QingXIA233

Hi, does this mean the well-organized code for dense object generation won't be released in the coming weeks? If so, I wonder whether you could provide some key scripts so that we can complete the whole generation ourselves with less effort. This might save you the time of cleaning the code, while also clearing up potential confusion around this issue.

Apart from your Sparse2Dense method, I also tried to perform dense object generation in a more direct way: I use the WOD 5-sweep data as input and, for these 5 frames, fuse the 4 past frames into the current frame (using the ego-pose information to build the rotation and translation matrices for the pose transformation). My results when training on the full WOD data:
[results image]
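
For reference, a minimal sketch of such a multi-sweep fusion, assuming each frame provides a 4x4 ego-to-global pose matrix as in the Waymo Open Dataset; the helper fuse_sweeps and its arguments are illustrative:

import numpy as np

def fuse_sweeps(current_points, current_pose, past_sweeps):
    """Transform past sweeps into the current frame and concatenate them.

    current_points: (N, 3) points in the current frame's coordinates.
    current_pose:   (4, 4) ego-to-global transform of the current frame.
    past_sweeps:    list of (points, pose) pairs for the previous frames.
    """
    fused = [current_points[:, :3]]
    global_to_current = np.linalg.inv(current_pose)
    for pts, pose in past_sweeps:
        # past ego frame -> global frame -> current ego frame
        transform = global_to_current @ pose
        homo = np.hstack([pts[:, :3], np.ones((pts.shape[0], 1))])
        fused.append((homo @ transform.T)[:, :3])
    return np.concatenate(fused, axis=0)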

The reason I ask for your generation code is that I would like to run experiments on the full WOD dataset with Sparse2Dense, to compare how much it improves the original CenterPoint-Voxel model. Thanks in advance.

@shimingze

Hello, I see that you have successfully run the Sparse2Dense code. I would like to know which subset of the Waymo data the author used for the experiments, as the entire Waymo dataset is too large. I have the data from the 'gt' folder provided by the author, but I am unsure which subset of the Waymo data it corresponds to. Could you please let me know?
