Qiankun Gao1, 2, Yanmin Wu1, Chengxiang Wen1, Jiarui Meng1, Luyang Tang1, 2, 3,
Jie Chen1, 2 ✉️, Ronggang Wang1, 2, 3, Jian Zhang1, 3 ✉️
1School of Electronic and Computer Engineering, Peking University
2Peng Cheng Laboratory
3Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology,
Peking University Shenzhen Graduate School
[arXiv]
- [2024/12/04] Preprint available on arXiv!
To facilitate comparisons, we showcase the ground truth (left), the rendered results of our RelayGS (middle), and the rendered results of the prior state-of-the-art ST-GS (right) for 4 test views in the video below. Each view lasts 10 seconds, resulting in a total duration of approximately 40 seconds.
VRU_GZ_GT_RelayGS_ST-GS.mp4
It is worth noting that the ST-GS method requires the sparse point clouds of all 250 frames as input to produce its reconstruction. In contrast, our RelayGS method uses only the point cloud of the initial frame, yet achieves significantly better reconstruction quality, demonstrating both its efficiency and its superior performance.
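To make the difference in initialization inputs concrete, the sketch below contrasts the two setups. It is only an illustration with a hypothetical loader and placeholder file names, not the actual RelayGS or ST-GS code:

```python
import numpy as np

def load_sparse_points(path: str) -> np.ndarray:
    """Hypothetical loader returning an (N, 3) array of xyz points.
    A real pipeline would parse the per-frame COLMAP/PLY output here."""
    return np.loadtxt(path)

# ST-GS-style input (as described above): the sparse point clouds of
# all 250 frames are merged to form the initial point set.
st_gs_init = np.concatenate(
    [load_sparse_points(f"sparse/frame_{i:04d}.txt") for i in range(250)],
    axis=0,
)

# RelayGS-style input (as described above): only the first frame's
# point cloud is used for initialization.
relay_gs_init = load_sparse_points("sparse/frame_0000.txt")
```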
Additionally, we have uploaded three separate videos to Google Drive for a more detailed examination:
- VRU_GZ_GT.mp4: The ground truth video.
- VRU_GZ_RelayGS_PSNR-28.06.mp4: The video rendered by our proposed RelayGS method.
- VRU_GZ_ST-GS_PSNR-27.32.mp4: The video rendered by the prior state-of-the-art ST-GS method, initialized using the sparse point clouds of all 250 frames.
- The upcoming code release will be based on this framework: https://github.com/Awesome3DGS/LibGS