CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images

Neural radiance fields (NeRFs) have received significant attention due to their high-quality novel-view rendering ability, prompting research to address various real-world cases. One critical challenge is camera motion blur caused by camera movement during the exposure time, which prevents accurate 3D scene reconstruction. In this study, we propose continuous rigid motion-aware Gaussian splatting (CRiM-GS) to reconstruct an accurate 3D scene from blurry images at real-time rendering speed. Considering the actual camera motion blurring process, which consists of complex motion patterns, we predict the continuous movement of the camera based on neural ordinary differential equations (ODEs). Specifically, we leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of objects. Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems by providing a higher degree of freedom. By revisiting fundamental camera theory and employing advanced neural network training techniques, we achieve accurate modeling of continuous camera trajectories. We conduct extensive experiments, demonstrating state-of-the-art performance both quantitatively and qualitatively on benchmark datasets.
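
The two mechanisms the abstract names — a neural ODE that predicts the camera's continuous motion as rigid-body (SE(3)) transforms, and a blurry observation modeled as the aggregation of sharp renders along that trajectory during exposure — can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the MLP architecture, the fixed-step Euler integrator (standing in for a proper neural-ODE solver), the uniform averaging of samples, and the `render` placeholder (standing in for a differentiable 3D Gaussian splatting rasterizer) are all illustrative assumptions, and the deformable SE(3) refinement mentioned in the abstract is omitted.

```python
# Minimal sketch (not the authors' code): a neural ODE over an se(3) twist models
# the camera's continuous motion during exposure; each integrated twist is mapped
# to a rigid-body pose with the SE(3) exponential map; the motion-blurred image is
# approximated as the average of sharp renders sampled along that trajectory.
import torch
import torch.nn as nn


def hat(w: torch.Tensor) -> torch.Tensor:
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    zero = torch.zeros(())
    return torch.stack([
        torch.stack([zero, -w[2], w[1]]),
        torch.stack([w[2], zero, -w[0]]),
        torch.stack([-w[1], w[0], zero]),
    ])


def se3_exp(xi: torch.Tensor) -> torch.Tensor:
    """Exponential map: 6-dim twist xi = (omega, v) -> 4x4 rigid-body transform."""
    omega, v = xi[:3], xi[3:]
    theta = torch.sqrt(omega.pow(2).sum() + 1e-12)   # safe norm (finite gradient at 0)
    W = hat(omega)
    I = torch.eye(3)
    A = torch.sin(theta) / theta
    B = (1 - torch.cos(theta)) / theta ** 2
    C = (theta - torch.sin(theta)) / theta ** 3
    R = I + A * W + B * (W @ W)                      # Rodrigues' rotation formula
    V = I + B * W + C * (W @ W)                      # left Jacobian of SO(3)
    top = torch.cat([R, (V @ v).unsqueeze(-1)], dim=1)
    bottom = torch.tensor([[0.0, 0.0, 0.0, 1.0]])
    return torch.cat([top, bottom], dim=0)


class TwistODE(nn.Module):
    """Learned velocity field d(xi)/dt = f_theta(t, xi) for the camera twist."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(7, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, t: torch.Tensor, xi: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([t.reshape(1), xi]))


def integrate_twist(ode: TwistODE, t_end: float, steps: int = 16) -> torch.Tensor:
    """Integrate xi(0) = 0 forward to t_end with fixed-step Euler
    (a stand-in for the adaptive solvers used by neural-ODE libraries)."""
    xi, t, dt = torch.zeros(6), torch.zeros(()), t_end / steps
    for _ in range(steps):
        xi = xi + dt * ode(t, xi)
        t = t + dt
    return xi


def render(pose: torch.Tensor) -> torch.Tensor:
    """Placeholder for a differentiable renderer (e.g. a 3D Gaussian splatting
    rasterizer); it only fabricates a tiny pose-dependent 'image' so the sketch runs."""
    return pose[:3, :].reshape(-1).tanh().repeat(8).reshape(8, 12)


def blurred_render(ode: TwistODE, exposure: float = 1.0, n_samples: int = 8) -> torch.Tensor:
    """Blur formation: average sharp renders along the continuous camera trajectory."""
    frames = [render(se3_exp(integrate_twist(ode, float(t))))
              for t in torch.linspace(0.0, exposure, n_samples)]
    return torch.stack(frames).mean(dim=0)


if __name__ == "__main__":
    torch.manual_seed(0)
    ode = TwistODE()
    target_blur = torch.rand(8, 12)        # stand-in for an observed blurry image
    opt = torch.optim.Adam(ode.parameters(), lr=1e-3)
    for _ in range(50):
        loss = nn.functional.mse_loss(blurred_render(ode), target_blur)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("reconstruction loss:", loss.item())
```

Fitting the trajectory this way keeps the per-timestamp renders sharp, so the same learned scene supports real-time sharp rendering once training against the blurry observations converges; the loss and averaging scheme here are simplified placeholders for the paper's actual objective.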
