EventSplat: 3D Gaussian Splatting from Moving Event Cameras for Real-time Rendering

We introduce a method for using event camera data in novel view synthesis via Gaussian Splatting. Event cameras offer exceptional temporal resolution and a high dynamic range. Leveraging these capabilities allows us to effectively address the novel view synthesis challenge in the presence of fast camera motion. To initialize the optimization, our approach uses prior knowledge encoded in an event-to-video model. We also use spline interpolation to obtain high-quality poses along the event camera trajectory. This enhances reconstruction quality from fast-moving cameras while overcoming the computational limitations traditionally associated with event-based Neural Radiance Field (NeRF) methods. Our experimental evaluation demonstrates that our results achieve higher visual fidelity and better performance than existing event-based NeRF approaches while being an order of magnitude faster to render.
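The abstract does not specify how the spline interpolation of camera poses is parameterized. The minimal sketch below illustrates one plausible reading: given a sparse set of keyframe poses (e.g., from structure-from-motion on the event-to-video frames), interpolate a dense pose for any event timestamp using SciPy's `Slerp` for rotations and a cubic spline for translations. All names, timestamps, and pose values here are hypothetical placeholders, not the paper's actual pipeline.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical keyframe poses along the event camera trajectory.
key_times = np.array([0.00, 0.05, 0.10, 0.15])  # seconds
key_rot = Rotation.from_euler(
    "xyz",
    [[0, 0, 0], [5, 2, 0], [10, 4, 1], [15, 6, 2]],
    degrees=True,
)
key_trans = np.array([
    [0.00, 0.00, 0.00],
    [0.10, 0.00, 0.02],
    [0.20, 0.01, 0.05],
    [0.30, 0.02, 0.08],
])

# Smooth interpolators: Slerp for rotations, cubic spline for translations.
rot_interp = Slerp(key_times, key_rot)
trans_interp = CubicSpline(key_times, key_trans, axis=0)

def pose_at(t: float) -> np.ndarray:
    """Return a 4x4 camera-to-world matrix at an arbitrary event timestamp."""
    T = np.eye(4)
    T[:3, :3] = rot_interp(t).as_matrix()
    T[:3, 3] = trans_interp(t)
    return T

# Query a dense set of poses, e.g., one per accumulated event window.
event_times = np.linspace(0.0, 0.15, 16)
poses = np.stack([pose_at(t) for t in event_times])
print(poses.shape)  # (16, 4, 4)
```

Note that `Slerp` is only piecewise-linear on SO(3); a cumulative B-spline on SE(3) would give smoother (C²-continuous) trajectories, which may be closer to what the paper intends for very fast motion.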
