2410.02103.md

MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis

Recent works in volume rendering, *e.g.*, NeRF and 3D Gaussian Splatting (3DGS), have significantly advanced rendering quality and efficiency with the help of learned implicit neural radiance fields or 3D Gaussians. Rendering on top of an explicit representation, the vanilla 3DGS and its variants deliver real-time efficiency by optimizing the parametric model with single-view supervision per iteration during training, a scheme adopted from NeRF. Consequently, certain views are overfitted, leading to unsatisfying appearance in novel-view synthesis and imprecise 3D geometry. To solve the aforementioned problems, we propose a new 3DGS optimization method embodying four key novel contributions: 1) We transform the conventional single-view training paradigm into a multi-view training strategy. With our proposed multi-view regulation, 3D Gaussian attributes are further optimized without overfitting certain training views. As a general solution, we improve the overall accuracy in a variety of scenarios and across different Gaussian variants. 2) Inspired by the benefit introduced by additional views, we further propose a cross-intrinsic guidance scheme, leading to a coarse-to-fine training procedure over different resolutions. 3) Built on top of our multi-view regulated training, we further propose a cross-ray densification strategy, densifying more Gaussian kernels in the ray-intersection regions of a selection of views. 4) By further investigating the densification strategy, we find that the effect of densification should be enhanced when certain views differ dramatically. As a solution, we propose a novel multi-view augmented densification strategy, which encourages 3D Gaussians to densify to a sufficient number accordingly, resulting in improved reconstruction accuracy.
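The core change the abstract describes is replacing single-view supervision per iteration with joint supervision from several views. A minimal sketch of that training-loop change, with hypothetical `views`, `render`, and `loss_fn` placeholders standing in for the paper's actual renderer and loss (this is an illustration of the multi-view regulation idea, not the authors' implementation):

```python
import random

def multiview_step(views, render, loss_fn, m=4, rng=random):
    """One optimization step supervised by m views jointly.

    Vanilla 3DGS would sample a single view here; aggregating the
    loss over a batch of views regulates the Gaussian attributes so
    no single training view is overfitted.
    """
    batch = rng.sample(views, k=min(m, len(views)))     # pick m views
    losses = [loss_fn(render(v), v) for v in batch]     # per-view loss
    return sum(losses) / len(losses)                    # joint objective
```

In a real pipeline the returned scalar would be backpropagated through the differentiable rasterizer once per step, so every sampled view contributes gradients to the same set of Gaussian parameters.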
